Next Gen Image Compression in Sitecore

Spoiler: this post is not about Dianoga. Instead, I take a deep dive into Tiny PNG and Kraken.IO integrations with Sitecore. The results at the bottom are worth checking out.


At the start of the year I picked up where I left off: page speed. Last year I took a deep dive into improving the page speed of Sitecore SXA sites using some of Google’s recommended techniques for structuring the page. If you haven’t already seen it, head on over to Sitecore Speedy and see some of the results we achieved.

I’ll be the first to admit that getting really good page speed scores isn’t easy. It takes a lot of different factors coming together. As a reminder, here is the main checklist I consider you need to tick off to be winning at this game.

1) Introduce image lazy loading

2) Ensure a cache strategy is in place and verify it's working.

3) Dianoga is your friend for image compression

4) Use responsive images (must serve up smaller image sizes for mobile)

5) Introduce Critical CSS and deferred CSS files

6) Javascript is not a page speed friend. Defer Defer Defer

For this post, I’m going to look at an alternative to Dianoga. I’m a big fan of Dianoga and have used it over the years to crunch loads of oversized images introduced by content editors. I will, however, say that it can add complexity to deployments and CI/CD pipelines, and while some claim to have had success running it in Azure App Services, others have not.

On the flip side, content editors love Tiny PNG, one of the most popular image compression website utilities going around. Tiny PNG also has a developer API, so we have used it to build a compression tool that can be used directly from your Sitecore toolbar.

The button below is hooked up to talk to the Tiny PNG API. It sends across your image data and receives a compressed image back for storage.
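The module’s actual implementation lives in the GitHub repository linked further down. Purely as an illustration of how little is involved, a call to the Tiny PNG shrink endpoint from C# might look something like the sketch below (the API key is a placeholder and error handling is omitted):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class TinyPngClient
{
    // Assumption: in the module the key comes from a settings item in the CMS.
    private const string ApiKey = "YOUR_TINYPNG_API_KEY";

    public static async Task<byte[]> CompressAsync(byte[] imageBytes)
    {
        using (var client = new HttpClient())
        {
            // Tiny PNG uses HTTP basic auth: user name "api", password = your API key.
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("api:" + ApiKey));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // Post the raw image bytes to the shrink endpoint.
            var response = await client.PostAsync("https://api.tinify.com/shrink", new ByteArrayContent(imageBytes));
            response.EnsureSuccessStatusCode();

            // The compressed result is downloaded from the URL returned in the Location header.
            return await client.GetByteArrayAsync(response.Headers.Location);
        }
    }
}

In the module, the returned bytes are written back into the media item and the before/after sizes are recorded against the fields you nominate in the settings.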


Full disclosure: I’m not the first person to hook up Tiny PNG to the image library. I could find two other implementations.

One lets you run a PowerShell script to connect to the Tiny PNG API, and the other is a module that connects to the API on upload.


This implementation of the Tiny PNG API introduces the following differences:

  • A button in the CMS to crunch any single image.
  • A scheduled task that will process any image not already processed.
  • Error handling for when the API limits are reached
  • Logging that outlines which images were processed.
  • Before and After compression information stored in any Image field of choice.
  • A feature toggle to turn the whole feature on/off

All the source code is available at: https://github.com/Aceik/ImageCompression

Now let’s jump in and have a look at the results from crunching a few images down:

Without image compression:


To compress the images on the page, we head on over to the “Compress” button in the Media tab that we have introduced.


A few examples of compression results taken from homepage images:

Before: 158.4 KB | After: 110.6 KB

Before: 197.8 KB | After: 135.3 KB

Before: 640.0 KB | After: 120.7 KB

After compressing all the images on the page the saving can be seen below.


So our total image size saving is 2.4MB – 1.3MB = 1.1MB

A pretty decent saving from just pressing the compress button on 27 homepage images. Also, consider that the user won’t notice any difference in image quality, as this method uses Tiny PNG’s smart lossy compression, which is not visually noticeable.


The compression achieved is great for helping us tick off one of the requirements for fast pages with Google. But as we are about to find out, Google will likely still complain about two other criteria. When it comes to Google PageSpeed Insights, a page that does not have properly processed images will bring up the following three recommendations:

Here is a breakdown of how we address each one:

  1. Serve images in next-gen formats – Image formats like JPEG 2000, JPEG XR, and WebP often provide better compression than PNG or JPEG, which means faster downloads and less data consumption. Learn more.
  2. Properly Size images – Your CSS layouts should be responsive and use modern image retrieval techniques that adapt the image size requested based on screen size. Read More
  3. Efficiently encode images – The Tiny PNG integration above will take care of this. This is all about compressing the image to as small as it can get without a visible loss of quality.

So assuming you have already achieved number three using the Tiny PNG integration or another source, let us look at how we can solve the next-gen image requirement.

As a quick side note, the testing I did after converting the images to next-gen formats also ticked off item number two above. I don't think this should be relied on, however, and it's best to incorporate responsive images into your projects from the beginning.

When looking into how to convert images to a next-gen format, I opted to target WebP. Google has a nice little page explaining the format here.

WebP is natively supported in Google Chrome, Firefox, Edge, the Opera browser, and by many other tools and software libraries.

Once again I opted to look for an API that would provide the conversion for me, so that Sitecore could easily connect, send the image and then store the result, all without any extra hosting requirements. I went with the Kraken.IO image APIs as they have a free 100MB trial offer, and free is a good price when building proofs of concept. The integration is all available on Aceik’s GitHub repository. Just sign up for your own API keys, add them to the module settings (in the CMS) and start converting.
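Again purely as an illustration (the module in the repository handles this for you, and the exact option names are best confirmed against Kraken.IO’s API documentation), a WebP conversion request from C# might look roughly like this:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class KrakenWebpClient
{
    // Assumption: keys are read from the module settings item in the CMS.
    private const string ApiKey = "YOUR_KRAKEN_API_KEY";
    private const string ApiSecret = "YOUR_KRAKEN_API_SECRET";

    public static async Task<byte[]> ConvertToWebpAsync(string publicImageUrl)
    {
        using (var client = new HttpClient())
        {
            // Ask Kraken to fetch the image, convert it to webp and wait for the result.
            var body = new JObject
            {
                ["auth"] = new JObject { ["api_key"] = ApiKey, ["api_secret"] = ApiSecret },
                ["wait"] = true,
                ["webp"] = true,
                ["lossy"] = true,
                ["url"] = publicImageUrl
            };

            var response = await client.PostAsync("https://api.kraken.io/v1/url",
                new StringContent(body.ToString(), Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();

            // The response JSON contains a "kraked_url" pointing at the converted image.
            var json = JObject.Parse(await response.Content.ReadAsStringAsync());
            return await client.GetByteArrayAsync((string)json["kraked_url"]);
        }
    }
}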

To test out just how much this would impact the image payload size for the whole page, I once again converted all the images on the SXA habitat homepage.

Here are the results:


So our total image size saving is now 2.4MB – 0.79MB = 1.61MB

The reduction in size from a non-compressed image to a webp formatted image is truly impressive.

A few examples:


Conclusion

I can only conclude by saying that if page speed is really an important factor for your Sitecore project take a look at Tiny PNG. If you want to go next level with your image formats and achieve great compression try out the Kraken.IO API integration as it could be well worth the small subscription fee.


Results

Compression      | Total Image Size | Saving
None             | 2.4 MB           |
Tiny PNG         | 1.3 MB           | 1.1 MB
Kraken.IO (webp) | 0.79 MB          | 1.61 MB

Notes:

The module and code mentioned in this blog post are available on Aceik’s GitHub account. This also contains installation instructions.

GitHub: https://github.com/Aceik/ImageCompression

After installation, your content editors will simply be able to compress and convert images as needed from within the CMS.


The GitHub readme contains a rundown of the standard settings inside Sitecore.

Accessing the JSS Dictionary in C#

This is a quick post to guide developers through gaining access to the JSS Dictionary in the backend C# code.

Why would you want to be able to do this?

The reason we originally had to do this was that our JSS Angular application had editable content from the dictionary that we also wanted to access in C#. In our particular case, it was to inject the content into an email template that would be sent to the user. To save duplicating content it made sense for both the front end and C# to have access to the same dictionary.

Where do we start?

The following assumes you have a Sitecore instance with JSS installed and a JSS application you are working on. Grab your favourite de-compilation tool (I use ILSpy) and locate the following DLL in the bin folder of your running Sitecore instance:

Sitecore.JavaScriptServices.Globalization.dll

Once you have that open in ILSpy, search for DictionaryServiceController:

public class DictionaryServiceController : ApiController

The following method is what we want to use in our C# code:

public DictionaryServiceResult GetDictionary(string appName, string language)

It takes the unique application name (that belongs to your application) and the language (“en”) as parameters. As a result, you will get back a dictionary object that you can use to look up your content.

This is the controller that would normally be called via an API from the front end. So how do we call it from a normal C# service, for instance?

Firstly, the controller has a constructor that has three parameters that are injected via DI (Dependency Injection).

IConfigurationResolver configurationResolver, 
BaseLanguageManager languageManager, 
IApplicationDictionaryReader appDictionaryReader

Using ILSpy once again you can find that the above three parameters are all set up in the DI container via RegisterDependencies.cs in various JSS assemblies. The Controller itself is already registered in the DI Container as well, which is very handy.

If you have a look at showconfig.aspx in the admin tools you can see that a lot of the dependencies are registered via RegisterDependencies.cs

For example:

<configurator type="Sitecore.JavaScriptServices.AppServices.RegisterDependencies, Sitecore.JavaScriptServices.AppServices" patch:source="Sitecore.JavaScriptServices.AppServices.config"/>

Dependency injection is a whole other topic, so I will leave it to your personal preference as to how you achieve it. For the purposes of the following complete code example, I have used the Services attribute style of setup. If you want to keep consistency with Sitecore, you could also set up a RegisterDependencies.cs class of your own and use a patch file to kick it off.
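For completeness, a minimal sketch of that second option might look like the following (class and namespace names are placeholders), registered via a patch file containing a configurator node just like the Sitecore one shown above:

using Microsoft.Extensions.DependencyInjection;
using Sitecore.DependencyInjection;

namespace Sitecore.Foundation.JSS
{
    // Patched in via:
    // <sitecore><services>
    //   <configurator type="Sitecore.Foundation.JSS.RegisterDependencies, Sitecore.Foundation.JSS" />
    // </services></sitecore>
    public class RegisterDependencies : IServicesConfigurator
    {
        public void Configure(IServiceCollection serviceCollection)
        {
            // Mirrors the [Service] attribute registration used in the example service below.
            serviceCollection.AddTransient<Services.ITranslationService, Services.TranslationService>();
        }
    }
}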


Example Service:

using Sitecore.Foundation.DependencyInjection; // Borrowed from habitat
using Sitecore.Diagnostics;
using Sitecore.JavaScriptServices.Globalization.Controllers;

namespace Sitecore.Foundation.JSS.Services
{
    public interface ITranslationService
    {
        string TranslateKey(string key);
    }

    [Service(typeof(ITranslationService), Lifetime = Lifetime.Transient)] 
    public class TranslationService : ITranslationService
    {
        private readonly DictionaryServiceController _controller;
        
        public TranslationService(DictionaryServiceController controller)
        {
            this._controller = controller;
        }

        public string TranslateKey(string key)
        {
            var dictionary = GetDictionary();
            if (dictionary.phrases.ContainsKey(key))
                return dictionary.phrases[key];
            Log.Error($"Dictionary key {key} not found", this);
            return string.Empty;
        }

        private DictionaryServiceResult GetDictionary(string appName = "myAppName", string language = "en")
        {
           return _controller.GetDictionary(appName, language);
        }
    }
}

Above is a simple service that can be used from just about anywhere in your C# code.

Simply change the appName and language as required to access the correct JSS dictionary. Also, remember to publish your app dictionary to the web database or you may get no results.
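For example, a controller could take the service as a constructor dependency and reuse the same dictionary phrases when building the email mentioned earlier (the controller name and dictionary key here are made up for illustration):

using System.Web.Mvc;
using Sitecore.Foundation.JSS.Services;

namespace Sitecore.Foundation.JSS.Controllers
{
    public class EmailController : Controller
    {
        private readonly ITranslationService _translationService;

        // Resolved by the DI container the service was registered in.
        public EmailController(ITranslationService translationService)
        {
            _translationService = translationService;
        }

        public ActionResult Confirmation()
        {
            // The same phrase the JSS app renders, now available server side.
            var subject = _translationService.TranslateKey("email-confirmation-subject");
            return Content(subject);
        }
    }
}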


There we have it, accessing the JSS dictionary from C# in a nutshell. I hope this helps some other folks get this done quickly on JSS builds.

JSS – Some Key Takeaways

Introduction

In the following outline I will take you through some learning points that Aceik discovered on our first JSS project. Some of this is personal opinion and based on our experience with the Angular framework.

Disconnected Mode – Get started quickly

If your team is just starting out and you want to see JSS in action, you will likely start out running in disconnected mode. This seems like a great way to get non-Sitecore developers up and running without them needing to run an actual Sitecore instance. A key benefit is that front end developers on your team can work on the project without having any Sitecore skills.

It has been mentioned by Nick Wesselman in the global Sitecore Slack (Hi Nick, hope it’s OK if I quote you) that:

“I wrote the import process for JSS and I can tell you it was not intended for anything beyond quick start for front end devs and short lived campaign sites”

and  ….

“Sitecore-first was always the intended workflow for anything non-trivial”.

I have to be honest, I was disappointed when I read the above comments, as we did get a fair way into the project supporting both our front end devs and our backend devs. It does make sense, though, that disconnected mode has its limitations. You’re not going to get all the bells and whistles Sitecore provides without actually having Sitecore itself. Still, the opportunity to support developers who don’t know Sitecore is a big draw card, and I for one see disconnected mode as something that is very useful.

Disconnected Mode – Example usage

From personal experience building out a member portal in Angular, we were able to mock up secure API calls in disconnected mode by detecting which mode the app was running in. The workflow involved building individual components that were verified to be working in disconnected mode by front end developers. These same components were then verified in connected mode running in Sitecore. Some people may consider this to be double handling in some ways. I still think continuing to support front end developers has its advantages.

GraphQL Endpoints

GraphQL is a paradigm shift from traditional APIs in that you have a single API endpoint that you can run queries and mutations against to produce results and updates.

Custom GraphQL Endpoints

A couple of things that took us a while to figure out were:

  1. Adding multiple schemas to our single endpoint.
  2. Sending mutations with complex object structures (Nested POCOs)
    1. See Example Mutation Query
    2. See Example Variables that match the above query
    3. See Example Schema in C#

How to Turn on Mutations

If you need to send updates to the server, by convention in GraphQL you write these as mutations.

You won’t get far unless you actually enable mutations in your JSS app config. This might seem obvious but it took us a while to find an example and work it out.

See Example Line 128

            <mutations hint="raw:AddMutation">
              <mutation name="createItem" type="Sitecore.Services.GraphQL.Content.Mutations.CreateItemMutation, Sitecore.Services.GraphQL.Content" />
              <mutation name="updateItem" type="Sitecore.Services.GraphQL.Content.Mutations.UpdateItemMutation, Sitecore.Services.GraphQL.Content" />
            </mutations>

Secure vs Insecure Graphql Endpoints

Something we required in the case of our member portal project was custom GraphQL endpoints for logged in users and in some cases insecure endpoints for data that did not require an Authenticated user.

Essentially in Angular we solved this using multiple Apollo clients. A full example is available here: (along with detailed explanation)

https://sitecore.stackexchange.com/questions/22229/in-jss-how-do-i-support-both-secure-and-open-graphql-endpoints/22230#22230

Conclusion

That’s a wrap for some of our key JSS learnings so far. We may come back and add to these over time as we learn more. Happy JSS-ing !!

SXA Speedy – Supercharge your SXA Page Speed Scores in Google

We are excited to preview our latest Open Source module. Before jumping into the actual technical details here are some of the early results we are seeing against the Habitat SXA Demo.


Results:

Before:

After:
* Results based on Mobile Lighthouse Audit in chrome. 
* Results are based on a local developer machine. Production results usually incur an additional penalty due to network latency.

Want to know more about our latest open source SXA Sitecore module …. read on ….


I’m continually surprised by the number of new site launches that fail to implement Google’s recommendations for page speed. If you believe what Neil Patel has to say, this score is vitally important to SEO and your search ranking. At Aceik it’s one of the key benchmarks we use to measure the projects we launch and the projects we inherit and have to fix.

The main issue is often a fairly low mobile score; desktop tends to be easier to cater for. In particular, pick out any SXA project that you know has launched recently and, even with bundling properly turned on, it’s unlikely to get over 70/100 (mobile score). The majority we tried came in somewhere around the 50 to 60 out of 100 mark.

Getting that page score into the desired zone (which I would suggest is 90+) is not easy but here is a reasonable checklist to get close.

1) Introduce image lazy loading
2) Ensure a cache strategy is in place and verify it's working.
3) Dianoga is your friend for image compression
4) Use responsive images (must serve up smaller image sizes for mobile)
5) Introduce Critical CSS and deferred CSS files
6) Javascript is not a page speed friend. Defer Defer Defer

The last two items are the main topics that I believe are the hardest to get right. These are the focus of our new module.


Check out the GitHub repository.

I have also done an installation and usage video.

So how will the module help you get critical and JS defer right?

Deferred Javascript Load

For Javascript, it uses a deferred loading technique mentioned here. I attempted a few different techniques before finding this blog and the script he provides (closer to the bottom of the article) seems to get the best results.  It essentially incorporates some clever tactics (as mentioned in the article) that defer script load without compromising load order.

I also added in one more technique that I have found useful: using a cookie to detect a first or second-time visitor. Second-time visitors will naturally have all external resources cached locally, so we can provide a completely different loading experience on the second pass. It stands to reason that we only need to provide a deferred experience on the very first page load.
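As a rough sketch of that idea (the cookie name and expiry are arbitrary), a helper like the one below could be called from the layout to decide whether to render the deferred loading script or the plain bundles:

using System;
using System.Web;

public static class VisitorCookieHelper
{
    private const string CookieName = "returning-visitor"; // arbitrary name

    // True when the visitor has been here before and so most likely
    // already has the site's CSS/JS in their local browser cache.
    public static bool IsReturningVisitor(HttpContextBase context)
    {
        if (context.Request.Cookies[CookieName] != null)
        {
            return true;
        }

        // First visit: drop the cookie so the next page load can skip the deferred path.
        context.Response.Cookies.Add(new HttpCookie(CookieName, "1")
        {
            Expires = DateTime.UtcNow.AddDays(30)
        });
        return false;
    }
}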

Critical + Deferred CSS Load

For CSS we incorporated the Critical Viewport technique that has been recommended by Google for some time. This technique was mentioned in this previous blog post. Generating the Critical CSS is not something we want to be doing manually and there is an excellent gulp based package that does this for you.

It can require some intervention and tweaking of the Critical CSS once generated, but the Gulp scripts provided in the module do seek to address/automate this.

Our module adds a button to the Configure panel inside the Sitecore CMS, so content editors can trigger the re-generation of the Critical CSS whenever needed.

Generate Critical button added to Configure.

Local vs Production Scores

It’s also important to remember that the scores you achieve via Lighthouse built into Chrome on localhost and your non-public development servers can be vastly different than production. In fact, it’s probably safest to assume that non-production boxes give false positives in the region of 10 to 20 points. So it’s best to assume that your score on production will be a little worse than expected.

Conclusion

It’s a fair statement that you can’t just install the module and expect page load to be perfect in under 10 minutes. Achieving top page load speeds requires many technical things to work together. By ensuring that the previously mentioned checklist is done (adequate servers, Sitecore cache, image loading techniques) you are partway over the line. By introducing the deferred load techniques in the module (as recommended by Google) you should then be a step closer to a top score.

For more hints please see the Wiki on Github.

This module has been submitted to the Sitecore Marketplace and is awaiting approval.


Author: Thomas Tyack – Solutions Architect / Sitecore MVP 2019

Part 4: Instant profiling/personalisation

This is the last part in a four-part series on customising the Experience Profile. This final part covers instant personalisation.

You can view Part 1, Part 2, Part 3 via the respective links.

Essentially the story goes that as a marketer you sometimes already know things about your visitor before they reach your website. Posting an ad on Facebook or another social media channel is a great example of this. By enabling a custom code pipeline we can put that knowledge to use via a parameter on the inbound link from social media.

The solution involves a custom analytics pipeline that is executed before the page is loaded.
  • It can be activated by adding a query string on the end of any URL.
  • For example http://www.scooterriders-r-us.com.au?pr=scooterType&pa=fast
  • This would look in the “scooterType” profile and assign the key values for the “fast” pattern card.
  • We set it up so that a user can potentially be added to three different profiles from an inbound link: ?pr=scooterType&pa=fast&pr2=safety&pa2=none&pr3=incomebracket&pa3=budget
  • Some of the logic used was derived from this StackOverflow ticket.

Now for the Technical Implementation:

Test it out by:

  • Creating a profile with a pattern card (or several combinations)
  • Setup a page with a personalised content block that will change based on different Profile Pattern matches.
  • Creating a new inbound link to your page using the link parameters explained above.
  • Open a new incognito window so that a new user is simulated.
  • You should now see the correct personalisation for that user on the very first page load.
  • Confirm the pattern match inside the Experience Profile for that recent user. Note that if the Experience Profile is not set up to show anonymous users, this last step requires some configuration changes. This is mentioned in part 1.

Conclusion:

By adding a very simple customisation to Sitecore we can give marketers the ability to leverage social media to achieve personalisation very early on.

Part 3: External Tracking via FXM and Google Client ID

In this third part of our Experience Profile customisation series, we look at how we might integrate FXM into a third party website.  For the purposes of this blog, we assume the third party website is not built with Sitecore.

You can view Part 1, Part 2 and  Part 4 via the respective links.

A great example of where you might want to do this is if you link off to a third party shopping cart or payment gateway. In this particular scenario, you can use FXM to solve a few marketing requirements.

  • Pages Viewed: Track the pages the user views on an external site.
  • Session Merge: Continue to build the user’s Experience Profile and timeline.
  • Personalise content blocks in the checkout process.  Great for cross promotion.
  • Fire off goals at each step of the checkout process.
  • Fire off goals and outcomes once a purchase occurs.
Note: In the examples that follow we also show what to do in each scenario for single page applications. View the footnote for more details about how you might support these with regard to FXM.

So let’s now examine how each requirement can be solved.

Pages Viewed

Page views are a quick win, simply injecting the beacon will record the page view.

For a single page application, each time the screen changes you could use:

SCBeacon.trackEvent('Page visited')

Session Merge

If you inject the Beacon on page load you get some session merging functionality out of the box. If you have a look at the compatibility table for different browsers it’s worth noting that Safari support is limited.

Here is a potential workaround for this notable lack of Safari support:

  • Follow the instructions in Part 1 to identify a user via Google Client ID.
  • When linking to the external website pass through the Google Client ID (see part 1 for more details) as a URL parameter.
  • ?clientID=GA1.2.2129791670.1552388156
  • Initialise Google Analytics with the same Client ID. This can also be achieved by setting the Client ID on the page load event in the GTM container.
  • function getUrlVars(){var n={};window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi,function(r,e,i){n[e]=i});return n}
    ga('create', 'UA-XXXXX-Y', {
      'clientId': getUrlVars()["clientID"]
    });
  • Inject the FXM beacon
  • Set up a custom Page Event called “updateGoogleCid” in Sitecore.
  • Hook up a custom FXM processor that will merge the session.

The process above works for single page applications as well.

Trigger Goals

Out of the box triggering a goal is easily achieved by ‘page filter‘ or ‘capture a click action‘.

For single page applications, you can use the following API calls in your javascript.

SCBeacon.trackGoal('Checkout')

Trigger Goals and Outcomes on Purchase

Out of the box triggering an outcome is achieved via a ‘capture a click action‘.

For the purposes of checkout, you are likely to want to see the dollar value purchased for that particular user in the Experience Profile. In order to achieve this, you need to use the javascript API to pass through the dollar value.  Be sure to create an outcome in Sitecore called ‘Purchase Outcome’.

SCBeacon.trackOutcome("Purchase Outcome", {
    monetaryValue: revenue,
    xTransactionId: transactionId
});

A great tip that we received from the SBOS team in Australia was to trigger goals at checkout that had engagement value staggered according to the amount spent.

So, for example, you may have some javascript logic that looks like this:

if (revenue < 100) {
    SCBeacon.trackGoal('lowvalue');
} else if (revenue >= 100 && revenue < 500) {
    SCBeacon.trackGoal('midvalue');
} else if (revenue >= 500 && revenue < 1000) {
    SCBeacon.trackGoal('highvalue');
} else {
    SCBeacon.trackGoal('veryhighvalue');
}

For single page applications, you will need to use the javascript API.


 

Conclusion: In order to use FXM on any external website not built on Sitecore you need access to insert the Beacon code. If the external website is not a Single Page Application (also note some other limitations) you can use the FXM Experience Editor to achieve much of the desired functionality.

For those external websites containing Single page applications, ideally, you can also get access to either the GTM container or get the external website to insert some javascript for you. Using some clever javascript coding you can still record marketing events using the FXM javascript API. 

To continue reading, jump over to Part 4, where we cover a handy way to get personalisation working on the very first page load.


Footnote: Single Page Applications

It’s important to note that out of the box FXM does not support single page applications. Look a bit further under the hood, however, and you will realise that FXM includes a great Javascript API. After mentioning that, you might now be thinking that if it’s a third party website you’re unlikely to get access to the source in order to implement any API calls. At the end of the day, you’re going to need some sort of access to inject FXM in order to achieve any sort of integration.


This will likely place you in one of the following scenarios:

  1. Not a single page application, in which case you just need the external website to include the FXM beacon. (instructions)
    • This is by far the simplest scenario and happy days if you’re in this category.
  2. A single page application, with which you have access to make changes.
    • In this case, inject the FXM beacon on page load and use the Javascript API to trigger events, goals and outcomes.
  3. A single page application, with which you have no direct access to make changes, but can request changes to the GTM container.
    • In this case, a great backup is using the GTM container to inject the Beacon. You can then write custom javascript that uses javascript listeners to talk with the FXM API.
    • With some single page application frameworks (Angular, React, Vue) hooking into the existing javascript listeners will prove difficult. Your last remaining option may turn out to be inside the GTM container again. If the application is already sending back telemetry to Google Analytics, make good use of it. This could be achieved by either:
      • Writing a custom javascript snippet that looks for changes in Google’s dataLayer.
      • If events are configured directly in GTM, simply ask for changes to each event to include an FXM API call as well.
  4. If you’re unlucky and you have no access to make changes at all …. well …..
    •   shrug

 

 

 

Part 2: Experience Profile – Multi-site, Multi-domain tracking

This blog examines a way in which you can effectively track a user across multiple sites with different domains. It demonstrates how visitor data collected across different top-level domains can be merged together. Merged visitor data provides a much clearer picture of a user’s movements (Experience Profile timeline) when a brand consists of many different websites.

You can view Part 1, Part 3 and  Part 4 via the respective links.


Note: Tracking across sub-domains is a much easier task in Sitecore and is solved using the setting “Analytics.CookieDomain”. The problem this blog is solving is that different top-level domains are not able to identify and merge visits, because top-level domains cannot access each other’s analytics cookie (for very valid security reasons). The problem is summarised well here.


Why is this necessary?

To start off with let’s define the current problem that required this solution.

Problem Definition:

Given that two websites are hosted in Sitecore under unique top level domains. If a visitor goes to http://www.abc.com.au and then visits http://www.def.com.au the same visitor is not linked and the Experience Profile shows two different visitors.

Solution:

The solution Aceik came up with for this is best depicted with a simple diagram.

GlobalCookie

(download a bigger diagram)

The different technologies at play in the diagram include:

  1. Multiple Sitecore Sites
  2. Google Analytics or GTM
  3. IFrames – running javascript, using PostMessage, with domain security measures.
  4. Cookies – Top Level Domain

What is happening in Simple Terms?

Essentially a user visits any one of your Sitecore sites and we tell XDB how it can identify that user by using the assigned Google Client ID.

To do this we use a combination of IFrame message passing between a group of trusted sites and cookies that give us the ability to go back and get the original Client ID for reuse.

By reusing the same Google Client ID on all Sitecore sites we can produce a more complete user profile. One that will contain visit data from each website. The resulting timeline of events in the Experience Profile will now display a federated view across many domains.


A More Technical Explanation

This is a more detailed Technical explanation. Let’s break it down into the sequence of steps a user could trigger and the messages being passed around.

First visit to “def.com.au”:

  1. User navigates to “def.com.au”.
  2. Google analytics initializes on page load with a random Client ID (CID) assigned.
  3. Async JavaScript is used to inject an invisible IFrame into the page. The IFrame URL is pointed at “abc.com.au/cookie”.
  4. Once the IFrame has finished loading, JavaScript obtains the CID from Google and uses “PostMessage” to pass it through to the “Global CID Cookie”.
  5. The “Global CID Cookie” has no prior value set so is updated with the CID passed in.
  6. The cookie page responds using “PostMessage” and sends back the same CID passed in.
  7. JavaScript on the page “def.com.au” receives the message and stores the CID received in the “Domain CID Cookie”.
  8. JavaScript on the page triggers backend code that will identify the user against the CID and merge all visits for that user.

Later visit to “hij.com.au”

  1. User navigates to “hij.com.au”.
  2. Google Analytics initialises on page load with a random Client ID (CID) assigned.
  3. Async JavaScript is used to inject an invisible IFrame into the page. The IFrame URL is pointed at “abc.com.au/cookie”.
  4. Once the IFrame has finished loading, JavaScript obtains the CID from Google and uses “PostMessage” to pass it through to the “Global CID Cookie”.
  5. The “Global CID Cookie” has a prior value, so the CID passed in is not used or set.
  6. The cookie page responds using “PostMessage” and sends back the prior existing CID stored from the first visit to “def.com.au”.
  7. JavaScript on the page “hij.com.au” receives the message and stores the CID received in the “Domain CID Cookie”.
  8. JavaScript on the page triggers backend code that will identify the user against the CID.  The current visit data for  “hij.com.au” is merged with the previous visit data from “def.com.au”.
  9. JavaScript on the page updates the CID stored in Google Analytics so that no other actions use the newer CID generated by “hij.com.au”.
  10. When the page is refreshed the Google Analytics initialisation code checks the “Domain CID Cookie” and passes through the existing CID to google for continued use.

Last visit to “abc.com.au”

  1. Google Analytics initialises on page load, checks for the existence of the “Global CID Cookie” and passes through the existing CID to Google for continued use.
  2. Javascript on the page notices that “Global CID Cookie” is set but the “Domain CID Cookie” is not.
  3. JavaScript on the page triggers backend code that will identify the visit against the CID.  The current visit data for  “abc.com.au” is merged with the previous visit data from “def.com.au” and “hij.com.au”.

Second-time visits are an easy win

Once each domain has completed a first pass of checking for a Global CID the code is smart enough that it doesn’t need to repeat all these steps again.

The first pass sets a local domain cookie and this is used for quick access on each subsequent page load. The first pass is also coded in an async way so that it will not have an impact on page load times.

We can also set it up so that the code initialising the Google tracker is provided with the multi-domain Client ID straight away.

https://www.useloom.com/share/0b8c9f0a2fbc41c0baa1ae4db5e9ae6b

This loom video shows exactly how we set up the global page view event with two variables. The final variable runs custom javascript that will read our cookie to grab the Client ID.

The javascript to paste into the GTM variable is:

function() {
    var getGAPrior = function (name) {
        var value = "; " + document.cookie;
        var parts = value.split("; " + name + "=");
        if (parts.length == 2) return parts.pop().split(";").shift();
    };

    var localCookie = getGAPrior("LocalCidCookie");
    if (typeof localCookie !== "undefined") { return localCookie; }

    var globalCookie = getGAPrior("GlobalCidCookie");
    if (typeof globalCookie !== "undefined") { return globalCookie; }

    return null;
}

What about FXM ?

You could potentially solve this same problem using a customisation around FXM.  It may be possible for FXM to replace the need for IFrame communication.

In order to achieve this, you would need to write a customisation to the Beacon service that allowed a Google Client ID to be sent and received.

However, the main sticking point remains around the need to maintain a global storage area (Global Cookie) that is owned by the user.  Due to the browser limitations (noted by Sitecore) of passing cookies, I’m not entirely sure an FXM replacement will work.

Compare that with browser support for IFrame “PostMessage” and you can see why we travelled down one rabbit hole compared to another.

Reference: https://community.sitecore.net/developers/f/10/t/380


Conclusion

Every website visitor gets assigned a Google Analytics Client ID.  You can use this ID to identify a user in XDB very early on. For multi-site tracking, the Client ID supplied by Google comes in very handy.  By using some clever communication tactics between your sites you can merge all the users visit data into a single profile view. The resulting timeline of events in the Experience Profile should prove very useful to your marketing teams.

To continue reading, jump over to Part 3 where we cover off External Tracking via FXM and Google Client ID.

Top 5 Ways to Extend Sitecore HTML Cache

In 2017 I wrote a reasonably long post on all the different considerations a Sitecore Caching Strategy might cover.

Following on from that post, it’s time to share some custom HTML cache extensions that we at Aceik may incorporate into our projects. This countdown of custom settings has been collected from across the Sitecore community.

5) Vary By Personalisation

This one (in my opinion) is a must-have if you’re incorporating personalisation on your homepage.

I admit it will only be effective up to a certain number of content variations, and even then it needs to be used with caution. Still, if you can save your server and databases from getting hit and help keep Time to First Byte (TTFB) low, it’s always worth it.

Please note that if you’re displaying a customised message with data that is only relevant to that user, the number of variations may not make it worthwhile. On the other hand, if you’re showing variations based on a handful of profile card match rules, we found it to be fairly effective.

Code Sample: ApplyVaryByPeronalizedDatasource.cs

Credits: Ahmed Okour

4) Vary By Resolution

It’s a fairly common scenario that we want to display a different image based on the user’s screen size. So it stands to reason that we would need a way to differentiate this when it comes to caching our renderings.

The particular implementation was used in combination with Scott Mulligan’s Sitecore Adaptive Image Library.

The Adaptive Image Library stores the user’s screen resolution via a cookie set in the front end razor/javascript:

document.cookie = '@Sitecore.Configuration.Settings.GetSetting("resolutionCookieName") =' + Math.max(screen.width, screen.height) + '; path=/';
  • The first time around if no cookie is set it uses the default largest image size as the cache key.
  • If the cookie is set the cache incorporates the screen resolution.
args.CacheKey += "_#resolution:" + AdaptiveMediaProvider.GetScreenResolution();

Code 1:  ApplyVaryByResolution.cs

Code 2: AdaptiveMediaProvider.cs 

Credit:  Dadi Zhao

3) Vary By Timeout

This one’s a little different: it requires not only a new checkbox but also a new “Single-Line Text” field that allows you to enter a timeout value. The idea, as you might have guessed, is for the rendering cache to expire after a certain amount of time.

Code 1: ApplyVaryByTimeout.cs

Credit: Dylan Young

2) Vary By Url

An oldy but a goody. I’m a little surprised this one just hasn’t made it into the out of the box product. On the other hand, I can see how it could be overused if you don’t understand the context it applies to. Essentially you can take either the Context Item ID or the raw URL and make your rendering cache vary based on that key.

A good use case for this setting could be for navigation that requires the current page to always be highlighted.

Code 1: ApplyVaryByURLCaching.cs  (Context Item ID formula)

Code 2: ApplyVaryByRawUrlCaching.cs (Raw URL formula)

Credit:  The 10 other people that have blogged about this over the years.

1) Vary By Website

Given Sitecore is an Enterprise content management system we often see multi-site implementations launched on the platform. It makes sense then that you have an option to cache renderings that don’t change all that much on one site but have different content on another.

Example Usage: A global navigation used across all sites that requires some content for the context site to show differently.

Code: ApplyVaryByWebsite.cs

Credit: Younes van Ruth
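Before moving on, here is a minimal sketch of the shape most of these extensions share (field, class and key names are illustrative only; the linked code files above are the real implementations). The common community approach is to subclass Sitecore’s GenerateCacheKey processor and patch it into the mvc.renderRendering pipeline in place of the original:

using Sitecore.Mvc.Pipelines.Response.RenderRendering;
using Sitecore.Mvc.Presentation;

namespace Foundation.Caching.Pipelines.RenderRendering
{
    public class ApplyVaryByWebsite : GenerateCacheKey
    {
        protected override string GenerateKey(Rendering rendering, RenderRenderingArgs args)
        {
            var key = base.GenerateKey(rendering, args);

            // "VaryByWebsite" is the custom checkbox added to the caching template (see the footnote below).
            if (!string.IsNullOrEmpty(key)
                && rendering.RenderingItem != null
                && rendering.RenderingItem.InnerItem["VaryByWebsite"] == "1"
                && Sitecore.Context.Site != null)
            {
                key += "_#website:" + Sitecore.Context.Site.Name;
            }

            return key;
        }
    }
}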

That rounds out the countdown of some of the top ways to extend Sitecore’s out of the box rendering cache. Your renderings will likely use a combination of these settings in order to achieve adequate caching coverage.

For a better idea of how you might add the top 5 above into Sitecore, please see the technical footnote below.



 

Technical Footnote:

All these extensions will add an extra checkbox in the Rendering cache tab within Sitecore.


In order for this check box to show up you need to add your custom checkbox fields to the template:

/sitecore/templates/System/Layout/Sections/Caching

You can achieve this in several ways, and there are a lot of other blogs online that describe how to add these custom checkboxes, so I won’t go into a deep dive here.

With regard to the Helix architecture, let’s outline one way you could set this up. Aceik has a module in the foundation layer that has all the custom cache checkboxes added to a single template (serialized in Unicorn within that module). The system template above is then made to inherit from your custom template in order to pick up the custom caching fields.


Part 1 – Experience Profile – Identify Users Early

This is the first part of a four-part blog series where I will introduce some XDB customisation that could be of use on your next Sitecore project. All these customisations relate back to the Experience Profile and identifying the user.

Part 1: We introduce the concept of early profile identification; there is no such thing as an anonymous user.

Part 2: We dive into the world of multi-site, multi-domain tracking. How to implement a global-common cookie for all your brand’s sites.

Part 3: External Tracking via FXM and Google Client ID – How to continue tracking a user on another website (not hosted in Sitecore).  (Release TBC)

Part 4: How to achieve instant personalisation on the very first page load. We can make use of the fact that inbound links from a social stream can already identify a user’s demographic.


 

Part 1:  Identify Users Early

Some parts of this blog have not been updated for Sitecore 9 yet; other parts have.

In part one I’m going to talk about identifying your visitors as early on in the visit as possible.

By identifying users, I am referring to allowing visitors to show up in the Experience Profile.


By default, Sitecore will not track every single anonymous user that reaches your site.  In order to get them to show up in the Experience Profile, a few quick changes are necessary.

One of the simplest ways to do this is using either WFFM or the new Forms components in Sitecore 9. In fact, the setup hasn’t changed all that much between the versions for this particular use case.

Sitecore 9:  Setup forms save actions

Sitecore 8:  Setup save action in WFFM

You can use these save actions with any forms on your website that collect personal details. The identification of a user should happen when a user submits a form that contains personal details. This is a well documented OOTB forms feature that you can set up without any developer intervention.

Another way to identify a user is to do so programmatically by updating the contacts facet details and then calling the identify method on the tracker.

Sitecore 9:  (reference)

Sitecore.Analytics.Tracker.Current.Session.IdentifyAs("sitecoreextranet", "identifier");

Sitecore 8:

Sitecore.Analytics.Tracker.Current.Session.Identify("identifier")
A side by side code example is available here. 

Calling the above line of code with a string identifier associates the visit data with that identifier. If the user visits again on a different device or tracked website and you are able to call the same line of code with the exact same identifier, the visitor’s data will be merged into a single Experience Profile record.

Taking the above concept a little bit further, we can also track users across non-Sitecore based websites. By using some Google smarts and injecting the FXM beacon onto a third party website, we can continue to track the user, including page visits, goals and outcomes. (This is covered in more detail in Part 3.)


 

No User is Anonymous

Given that we can choose when a user should be identified and displayed in the Experience Profile, it’s time to introduce the concept that no user is anonymous. In fact, this is true for the majority of websites in existence, if they use Google Analytics.

Google assigns an identifier called the Client ID to each visitor that comes along to your website.  The Client ID is stored in the GA cookie and has an expiration date of 2 years after creation.

Note: Google also has a concept of a User ID that is used to track sessions across devices. The difference is that each website must send this value to Google in order for it to be used. In reality, this is going to be most relevant if you only want to identify users in Sitecore once they have authenticated.

We can use Google’s Client ID to allow the user to show up in the Experience Profile as early as is necessary.

To do this setup the following:

  1. Read the Client ID via JavaScript
    • if (typeof ga !== "undefined"){
          cid = ga.getAll()[0].get("clientId");
      }
  2. Send the Client ID to XDB / XConnect via async javascript.  (Github Reference)
    • if (typeof cid !== "undefined") {	
      	var setEventPath = '/api/xdb/Analytics/TriggerEvent/Event/?eventName=updateGoogleCid&data=' + cid;
              $.ajax({
      		type: 'POST',
      		url: setEventPath,
      		dataType: 'json',
      		success: function (json) {
      		      setCookie(cookieName, cid, 1);
      		},
      		error: function () {
      		      console.warn("An error occurred triggering the event");
      		}
      	});
      }
    • The above code assumes a custom controller was set up to trigger goals via Ajax/Javascript.  (Github Reference) A stripped-back sketch of the identification part of this controller follows below.
  3. Identify the user (See code examples mentioned earlier or look at our example controller)
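As flagged above, here is a stripped-back, hypothetical version of that controller action (the "google" source name is arbitrary, and the goal/page event handling from the real controller on GitHub is omitted):

using System.Web.Mvc;
using Sitecore.Analytics;

namespace Foundation.Analytics.Controllers
{
    public class AnalyticsController : Controller
    {
        // Simplified stand-in for the controller behind /api/xdb/Analytics/TriggerEvent/Event/
        [HttpPost]
        public ActionResult TriggerEvent(string eventName, string data)
        {
            if (eventName == "updateGoogleCid" && !string.IsNullOrEmpty(data) && Tracker.Current != null)
            {
                // Identify the contact using the Google Client ID as the identifier (Sitecore 9 API).
                Tracker.Current.Session.IdentifyAs("google", data);
            }

            return Json(new { success = true });
        }
    }
}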

Note 1: The above code only needs to be triggered once per visitor. To save it running multiple times you can assign a cookie to the user. By checking whether the cookie has been set you can prevent the above process from running more times than necessary.

Note 2: In a single site environment you may choose to leave this identification until a certain amount of visit data has been collected. For example, writing some logic to check that the user has achieved a certain number of goals. This will prevent users with little or no data showing up in the Experience Profile.

Note 3: In a multi-site environment the opposite of note 2 becomes necessary. With visitors hitting multiple sites you need to identify them as early as possible. The main reason is that you want any visit data collected from the current website merged with visit data from any other site visits. The resulting merged data provides a great overview of a user’s movements. This is discussed further in part 2, where we look into multi-site XDB visitor identification using a global common cookie.


Experience Profile – First Name, Last Name

This next step is optional. Given that you have identified the user, they will now show in the Experience Profile. At this point you may not have a first and last name for that visitor. As an alternative, you could split the Client ID into two numbers and use them as the initial values for the first and last name. If the user completes a newsletter signup, logs in or makes an inquiry at a later time, that is a good opportunity to update these to the correct values.


(Github Reference)


 

Part 1: Conclusion

We have demonstrated above how you can identify a user so that they show up in the Experience Profile. As part of this, we have looked at how you could use the Client ID from Google to identify the user as early on as you like, potentially on the very first page load. In part 2 of this series on Experience Profile customisations we take a look at how to track users across multiple top level domains.

Sitecore Page Speed: Part 3: Eliminate JS Loading Time

In part 1 & part 2 of our Sitecore page speed blog, we covered off:

  • The Google Page Speed Insights tool.
  • We looked at a node tool called critical that could generate above the fold (critical viewport) CSS code that is minified.
  • We referenced the way in which Google recommends deferring CSS loading.
  • We showed a way to integrate “Above The Fold” CSS into a Helix based project and achieve a page free of render blocking CSS.

In this 3rd part of the series, we will introduce a way to defer the load of all external javascript assets (async).

A reminder that I have committed the sample code for this blog into a fork of the helix habitat example project. You can find the sample here. For a direct comparison of changes made to achieve these page load enhancements, view a side by side comparison here.

Dynamic JS loading Installation Steps:

  1. Inside Sitecore, add a new view rendering that references the file /Views/Common/Assets/Scripts-3.2.1.cshtml
    • Note down the ID of this rendering and replace the ID of the rendering in the next step.
  2. Update the Default.cshtml layout to include a new cached rendering:
     @*Scripts Legacy Jquery jquery-3.2.1 *@
     @Html.Sitecore().CachedRendering("{B0DD36CE-EE4A-4D01-9986-7BEF114196DD}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_bottom_scripts" })
    • cacheKey = something unique that identifies the page. You could use the Sitecore context item ID or path, for example, as sketched below.
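As a minimal example, the cacheKey variable could be set at the top of Default.cshtml like this (using the context item ID):

@{
    // Unique per page: the context item ID keeps the cached script rendering separate for each page.
    var cacheKey = Sitecore.Context.Item != null ? Sitecore.Context.Item.ID.ToString() : "default";
}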

Explanation:

The rendering Scripts-3.2.1.cshtml will render out the following javascript onto the page:

var scriptsToLoad = ['//cdnjs.cloudflare.com/ajax/libs/modernizr/2.8.3/modernizr.min.js','//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js','/assets/js/slick.min.js','/assets/js/global.js','/assets/js/Script.js','/assets-legacy/js/lib/lazyload.min.js'];
<script src="/assets-legacy/js/lib/jquery-3.2.1.min.js" async defer></script>
  • First of all, it prints out a JS array of all the scripts that this page requires.
    • This is the array of JS files that comes from Themes and Page Assets inside the CMS. If you are familiar with Habitat Helix this list can be content managed inside the CMS.
  • It then instructs the jquery library to be loaded async (which will not block the network download of the page response).
  • Once jquery is loaded, this modified version of jquery contains some code at the end that will read in the list of scripts dynamically and apply them to the page.
    • This is achieved with fairly simple AJAX load calls to the script URLs.

Outcome:

Once integrated successfully you will end up with a page that does not contain any blocking JS network calls.  The Google Page Speed tool should give you a nice score boost for your achievement in reducing initial load time.


Hints and Tips:

Bootstrapping JQuery Code:

  • jquery Document.Ready() function calls may not fire inside dynamically loaded JS files. This is because the JS file is loaded after DOM is ready and it’s too late for the Document.Ready() event at this stage.
  • As a workaround, you could code your JS files to bootstrap on either Document.Ready() or whenever $ is not undefined.
  • In the case of dynamic loading in this manner, because jquery was loaded first, $ should not be undefined and your code should be bootstrapped successfully.

Debugging in chrome:

  • When dynamically loading JS files they may strangely not appear in the chrome console debugger as you would normally expect.
  • The workaround for this is to add a comment to the top of each JS library
  • //# sourceURL=global.js
  • This will cause the chrome debugger to list the file in the source tab under the “(no domain)” heading.
  • You will then be able to debug the file as per normal.