
Part 4: Instant profiling/personalisation

This is the last part in a four-part series on Customising the Experience Profile. This final part covers Instant Personalisation.

You can view Part 1, Part 2, Part 3 via the respective links.

Essentially the story goes that as a marketer you sometimes already know things about your visitor before they reach your website. Posting an ad on Facebook or other social media channels is a great example of this. By adding a custom code pipeline we can act on that knowledge via a parameter on the inbound link from social media.

The solution involves a custom analytics pipeline that is executed before the page is loaded.
  • It can be activated by adding a query string on the end of any URL.
  • For example http://www.scooterriders-r-us.com.au?pr=scooterType&pa=fast
  • This would look in the “scooterType” profile and assign the key values for the “fast” pattern card.
  • We set it up so that a user can potentially be added to three different profiles from a single inbound link: ?pr=scooterType&pa=fast&pr2=safety&pa2=none&pr3=incomebracket&pa3=budget
  • Some of the logic used was derived from this StackOverflow ticket.
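To illustrate the link format, the profile/pattern pairs could be parsed along these lines. This is a client-side sketch for illustration only; the actual implementation described above runs in a server-side Sitecore pipeline, and the function name is hypothetical.

```javascript
// Hypothetical helper: extract the profile/pattern pairs (pr/pa, pr2/pa2, pr3/pa3)
// from a query string such as "?pr=scooterType&pa=fast&pr2=safety&pa2=none".
function parseProfilePairs(queryString) {
  var params = {};
  queryString.replace(/^\?/, '').split('&').forEach(function (pair) {
    var parts = pair.split('=');
    if (parts.length === 2) {
      params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1]);
    }
  });
  var pairs = [];
  // Look for up to three profile/pattern pairs, matching the link format above.
  ['', '2', '3'].forEach(function (suffix) {
    var profile = params['pr' + suffix];
    var pattern = params['pa' + suffix];
    if (profile && pattern) {
      pairs.push({ profile: profile, pattern: pattern });
    }
  });
  return pairs;
}
```

Each resulting pair would then be used to look up the profile item and assign the key values from the matching pattern card.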

Now for the Technical Implementation:

Test it out by:

  • Create a profile with a pattern card (or several combinations).
  • Set up a page with a personalised content block that will change based on different Profile Pattern matches.
  • Create a new inbound link to your page using the link parameters explained above.
  • Open a new incognito window so that a new user is simulated.
  • You should now see the correct personalisation for that user on the very first page load.
  • Confirm the pattern match inside the Experience Profile for that recent user.  Note that if the Experience Profile is not set up to show anonymous users, this last step requires some configuration changes.  This is mentioned in Part 1.

Conclusion:

By adding a very simple customisation to Sitecore we can give marketers the ability to leverage social media to achieve personalisation very early on.


Part 3: External Tracking via FXM and Google Client ID

In this third part of our Experience Profile customisation series, we look at how we might integrate FXM into a third-party website. For the purposes of this blog, we assume the third-party website is not built with Sitecore.

You can view Part 1, Part 2 and Part 4 via the respective links.

A great example of where you might want to do this is if you link off to a third party shopping cart or payment gateway. In this particular scenario, you can use FXM to solve a few marketing requirements.

  • Pages Viewed: Track the pages the user views on an external site.
  • Session Merge: Continue to build the user’s Experience Profile and timeline.
  • Personalise content blocks in the checkout process.  Great for cross promotion.
  • Fire off goals at each step of the checkout process.
  • Fire off goals and outcomes once a purchase occurs.
Note: In the examples that follow we also show what to do in each scenario for a single page application. View the footnote for more details about how you might support these with regard to FXM.

So let’s now examine how each requirement can be solved.

Pages Viewed

Page views are a quick win: simply injecting the beacon will record the page view.

For a single page application, each time the screen changes you could use:

SCBeacon.trackEvent('Page visited')

Session Merge

If you inject the Beacon on page load you get some session merging functionality out of the box. If you have a look at the compatibility table for different browsers it’s worth noting that Safari support is limited.

Here is a potential workaround for this notable lack of Safari support:

  • Follow the instructions in Part 1 to identify a user via Google Client ID.
  • When linking to the external website pass through the Google Client ID (see part 1 for more details) as a URL parameter.
  • ?clientID=GA1.2.2129791670.1552388156
  • Initialise google analytics with the same Client ID.  This can also be achieved by setting the Client ID on the page load event in the GTM container.
  • function getUrlVars() {
        var vars = {};
        window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi, function (match, key, value) {
          vars[key] = value;
        });
        return vars;
    }

    ga('create', 'UA-XXXXX-Y', {
      'clientId': getUrlVars()["clientID"]
    });
  • Inject the FXM beacon
  • Setup a custom Page Event called “updateGoogleCid” in Sitecore.
  • Hook up a custom FXM processor that will merge the session.

The process above works for single page applications as well.
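The hand-off described above (passing the Client ID on the outbound link) could be sketched like this. The helper name is hypothetical, and the clientID parameter name matches the example URL earlier in this section.

```javascript
// Hypothetical helper: append the visitor's Google Client ID to an outbound
// link so the external site can initialise GA with the same ID.
function appendClientId(url, clientId) {
  // Use '?' for the first parameter, '&' if a query string already exists.
  var separator = url.indexOf('?') === -1 ? '?' : '&';
  return url + separator + 'clientID=' + encodeURIComponent(clientId);
}

// Usage sketch (assumes the GA library is loaded on the page):
// var cid = ga.getAll()[0].get('clientId');
// link.href = appendClientId(link.href, cid);
```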

Trigger Goals

Out of the box, triggering a goal is easily achieved by ‘page filter’ or ‘capture a click action’.

For single page applications, you can use the following API calls in your javascript.

SCBeacon.trackGoal('Checkout')

Trigger Goals and Outcomes on Purchase

Out of the box, triggering an outcome is achieved via a ‘capture a click action’.

For the purposes of checkout, you are likely to want to see the dollar value purchased for that particular user in the Experience Profile. In order to achieve this, you need to use the javascript API to pass through the dollar value.  Be sure to create an outcome in Sitecore called ‘Purchase Outcome’.

SCBeacon.trackOutcome("Purchase Outcome", 
{ 
monetaryValue: revenue, 
xTransactionId: transactionId
});

A great tip that we received from the SBOS team in Australia was to trigger goals at checkout that had engagement value staggered according to the amount spent.

So, for example, you may have some javascript logic that looks like this:

if (revenue <= 99) {
    SCBeacon.trackGoal('lowvalue')
} else if (revenue >= 100 && revenue < 500) {
    SCBeacon.trackGoal('midvalue')
} else if (revenue >= 500 && revenue < 1000) {
    SCBeacon.trackGoal('highvalue')
} else {
    SCBeacon.trackGoal('veryhighvalue')
}

For single page applications, you will need to use the javascript API.

Conclusion: In order to use FXM on any external website not built on Sitecore you need access to insert the Beacon code. If the external website is not a Single Page Application (also note some other limitations) you can use the FXM Experience Editor to achieve much of the desired functionality.

For those external websites containing single page applications, ideally you can get access to either the GTM container or have the external website insert some javascript for you. With some clever javascript coding you can still record marketing events using the FXM javascript API.

To continue reading, jump over to Part 4, where we cover off a handy way to get personalisation working on the very first page load.


Footnote: Single Page Applications

It’s important to note that out of the box FXM does not support single page applications. Look a bit further under the hood, however, and you will realise that FXM includes a great Javascript API. You might now be thinking that if it’s a third-party website you’re unlikely to get access to the source in order to implement any API calls. At the end of the day, you’re going to need some sort of access to inject FXM in order to achieve any sort of integration.

This will likely place you in one of the following scenarios:

  1. Not a single page application, in which case you just need the external website to include the FXM beacon. (instructions)
    • This is by far the simplest scenario and happy days if you’re in this category.
  2. A single page application, with which you have access to make changes.
    • In this case, inject the FXM beacon on page load and use the Javascript API to trigger events, goals and outcomes.
  3. A single page application, with which you have no direct access to make changes, but can request changes to the GTM container.
    • In this case, a great backup is using the GTM container to inject the Beacon. You can then write custom javascript that uses javascript listeners to talk with the FXM API.
    • With some single page application frameworks (Angular, React, Vue) hooking into the existing javascript listeners will prove difficult. Your last remaining option may turn out to be inside the GTM container again. If the application is already sending back telemetry to Google Analytics, make good use of it. This could be achieved by either:
      • Writing a custom javascript snippet that looks for changes in Google’s dataLayer.
      • If events are configured directly in GTM, simply ask for changes to each event to include an FXM API call as well.
  4. If you’re unlucky and you have no access to make changes at all …. well …..
    •   shrug
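The dataLayer approach in scenario 3 could be sketched as follows. This is a hypothetical snippet, not the actual GTM tag: the function name is invented, and the real version would need to filter for the specific GA events you care about rather than forwarding everything.

```javascript
// Hypothetical sketch: wrap dataLayer.push so that GA events pushed by the
// single page application are mirrored to the FXM javascript API.
function mirrorDataLayerToFxm(dataLayer, beacon) {
  var originalPush = dataLayer.push.bind(dataLayer);
  dataLayer.push = function (entry) {
    // Forward anything that looks like a GA event to FXM as a page event.
    if (entry && entry.event) {
      beacon.trackEvent(entry.event);
    }
    // Preserve normal dataLayer behaviour.
    return originalPush(entry);
  };
}
```

Injected via a GTM custom HTML tag, this could be wired up with something like mirrorDataLayerToFxm(window.dataLayer, SCBeacon) once the FXM beacon has loaded.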



Part 2: Experience Profile – Multi-site, Multi-domain tracking

This blog examines a way in which you can effectively track a user across multiple sites with different domains. It demonstrates how visitor data collected across different top-level domains can be merged together. Merged visitor data provides a much clearer picture of a user’s movements (Experience Profile timeline) when a brand consists of many different websites.

You can view Part 1, Part 3 and Part 4 via the respective links.


Note:  Tracking across sub-domains is a much easier task in Sitecore and is solved using the setting “Analytics.CookieDomain”. The problem this blog is solving is that top-level domains are unable to identify and merge visits. This is due to the fact that top-level domains cannot access each other’s analytics cookie (for very valid security reasons). The problem is summarised well here.


Why is this necessary?

To start off with let’s define the current problem that required this solution.

Problem Definition:

Given that two websites are hosted in Sitecore under unique top-level domains: if a visitor goes to http://www.abc.com.au and then visits http://www.def.com.au, the visits are not linked and the Experience Profile shows two different visitors.

Solution:

The solution Aceik came up with for this is best depicted with a simple diagram.

GlobalCookie

(download a bigger diagram)

The different technologies at play in the diagram include:

  1. Multiple Sitecore Sites
  2. Google Analytics or GTM
  3. IFrames – running javascript, using PostMessage, with domain security measures.
  4. Cookies – Top Level Domain

What is happening in Simple Terms?

Essentially a user visits any one of your Sitecore sites and we tell XDB how it can identify that user by using the assigned Google Client ID.

To do this we use a combination of IFrame message passing between a group of trusted sites and cookies that give us the ability to go back and get the original Client ID for reuse.

By reusing the same Google Client ID on all Sitecore sites we can produce a more complete user profile. One that will contain visit data from each website. The resulting timeline of events in the Experience Profile will now display a federated view across many domains.


A More Technical Explanation

This is a more detailed Technical explanation. Let’s break it down into the sequence of steps a user could trigger and the messages being passed around.

First visit to “def.com.au”:

  1. User navigates to “def.com.au”.
  2. Google analytics initializes on page load with a random Client ID (CID) assigned.
  3. Async JavaScript is used to inject an invisible IFrame into the page. The IFrame URL is pointed at “abc.com.au/cookie”.
  4. Once the IFrame has completed loading, JavaScript obtains the CID from Google and uses “PostMessage” to pass it through to the “Global CID Cookie” page.
  5. The “Global CID Cookie” has no prior value set so is updated with the CID passed in.
  6. The cookie page responds using “PostMessage” and sends back the same CID passed in.
  7. JavaScript on the page “def.com.au” receives the message and stores the CID received in the “Domain CID Cookie”.
  8. JavaScript on the page triggers backend code that will identify the user against the CID and merge all visits for that user.
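The cookie page’s decision in steps 5–6 (keep the existing global CID if present, otherwise adopt the incoming one) can be sketched as a pure function. The names here are illustrative; the real page also has PostMessage plumbing and origin checks around this logic.

```javascript
// Hypothetical sketch of the "Global CID Cookie" page logic: given the CID
// currently stored in the global cookie (or null) and the CID posted in by
// the visiting site, decide which CID to store and send back.
function resolveGlobalCid(storedCid, incomingCid) {
  if (storedCid) {
    // A prior value exists: the incoming CID is ignored (later visits).
    return { cidToUse: storedCid, updateCookie: false };
  }
  // No prior value: adopt the incoming CID (first visit across the brand).
  return { cidToUse: incomingCid, updateCookie: true };
}
```

Whatever cidToUse comes back is what gets stored in the visiting site’s “Domain CID Cookie” and used to identify the contact in XDB.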

Later visit to “hij.com.au”

  1. User navigates to “hij.com.au”.
  2. Google analytics initialises on page load with a random Client ID (CID) assigned.
  3. Async JavaScript is used to inject an invisible IFrame into the page. The IFrame URL is pointed at “abc.com.au/cookie”.
  4. Once the IFrame has completed loading, JavaScript obtains the CID from Google and uses “PostMessage” to pass it through to the “Global CID Cookie” page.
  5. The “Global CID Cookie” has a prior value, so the CID passed in is not used or set.
  6. The cookie page responds using “PostMessage” and sends back the prior existing CID stored from the first visit to “def.com.au”.
  7. JavaScript on the page “hij.com.au” receives the message and stores the CID received in the “Domain CID Cookie”.
  8. JavaScript on the page triggers backend code that will identify the user against the CID.  The current visit data for  “hij.com.au” is merged with the previous visit data from “def.com.au”.
  9. JavaScript on the page updates the CID stored in Google Analytics so that no other actions use the newer CID generated by “hij.com.au”.
  10. When the page is refreshed the Google Analytics initialisation code checks the “Domain CID Cookie” and passes through the existing CID to google for continued use.

Last visit to “abc.com.au”

  1. Google analytics initialises on page load, checks for the existence of the “Global CID Cookie” and passes through the existing CID to Google for continued use.
  2. Javascript on the page notices that “Global CID Cookie” is set but the “Domain CID Cookie” is not.
  3. JavaScript on the page triggers backend code that will identify the visit against the CID.  The current visit data for  “abc.com.au” is merged with the previous visit data from “def.com.au” and “hij.com.au”.

Second-time visits are an easy win

Once each domain has completed a first pass of checking for a Global CID the code is smart enough that it doesn’t need to repeat all these steps again.

The first pass sets a local domain cookie and this is used for quick access on each subsequent page load. The first pass is also coded in an async way so that it will not have an impact on page load times.

We can also set it up so that the code initialising the Google tracker is provided with the multi-domain Client ID straight away.

https://www.useloom.com/share/0b8c9f0a2fbc41c0baa1ae4db5e9ae6b

This loom video shows exactly how we set up the global page view event with two variables. The final variable runs custom javascript that will read our cookie to grab the Client ID.

The javascript to paste into the GTM variable is:

function () {
  var getGAPrior = function (name) {
    var value = "; " + document.cookie;
    var parts = value.split("; " + name + "=");
    if (parts.length == 2) return parts.pop().split(";").shift();
  }

  var localCookie = getGAPrior("LocalCidCookie");
  if (typeof localCookie !== "undefined") { return localCookie; }

  var globalCookie = getGAPrior("GlobalCidCookie");
  if (typeof globalCookie !== "undefined") { return globalCookie; }

  return null;
}

What about FXM ?

You could potentially solve this same problem using a customisation around FXM.  It may be possible for FXM to replace the need for IFrame communication.

In order to achieve this, you would need to write a customisation to the Beacon service that allowed a Google Client ID to be sent and received.

However, the main sticking point remains around the need to maintain a global storage area (Global Cookie) that is owned by the user.  Due to the browser limitations (noted by Sitecore) of passing cookies, I’m not entirely sure an FXM replacement will work.

Compare that with browser support for IFrame “PostMessage” and you can see why we travelled down one rabbit hole compared to another.

Reference: https://community.sitecore.net/developers/f/10/t/380


Conclusion

Every website visitor gets assigned a Google Analytics Client ID.  You can use this ID to identify a user in XDB very early on. For multi-site tracking, the Client ID supplied by Google comes in very handy.  By using some clever communication tactics between your sites you can merge all the user’s visit data into a single profile view. The resulting timeline of events in the Experience Profile should prove very useful to your marketing teams.

To continue reading, jump over to Part 3 where we cover off External Tracking via FXM and Google Client ID.


Top 5 Ways to Extend Sitecore HTML Cache

In 2017 I wrote a reasonably long post on all the different considerations a Sitecore Caching Strategy might cover.

Following on from that post, it’s time to share some custom HTML cache extensions that we at Aceik may incorporate into our projects. This countdown of custom settings has been collected from across the Sitecore community.

5) Vary By Personalisation

This one (in my opinion) is a must-have if you’re incorporating personalisation on your homepage.

I admit it will only be effective up to a certain number of content variations and even then needs to be used with caution. Still, if you can save your server and databases from getting hit and help keep Time to First Byte (TTFB) low, it’s always worth it.

Please note that if you’re displaying a customised message with data that is only relevant to that user, the number of variations may not make it worthwhile.  On the other hand, if you’re showing variations based on a handful of profile card match rules, we found it to be fairly effective.

Code Sample: ApplyVaryByPeronalizedDatasource.cs

Credits: Ahmed Okour

4) Vary By Resolution

It’s a fairly common scenario that we want to display a different image based on the user’s screen size. So it stands to reason that we would need a way to differentiate this when it comes to caching our renderings.

This particular implementation was used in combination with Scott Mulligan’s Sitecore Adaptive Image Library.

The Adaptive Image library stores the user’s screen resolution via a cookie set in the front-end razor/javascript:

document.cookie = '@Sitecore.Configuration.Settings.GetSetting("resolutionCookieName") =' + Math.max(screen.width, screen.height) + '; path=/';
  • The first time around if no cookie is set it uses the default largest image size as the cache key.
  • If the cookie is set the cache incorporates the screen resolution.
args.CacheKey += "_#resolution:" + AdaptiveMediaProvider.GetScreenResolution();

Code 1:  ApplyVaryByResolution.cs

Code 2: AdaptiveMediaProvider.cs 

Credit:  Dadi Zhao

3) Vary By Timeout

This one’s a little different: it requires not only a new checkbox but also a new “Single-Line text” field that allows you to enter a timeout value.  The idea, as you might have guessed, is for the rendering cache to expire after a certain amount of time.

Code 1: ApplyVaryByTimeout.cs

Credit: Dylan Young

2) Vary By Url

An oldie but a goody. I’m a little surprised this one just hasn’t made it into the out of the box product. On the other hand, I can see how it could be overused if you don’t understand the context it applies to. Essentially you can take either the Context Item ID or the raw URL and make your rendering cache vary based on that key.

A good use case for this setting could be for navigation that requires the current page to always be highlighted.

Code 1: ApplyVaryByURLCaching.cs  (Context Item ID formula)

Code 2: ApplyVaryByRawUrlCaching.cs (Raw URL formula)

Credit:  The 10 other people that have blogged about this over the years.

1) Vary By Website

Given Sitecore is an Enterprise content management system we often see multi-site implementations launched on the platform. It makes sense then that you have an option to cache renderings that don’t change all that much on one site but have different content on another.

Example Usage: A global navigation used across all sites that requires some content for the context site to show differently.

Code: ApplyVaryByWebsite.cs

Credit: Younes van Ruth

That rounds out the countdown of some of the top ways to extend Sitecore’s out of the box rendering cache. Your renderings will likely use a combination of these settings in order to achieve adequate caching coverage.

For a better idea of how you might add the top 5 above into Sitecore, please see the technical footnote below.

Technical Footnote:

All these extensions will add an extra checkbox in the Rendering cache tab within Sitecore.

cachesettings

In order for this check box to show up you need to add your custom checkbox fields to the template:

/sitecore/templates/System/Layout/Sections/Caching

You can achieve this in several ways, and there are a lot of other blogs online that describe how to add in these custom checkboxes, so I won’t go into a deep dive here.

With regard to the Helix architecture, let’s outline one way you could set this up. Aceik has a module in the foundation layer that has all the custom cache checkboxes added to a single template (serialised in Unicorn within that module). The system template above is then made to inherit from your custom template in order to inherit the custom caching fields.

custom


Part 1 – Experience Profile – Identify Users Early

This is the first part of a four-part blog series where I will introduce some XDB customisation that could be of use on your next Sitecore project. All these customisations relate back to the Experience Profile and identifying the user.

Part 1:  We introduce the concept of early profile identification; there is no such thing as an anonymous user.

Part 2: We dive into the world of multi-site, multi-domain tracking. How to implement a global-common cookie for all your brand’s sites.

Part 3: External Tracking via FXM and Google Client ID – How to continue tracking a user on another website (not hosted in Sitecore).  (Release TBC)

Part 4: How to achieve instant personalisation on the very first page load. We can make use of the fact that inbound links from a social stream can already identify a user’s demographic.



Part 1:  Identify Users Early

Some of this blog has not been updated for Sitecore 9 yet, other parts have.

In part one I’m going to talk about identifying your visitors as early in the visit as possible.

By identifying users I am referring to allowing users to show up in the Experience Profile.

ExperienceProfileButton

By default, Sitecore will not track every single anonymous user that reaches your site.  In order to get them to show up in the Experience Profile, a few quick changes are necessary.

One of the simplest ways to do this is using either WFFM or the new Forms components in Sitecore 9. In fact, the setup hasn’t changed all that much between versions for this particular use case.

Sitecore 9:  Setup forms save actions

Sitecore 8:  Setup save action in WFFM

You can use these save actions with any forms on your website that collect personal details. The identification of a user should happen when a user submits a form that contains personal details. This is a well-documented OOTB forms feature that you can set up without any developer intervention.

Another way to identify a user is to do so programmatically by updating the contacts facet details and then calling the identify method on the tracker.

Sitecore 9:  (reference)

Sitecore.Analytics.Tracker.Current.Session.IdentifyAs("sitecoreextranet", "identifier");

Sitecore 8:

Sitecore.Analytics.Tracker.Current.Session.Identify("identifier")
A side by side code example is available here. 

Calling the above line of code with a string identifier associates the visit data with that identifier. When the user visits again on a different device or tracked website, if you are able to call the same line of code with the exact same identifier, the visitor’s data will be merged into a single Experience Profile record.

Taking the above concept a little bit further, we can also track users across non-Sitecore based websites. By using some Google smarts and injecting the FXM beacon onto a third-party website we can continue to track the user, including page visits, goals and outcomes.  (This is covered off more in Part 3.)



No User is Anonymous

Given that we can choose when a user should be identified and displayed in the Experience Profile, it’s time to introduce the concept that no user is anonymous. In fact, this is true for the majority of websites in existence, if they use Google Analytics.

Google assigns an identifier called the Client ID to each visitor that comes along to your website.  The Client ID is stored in the GA cookie and has an expiration date of 2 years after creation.
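For illustration, the Client ID can also be read straight out of the raw GA cookie value. The cookie is stored in a form like GA1.2.2129791670.1552388156, where the final two segments make up the Client ID; the helper name below is hypothetical.

```javascript
// Hypothetical sketch: extract the Client ID from a raw _ga cookie value.
// The cookie is formatted as "GA1.<depth>.<random>.<timestamp>"; the Client ID
// is the final two segments joined with a dot.
function clientIdFromGaCookie(gaCookieValue) {
  var segments = gaCookieValue.split('.');
  if (segments.length < 4) {
    return null; // Not a recognisable _ga cookie value.
  }
  return segments.slice(-2).join('.');
}
```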

Note: Google also has a concept of User ID that is used to track sessions across devices. The difference is that each website must send this value to Google in order for it to be used. In reality, this is going to be most relevant if you only want to identify users in Sitecore once they have authenticated.

We can use Google’s Client ID to allow the user to show up in the Experience Profile as early as is necessary.

To do this setup the following:

  1. Read the Client ID via JavaScript
    • if (typeof ga !== "undefined"){
          cid = ga.getAll()[0].get("clientId");
      }
  2. Send the Client ID to XDB / XConnect via async javascript.  (Github Reference)
    • if (typeof cid !== "undefined") {
          var setEventPath = '/api/xdb/Analytics/TriggerEvent/Event/?eventName=updateGoogleCid&data=' + cid;
          $.ajax({
              type: 'POST',
              url: setEventPath,
              dataType: 'json',
              success: function (json) {
                  setCookie(cookieName, cid, 1);
              },
              error: function () {
                  console.warn("An error occurred triggering the event");
              }
          });
      }
    • The above code assumes a custom Controller was set up to trigger Goals via Ajax/Javascript.  (Github Reference)
  3. Identify the user (See code examples mentioned earlier or look at our example controller)

Note 1:  The above code only needs to be triggered once per visitor. To save this running multiple times you can assign a cookie to the user. By checking if the cookie has been set you can prevent the above process from running more times than necessary.

Note 2:  In a single site environment you may choose to leave this identification until a certain amount of visit data has been collected.  For example, writing some logic to check that the user has achieved a certain number of goals.  This will prevent users with little or no data showing up in the Experience Profile.

Note 3: In a multi-site environment the opposite to note 2 becomes necessary. With visitors hitting multiple sites you need to identify them as early as possible. The main reason being that you want any visit data collected from the current website merged with visit data from any other site visits. The resulting merged data provides a great overview of a users movements. This will be discussed more in part 2 when we will look into multi-site XDB visitor identification using a Global common cookie.


Experience Profile – First Name, Last Name

This next step is optional. Given that you have identified the user, they will now show in the Experience Profile.  At this point, you may not have a first and last name for that visitor. As an alternative, you could split the Client ID into two numbers and use them as the initial values for the first and last name. If the user completes a newsletter signup, logs in or makes an enquiry at a later time, that would be a good opportunity to update these to the correct values.
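A minimal sketch of that split, assuming the Client ID has the usual two-number form (e.g. 2129791670.1552388156); the function name is illustrative:

```javascript
// Hypothetical helper: use the two halves of a Google Client ID as placeholder
// first/last names until the visitor supplies real ones.
function placeholderNamesFromCid(clientId) {
  var parts = clientId.split('.');
  return {
    firstName: parts[0] || clientId,
    lastName: parts[1] || ''
  };
}
```

These placeholder values would then be written to the contact's personal information facet before calling the identify method.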

firstlast.png

(Github Reference)



Part 1: Conclusion

We have demonstrated above how you can identify a user so that they show up in the Experience Profile. As part of this, we have looked at how you could use the Client ID from Google to identify the user as early on as you like, potentially on the very first page load.  In part 2 of this series on Experience Profile customisations we take a look at how to track users across multiple top-level domains.


Sitecore Page Speed: Part 3: Eliminate JS Loading Time

In part 1 & part 2 of our Sitecore page speed blog, we covered off:

  • The Google Page Speed Insights tool.
  • We looked at a node tool called critical that could generate above the fold (critical viewport) CSS code that is minified.
  • We referenced the way in which Google recommends deferring CSS loading.
  • We showed a way to integrate “Above The Fold” CSS into a Helix based project and achieve a page free of render blocking CSS.

In this 3rd part of the series, we will introduce a way to defer the load of all external javascript assets (async).

A reminder that I have committed the sample code for this blog into a fork of the helix habitat example project. You can find the sample here. For a direct comparison of changes made to achieve these page load enhancements, view a side by side comparison here.

Dynamic JS loading Installation Steps:

  1. Inside Sitecore add a new view rendering that references the file /Views/Common/Assets/Scripts-3.2.1.cshtml
    • Note down the ID of this rendering and use it in place of the rendering ID in the next step.
  2. Update the Default.cshtml layout to include a new cached rendering:
     @*Scripts Legacy Jquery jquery-3.2.1 *@
     @Html.Sitecore().CachedRendering("{B0DD36CE-EE4A-4D01-9986-7BEF114196DD}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_bottom_scripts" })
    • cacheKey = a variable containing something unique that identifies the page. You could use the Sitecore context Item ID or path, for example.

Explanation:

The rendering Scripts-3.2.1.cshtml will render out the following javascript onto the page:

var scriptsToLoad = ['//cdnjs.cloudflare.com/ajax/libs/modernizr/2.8.3/modernizr.min.js','//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js','/assets/js/slick.min.js','/assets/js/global.js','/assets/js/Script.js','/assets-legacy/js/lib/lazyload.min.js'];

<script src="/assets-legacy/js/lib/jquery-3.2.1.min.js" async defer></script>
  • First of all, it prints out a JS array of all the scripts that this page requires.
    • This is the array of JS files that comes from Themes and Page Assets inside the CMS. If you are familiar with Habitat Helix this list can be content managed inside the CMS.
  • It then instructs the jquery library to be loaded async (which will not block the network download of the page response).
  • Once jquery is loaded, this modified version of jquery contains some code at the end that will read in the list of scripts dynamically and apply them to the page.
    • This is achieved with fairly simple AJAX load calls to the script URLs.
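The dynamic load step above can be sketched as follows. This is a simplified stand-in for the modified jQuery bootstrapping described in this section; loadOne is injected here so the strategy is easy to test, whereas in the real page it would be a $.getScript-style AJAX call.

```javascript
// Hypothetical sketch: load an array of script URLs one after another, so each
// script can rely on the previous one having executed.
function loadScriptsInOrder(scriptUrls, loadOne) {
  return scriptUrls.reduce(function (chain, url) {
    return chain.then(function () { return loadOne(url); });
  }, Promise.resolve());
}

// In the real page, loadOne might inject a <script> tag or call $.getScript(url),
// and scriptUrls would be the scriptsToLoad array printed by the rendering.
```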

Outcome:

Once integrated successfully you will end up with a page that does not contain any blocking JS network calls.  The Google Page Speed tool should give you a nice score boost for your achievement in reducing initial load time.


Hints and Tips:

Bootstrapping JQuery Code:

  • jQuery Document.Ready() function calls may not fire inside dynamically loaded JS files. This is because the JS file is loaded after the DOM is ready, and it’s too late for the Document.Ready() event at this stage.
  • As a workaround, you could code your JS files to bootstrap on either Document.Ready() or whenever $ is not undefined.
  • In the case of dynamic loading in this manner, because jQuery was loaded first, $ should not be undefined and your code should be bootstrapped successfully.
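The bootstrap workaround could look something like this sketch, where init is your module's entry point and the document and jQuery references are passed in (both the function name and the parameterisation are illustrative):

```javascript
// Hypothetical bootstrap pattern for a dynamically loaded script: run init
// immediately if the DOM (and jQuery) are already available, otherwise fall
// back to the usual ready event.
function bootstrap(init, doc, jq) {
  if (typeof jq !== 'undefined' && doc.readyState !== 'loading') {
    // Script was injected after DOM ready: run straight away.
    init();
  } else if (typeof jq !== 'undefined') {
    // Normal case: defer until Document.Ready().
    jq(init);
  } else {
    doc.addEventListener('DOMContentLoaded', init);
  }
}
```

In page code this might be invoked as bootstrap(myModule.init, document, window.$).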

Debugging in chrome:

  • When dynamically loading JS files, they may strangely not appear in the Chrome console debugger as you would normally expect.
  • The workaround for this is to add a comment to the top of each JS library:
  • //# sourceURL=global.js
  • This will cause the Chrome debugger to list the file in the Sources tab under the “(no domain)” heading.
  • You will then be able to debug the file as per normal.

Sitecore Page Speed: Part 2: Inlining CSS into Helix

In part 1 of this Sitecore page speed blog, we covered off:

  • The Google Page Speed Insights tool.
  • We looked at a node tool called critical that could generate above the fold (critical viewport) CSS code that is minified.
  • We referenced the way in which Google recommends deferring CSS loading.

In this second part of the Sitecore Page Speed series, I am going to cover off how I would go about achieving this in my Sitecore layout.

Before we dive in, I have committed the sample code for this blog into a fork of the Helix Habitat example project. You can find the sample here. For a direct comparison of the changes made to achieve these page load enhancements, view a side-by-side comparison here.

Installation Steps:


  1. For each page that you want to render above-the-fold CSS, we take the minified code (generated in the first blog) and put it on the page within the CMS.
  2. Inside the CMS we also create some new renderings in the common project.
    • These renderings are used in the default.cshtml layout.
    • They point to the CSS rendering code.
    • This wrapping technique provides the ability to cache the rendering so that the code in RenderAssetsService.cs does not need to be executed on every single page load.
    • Take note of the ID of each rendering; you will need to copy them over to the CachedRendering IDs shown in step 3 below.
  3. Update the default.cshtml layout with two key renderings.
    • One in the <head> tag that points to /Views/Common/Assets/InlineStyles.cshtml
    •  @*Inline Styles Rendering*@
       @Html.Sitecore().CachedRendering("{B14DA82E-F844-4945-8F31-4577A52861E1}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_critical_styles" })
    • One just before the </body> closing tag that points to /Views/Common/Assets/StylesDeferred.cshtml
    •  @*Styles Rendering Deferred Styles *@
       @Html.Sitecore().CachedRendering("{F04C562A-CBF9-40CF-8CA9-8CE83FDF0BFA}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_bottom_styles" })

StylesDeferred.cshtml contains logic that checks for inline CSS on every page. If the page contains inline CSS, then the network download of the main CSS files will be deferred until later. On the other hand, if the page does not contain any inline CSS, the main CSS files will be loaded as blocking assets. Doing so ensures that the page displays normally in both situations.

  • The cacheKey variable passed to our CachedRendering is simply something to identify the page as unique.  You could use the Sitecore context item ID or path for example.

If done correctly, you should end up with pages that look normal even with the main CSS files deleted (only do this as a test). The CSS will no longer load via a blocking network request and your Google Page Speed rank should receive a boost.
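The deferred-loading behaviour that the StylesDeferred rendering emits can be approximated with a small script like the one below. This is an illustrative sketch only, not the actual rendering output; `deferStylesheets` is a hypothetical name, and the document object is injectable so the logic can be tested outside a browser.

```javascript
// Append <link rel="stylesheet"> elements for the main CSS files after the
// critical inline CSS has already painted the above-the-fold content, so
// these requests never block the initial render.
function deferStylesheets(hrefs, doc) {
  return hrefs.map(function (href) {
    var link = doc.createElement('link');
    link.rel = 'stylesheet';
    link.href = href;
    doc.head.appendChild(link);
    return link;
  });
}
```

In a browser this would typically be invoked from a load handler, e.g. `window.addEventListener('load', function () { deferStylesheets(['/assets/css/main.css'], document); });` where the file path is an example only.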


Sitecore Page Speed: Part 1 : Above The Fold Content

In this series of blogs, I am going to run through ways in which you can increase your score on the Google Page Speed Insights tool.

If you’re not familiar with PageSpeed Insights, head over to the page hosted by Google and enter the URL of a Sitecore project you have recently worked on. It will give you a rank out of 100 and then advise on what is wrong with the way your page’s HTML, JavaScript and CSS assets are loaded. It will also provide feedback on the way your server is set up and how assets are cached.

If you’re getting a score below 50 on Desktop or Mobile, I would suggest it’s time to look at ways you can improve your layouts and renderings. A score above 80 and you’re doing pretty well.

If you got a score of 100 … you should probably be writing this blog instead of me.  🙂

Topic 1: Above the fold content / Critical CSS

One of Google’s recommendations is to use a technique called:

  • Above the Fold
  • Critical CSS

As you can imagine this refers to the CSS required to render only the visible part of the page.

In order to do this, you render the critical CSS inline within the head tag. The rest of your CSS is then loaded using a deferred script block approach.  This is all demonstrated in a simple example on this page.

The way this works is the inline minified CSS is delivered over the network as part of the page payload. It is not an external asset and will not block the page load via another network request. The page can, therefore, display the visible section (above the fold) immediately after the page is downloaded.

The bulk of CSS required by the page can be loaded a few moments later via deferred network requests.

What is the best way to construct the minimised block of Inline CSS you ask?

Lucky for us a very handy node tool called “critical” is available for download.

You can really easily spin up a gulp script that will generate all the critical CSS for the main pages across your Sitecore website.

A full example of a gulp critical script can be found here.  Download it and simply run:

  • npm install
  • gulp

This will generate a series of CSS files that contain critical viewport CSS.


In the next blog post, we will cover off how to integrate this inline CSS (above the fold) into our Sitecore layouts.




Sitecore Azure Search: Top 10 Tips

It’s been a while since I first wrote about Azure Search, and we have a few more tips and tricks on how to optimise Azure Search implementations.

Before proceeding if you missed our previous posts check out some tools we created for Azure Search Helix setup and Geo-Spatial Searching.

Also, check out the slides from our presentation at last year’s Melbourne Sitecore User Group.

OK, let’s jump into the top 10 tips:

Tip 1) Create custom indexes for targeted searching

The default out-of-the-box indexes will attempt to cover just about everything in your Sitecore databases. They do so to support Sitecore CMS UI searches out of the box. It’s not a problem if you want to use the default indexes (web, master) to search with; however, for optimal searches and faster re-indexing times, a custom index will help performance.

By stepping back and looking at the different search requirements across the site you can map out your custom indexes and the data that each will require.

Consider also that if the custom indexes need to be used across multiple Feature Helix modules the configuration files and search repositories may need to live in an appropriate Foundation module. More about feature vs foundation can be found here.

Tip 2) Keep your indexes lean

This tip follows on from the first Tip.

Essentially the default Azure Search configuration out of the box will have:

<indexAllFields>true</indexAllFields>

This can include a lot of fields, and you’re probably not going to need every single Sitecore field in order to present the user with meaningful data on the front-end interfaces.

The other option is to specify only the fields that you need in your indexes:

<include hint="list:IncludeField"> 
<Text>{A60ACD61-A6DB-4182-8329-C957982CEC74}</Text> 
</include>

The end result will limit the amount of JSON payload that needs to be sent across the network and also the amount of payload that the Sitecore Azure Search Provider needs to process.

Particularly if you are returning thousands of search results you can see what happens when “IndexAllFields” is on via Fiddler.

This screenshot is via a local development machine and Azure Search instance at the Microsoft hosting centre.


  • So for a single query “IndexAllFields” can result in:
    • 2 MB plus JSON payload size.
    • Document results with all Sitecore metadata included. That could be around 100 fields.

If your query results in document counts in the thousands, obviously the payload will grow rapidly. By reducing the fields in your indexes (removing unnecessary data) you can speed up query, transfer and processing times and get the data displayed quicker.

Tip 3) Make use of direct azure connections

Sitecore has done a lot of the heavy lifting for you in the Sitecore Azure Search Provider. It’s a bit like a wrapper that does all the hard work for you. In some cases however you may find that writing your own queries that connect via the Azure Search DLL gives you better performance.

Tip 4) Monitor performance via Azure Search Portal

It’s really important to monitor your Azure Search Instance via Azure Portal. This will give you critical clues as to whether your scaling settings are appropriate.

In particular look out for high latency times as this will indicate that your search queries are getting throttled. As a result, you may need to scale up your Azure Search Instance.

In order to monitor your latency times go to:

  1. Login to Azure Portal
  2. Navigate to your Azure Search Instance.
  3. Click on metrics in the left-hand navigation
  4. Select the “Search Latency” checkbox and scan over the last week.
  5. You will see some peaks; these usually indicate heavy periods of re-indexing. During re-indexing, the Azure Search instance is under heavy load. As long as your peaks stay under the 0.5-second mark, you’re OK. If you see search latency up into the 2-second timeframe, you probably need to either adjust how your indexes are used (caching and re-indexing) or scale up to avoid the flow-on effects of slow search.

Tip 5) Cache Wrappers

In the code that uses Azure Search, it is advisable to use cache wrappers around the searches where possible. For your most common searches, this should prevent Azure Search from getting hit repeatedly with the same query.

For a full example of a cache wrapper, check out the section titled Sitecore.Caching.CustomCache in my previous blog post.

Tip 6) Disable Indexing on CD

This is a hot tip that we got from Sitecore Support when we started to encounter high search latency during re-indexing.

Most likely in your production setup, you will have a single Azure Search instance shared between CM and CD environments.

You need to factor in that CM should be the server that controls the re-indexing (writing) and CD will most likely be the server doing the queries (reading).

Re-indexing is triggered via the event queue, and every server subscribes and reacts to these events. With the out-of-the-box search configuration, each server will cause the Azure Search indexes to be updated. In a shared Azure Search (or SOLR) instance, the index only needs to be updated by a single server; each additional re-index is overkill and just doubles up on re-indexing workload.

You can, therefore, adjust the configuration on the CD servers so that it does not cause re-indexing to happen.

The trick is in your index configuration files to use Configuration Roles to specify the indexing strategy on each server.

 <strategies hint="list:AddStrategy">
 <!--
 NOTE: the order of these controls the execution order
 -->
 <strategy role:require="Standalone OR ContentManagement" ref="contentSearch/indexConfigurations/indexUpdateStrategies/onPublishEndAsync"/>
 <strategy role:require="ContentDelivery" ref="contentSearch/indexConfigurations/indexUpdateStrategies/manual"/>
 </strategies>

Setting the index update strategy to manual on your CD servers will take a big load off your remote indexes.

Particularly if you have multiple CD servers using the same indexes. Each additional CD server would cause additional updates to the index without the above setting.

Tip 7) Rigid Indexes – Have a deployment plan

If your deployment includes additions and changes to the indexes and you need 100% availability of search data, a deployment plan for re-indexing will be required.

Grant chatted about the problem in his post here. To get around this you could consider using the blue / green paradigm during deployments.

  • This would mean having a set of blue indexes and a set of green indexes.
  • Using slot swaps for your deployments.
    • One slot points to green in configuration.
    • One slot (production) points to blue in configuration.
  • To save on costs you could decommission the staging slot between deployments.

Tip 8) HttpClient should be a singleton or static

The basic idea here is that you should keep the number of HttpClient instances in your code to an absolute minimum if you want optimal performance.

The Sitecore Azure Search provider actually spins up two HttpClient connections for every single index. This in itself is not ideal, and unfortunately there is not a lot you can do about this code in the core product itself.

In your own connections to other APIs, however, HttpClient SendAsync is perfectly thread safe.

By using HttpClient singletons you stand to gain big in the performance stakes. One great blog article worth reading runs you through the performance benefits. 

It’s also worth noting that in the Azure Search documentation Microsoft themselves say you should treat HttpClient as a singleton.

Tip 9) Monitor your resources

In Azure web apps you have finite resources with your app server plans. Opening multiple connections with HttpClient and not disposing of them properly can have severe consequences.

For instance, we found a bug in the core Sitecore product that was caused by the connection retryer. It held ports open forever whenever we hit our Azure Search plan usage limits. The result was that we hit the outbound open-connection limits for sockets, and this caused our Sitecore instance to grind to a halt.

Sitecore has since resolved the issue mentioned above after a lengthy investigation working alongside the Aceik team. This was tracked under reference number 203909.

To monitor the number of sockets in Azure we found a nice page on the MSDN site.

Tip 10) Make use of OData Expressions

This tip relates strongly to tip 3.  Azure search has some really powerful OData Expressions that you can make use of by a direct connection.  Once you have had a play with direct connections it is surprisingly easy to spin up really fast queries.

Operators include:

  • OrderBy, Filter (by field), Search
  • Logical operators (and, or, not).
  • Comparison expressions (eq, ne, gt, lt, ge, le).
  • any with no parameters. This tests whether a field of type Collection(Edm.String) contains any elements.
  • any and all with limited lambda expression support.
  • Geospatial functions geo.distance and geo.intersects. The geo.distance function returns the distance in kilometres between two points.

See the complete list here.
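As an illustration of how compact these expressions are, a direct REST query might combine the comparison operators above into a single `$filter` string. The sketch below builds such a string; it is a hypothetical helper (the field names are invented examples), not part of the Sitecore provider or the Azure SDK.

```javascript
// Quote a value as an OData literal: strings are single-quoted with any
// embedded single quotes doubled; numbers pass through unchanged.
function odataLiteral(value) {
  return typeof value === 'string'
    ? "'" + value.replace(/'/g, "''") + "'"
    : String(value);
}

// Combine clauses like {field: 'price', op: 'lt', value: 100} into an
// OData filter expression joined with the logical 'and' operator.
function buildFilter(clauses) {
  return clauses
    .map(function (c) { return c.field + ' ' + c.op + ' ' + odataLiteral(c.value); })
    .join(' and ');
}
```

For example, `buildFilter([{field: 'price', op: 'lt', value: 100}, {field: 'category', op: 'eq', value: 'scooters'}])` produces `price lt 100 and category eq 'scooters'`, ready to pass as the `$filter` parameter of a direct query.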


 

Q&A

Q) Anything on multiple region setups? Or latency considerations?

A) Multi-region setups: although I can’t comment from experience, the configuration documentation does state that you can specify multiple Azure Search instances using a pipe separator in the connection string.

<add name="cloud.search" connectionString="serviceUrl=https://searchservice1.search.windows.net;apiVersion=2015-02-28;apiKey=AdminKey1|serviceUrl=https://searchservice2.search.windows.net;apiVersion=2015-02-28;apiKey=AdminKey2" /> 

Unfortunately, the documentation does not go into much detail. It simply states that “Sitecore supports a Search service with geo-replicated scenarios” which one would hope means under the hood it has all the smarts to take care of this.

I’m curious about this as well and opened a stack overflow ticket. Let’s see if anyone else in the community can answer this for us.

Search Latency: 

Search latency can be directly improved by adding more replicas via the scaling settings in the Azure Portal.


Two replicas should be your starting point for an Azure Search instance that supports Sitecore. Once you launch your site, you will need to follow the instructions in tip 4 above and monitor search latency. If the latency graph is showing consistent spikes and high latency times above 0.5 seconds, it’s probably time to add some more replicas.


Dynamic Placeholders and Sitecore Powershell

This post gives a detailed example of a Sitecore PowerShell script that automates the process of inserting content components (renderings) with particular content into a number of items.

Scenario:

Recently a client of ours wanted to roll out a promotion content row to all items of a particular template.

  • These items also happened to be bucket items so there were quite a few of them.
  • The components were usually added to the page via the Experience Editor.
  • The components were used in a building blocks manner:
    • Add a full-width Row
    • Add a Two column component inside the row
    • Add an image in the left-hand column
    • Add a rich text component in the right-hand column
  • The components were added via Dynamic Placeholders.

You could achieve this in certain cases by adding to the Standard Values however if your items already vary drastically from the Standard Values you may not get good enough coverage.

So here is the script with lots of comments so that you can follow along.

View the full script in our GitHub Repo here.

Key Takeaways

  1. We found you need to use Add-Rendering and then Get-Rendering in order to get the correct UniqueID of the rendering just added.
    • In order to construct the Dynamic Placeholder name correctly, you will need the correct UniqueID of the parent just added to the page.
    • The easiest way to facilitate that lookup in Get-Rendering was to give the rendering a unique parameter to identify it:
       -Parameter @{"InNewPromotionRow"=1}
    • Without this, you may get a list of all renderings that match the non-specific lookup you just used in Get-Rendering.
  2. Because of the way Dynamic Placeholders are added to the page this worked best in the Final Layout.
     -FinalLayout
  3. If things go really wrong in your script its best practice to spin up a new Item Version so that you can roll these changes back via another script.
     Add-ItemVersion -Path $i.Paths.Path -IfExist Append -Language "en"
  4. If you have two dynamic placeholders with the same base name, then an additional postfix is used to target the second one. For instance, in our case we had a left and a right-hand column with dynamic placeholders that use the same base placeholder name. In order to target the right-hand side you use “_1” on the end:
     $colWidePlaceholderRight = "/page-body-meeting/row_$rowId/col-wide_$($twoColumnRenderingId)_1"
  5. Nested Dynamic Placeholders: the line of code in point 4 above also shows how to construct the placeholder key of a nested dynamic placeholder. In this case, we have a Column inside a Row. The PowerShell script you use to build your components needs to keep track of the placeholder hierarchy so that you can continue to construct nested dynamic placeholders.