
Sitecore Page Speed: Part 3: Eliminate JS Loading Time

In part 1 & part 2 of our Sitecore page speed blog series, we covered:

  • The Google PageSpeed Insights tool.
  • A Node tool called critical that can generate minified above-the-fold (critical viewport) CSS.
  • The way in which Google recommends deferring CSS loading.
  • A way to integrate “Above the Fold” CSS into a Helix-based project and achieve a page free of render-blocking CSS.

In this third part of the series, we will introduce a way to defer the loading of all external JavaScript assets (async).

A reminder that I have committed the sample code for this blog into a fork of the Helix Habitat example project. You can find the sample here. For a direct comparison of the changes made to achieve these page load enhancements, view a side-by-side comparison here.

Dynamic JS loading Installation Steps:

  1. Inside Sitecore, add a new View Rendering that references the file /Views/Common/Assets/Scripts-3.2.1.cshtml.
    • Note down the ID of this rendering and use it in place of the rendering ID in the next step.
  2. Update the Default.cshtml layout to include the new cached rendering:
     @*Scripts Legacy Jquery jquery-3.2.1 *@
     @Html.Sitecore().CachedRendering("{B0DD36CE-EE4A-4D01-9986-7BEF114196DD}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_bottom_scripts" })
    • cacheKey = a variable containing something unique that identifies the page. You could use the Sitecore context item ID or path, for example.

Explanation:

The rendering Scripts-3.2.1.cshtml will render out the following JavaScript onto the page:

<script>
    var scriptsToLoad = ['//cdnjs.cloudflare.com/ajax/libs/modernizr/2.8.3/modernizr.min.js','//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js','/assets/js/slick.min.js','/assets/js/global.js','/assets/js/Script.js','/assets-legacy/js/lib/lazyload.min.js'];
</script>
<script src="/assets-legacy/js/lib/jquery-3.2.1.min.js" async defer></script>
  • First of all, it prints out a JS array of all the scripts that this page requires.
    • This is the array of JS files that comes from Themes and Page Assets inside the CMS. If you are familiar with Habitat Helix, this list can be content managed inside the CMS.
  • It then loads the jQuery library async and deferred, which does not block the network download of the page response.
  • Once jQuery is loaded, this modified version of jQuery contains some code at the end that reads in the list of scripts dynamically and applies them to the page.
    • This is achieved with fairly simple AJAX load calls to the script URLs (see the sketch below).
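To make that last step concrete, here is a minimal sketch of the kind of loader appended to the end of the customised jQuery file. It is illustrative only (the actual code lives in the fork linked above) and assumes the scriptsToLoad array printed by the rendering:

(function () {
    // Bail out if the page didn't declare any scripts to load.
    if (!window.scriptsToLoad) {
        return;
    }
    window.scriptsToLoad.forEach(function (url) {
        // jQuery.getScript fetches the script via an AJAX call and executes it.
        jQuery.getScript(url).fail(function (jqxhr, settings, exception) {
            console.error('Failed to load ' + url, exception);
        });
    });
})();

Note that getScript calls run in parallel, so if one dynamically loaded file depends on another you would need to chain the calls instead.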

Outcome:

Once integrated successfully you will end up with a page that does not contain any blocking JS network calls.  The Google Page Speed tool should give you a nice score boost for your achievement in reducing initial load time.


Hints and Tips:

Bootstrapping JQuery Code:

  • jQuery document.ready() function calls may not fire inside dynamically loaded JS files. This is because the JS file is loaded after the DOM is ready; at that stage it is too late for the document.ready() event.
  • As a workaround, you can code your JS files to bootstrap on either document.ready() or whenever $ is already defined (see the sketch below).
  • In the case of dynamic loading in this manner, because jQuery was loaded first, $ should not be undefined and your code should bootstrap successfully.
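In practice that bootstrap pattern looks something like the following sketch (initComponent is an illustrative placeholder for your component’s setup function):

function initComponent() {
    // ... your component initialisation ...
}

if (window.jQuery) {
    // jQuery is already on the page (the dynamic-loading case).
    // jQuery() runs the callback immediately if the DOM is already ready.
    jQuery(initComponent);
} else {
    // Standard case: wait for DOM ready.
    document.addEventListener('DOMContentLoaded', initComponent);
}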

Debugging in Chrome:

  • When JS files are loaded dynamically they may not appear in the Chrome debugger as you would normally expect.
  • The workaround for this is to add a comment to the top of each JS file:
  • //# sourceURL=global.js
  • This will cause the Chrome debugger to list the file in the Sources tab under the “(no domain)” heading.
  • You will then be able to debug the file as normal.

Sitecore Page Speed: Part 2: Inlining CSS into Helix

In part 1 of this Sitecore page speed blog series, we covered:

  • The Google PageSpeed Insights tool.
  • A Node tool called critical that can generate minified above-the-fold (critical viewport) CSS.
  • The way in which Google recommends deferring CSS loading.

In this second part of the Sitecore Page Speed series, I am going to cover how I go about achieving this in my Sitecore layout.

Before we dive in, note that I have committed the sample code for this blog into a fork of the Helix Habitat example project. You can find the sample here. For a direct comparison of the changes made to achieve these page load enhancements, view a side-by-side comparison here.

Installation Steps:


  1. For each page on which you want to render above-the-fold CSS, take the minimised code (generated in the first blog) and put it on the page within the CMS.
  2. Inside the CMS we also create some new renderings in the common project.
    • These renderings are used in the default.cshtml layout.
    • They point to the CSS rendering code.
    • This wrapping technique provides the ability to cache the rendering so that the code in RenderAssetsService.cs does not need to be executed on every single page load.
    • Take note of the IDs of each rendering; you will need to copy them over to the CachedRendering IDs shown in step 3 below.
  3. Update the default.cshtml layout with two key renderings.
    • One in the <head> tag that points to /Views/Common/Assets/InlineStyles.cshtml
    •  @*Inline Styles Rendering*@
       @Html.Sitecore().CachedRendering("{B14DA82E-F844-4945-8F31-4577A52861E1}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_critical_styles" })
    • One just before the </body> closing tag that points to /Views/Common/Assets/StylesDeferred.cshtml
    •  @*Styles Rendering Deferred Styles *@
       @Html.Sitecore().CachedRendering("{F04C562A-CBF9-40CF-8CA9-8CE83FDF0BFA}", new RenderingCachingSettings { Cacheable = true, CacheKey = cacheKey + "_bottom_styles" })

StylesDeferred.cshtml contains logic that will check for inline CSS on every page. If the page contains inline CSS, the main CSS files will have their network download deferred until later. On the other hand, if the page does not contain any inline CSS, the main CSS files will be loaded as blocking assets. Doing so ensures that the page displays normally in both situations (a sketch follows below).

  • The cacheKey variable passed to our CachedRendering is simply something to identify the page as unique.  You could use the Sitecore context item ID or path for example.
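As an illustration, the check performed by StylesDeferred.cshtml looks something like the sketch below. The InlineCss property and the CSS path are placeholders, not the actual Habitat code:

@* Hedged sketch of the StylesDeferred logic *@
@if (string.IsNullOrEmpty(Model.InlineCss))
{
    @* No critical CSS on this page: load the main CSS as a normal blocking asset. *@
    <link rel="stylesheet" href="/assets/css/main.min.css" />
}
else
{
    @* Critical CSS is already inlined in the head: defer the main CSS download. *@
    <script>
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = '/assets/css/main.min.css';
        document.head.appendChild(link);
    </script>
}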

If done correctly you should end up with pages that look normal even with the main CSS files deleted (only do this as a test). The CSS will no longer load via a network request that blocks the page, and your Google Page Speed rank should receive a boost.


Sitecore Page Speed: Part 1 : Above The Fold Content

In this series of blogs, I am going to run through ways in which you can increase your score on the Google Page Speed Insights tool.

If you’re not familiar with PageSpeed Insights, head over to the page hosted by Google and enter the URL of a Sitecore project you have recently worked on. It will give you a rank out of 100 and then advise on what is wrong with the way your page HTML, JavaScript and CSS assets are loaded. It will also provide feedback on the way your server is set up and how assets are cached.

If you’re getting a score below 50 on Desktop or Mobile I would suggest it’s time to look at ways you can improve your layouts and renderings.  A score above 80 and you’re really doing pretty well.

If you got a score of 100 … you should probably be writing this blog instead of me.  🙂

Topic 1: Above the fold content / Critical CSS

One of Google’s recommendations is to use a technique called:

  • Above the Fold
  • Critical CSS

As you can imagine this refers to the CSS required to render only the visible part of the page.

In order to do this, you render the critical CSS inline within the head tag. The rest of your CSS is then loaded using a deferred script block approach.  This is all demonstrated in a simple example on this page.

The way this works is the inline minified CSS is delivered over the network as part of the page payload. It is not an external asset and will not block the page load via another network request. The page can, therefore, display the visible section (above the fold) immediately after the page is downloaded.

The bulk of CSS required by the page can be loaded a few moments later via deferred network requests.
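As a rough sketch (file names are illustrative), the pattern looks like this:

<head>
    <style>
        /* Inline critical (above-the-fold) CSS generated by the critical tool */
        .hero { background: #fff; }
    </style>
</head>
<body>
    <!-- page content -->
    <script>
        // Deferred load of the full stylesheet, after the page has rendered.
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = '/assets/css/main.min.css';
        document.head.appendChild(link);
    </script>
    <noscript><link rel="stylesheet" href="/assets/css/main.min.css"></noscript>
</body>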

What is the best way to construct the minimised block of Inline CSS you ask?

Lucky for us a very handy node tool called “critical” is available for download.

You can really easily spin up a gulp script that will generate all the critical CSS for the main pages across your Sitecore website.

A full example of a gulp critical script can be found here.  Download it and simply run:

  • npm install
  • gulp

This will generate a series of CSS files that contain critical viewport CSS.
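If you just want the gist without downloading the full example, the core of such a gulp task looks roughly like this (paths and viewport sizes are illustrative, and the options follow the critical package’s documented API):

var gulp = require('gulp');
var critical = require('critical');

// Generates minified critical-viewport CSS for a page; run one of these
// per key page template across the site.
gulp.task('critical', function () {
    return critical.generate({
        base: 'dist/',                // folder containing the rendered HTML
        src: 'index.html',            // page to extract critical CSS from
        dest: 'css/critical.min.css', // output file
        minify: true,
        width: 1300,                  // viewport treated as "above the fold"
        height: 900
    });
});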


In the next blog post, we will cover off how to integrate this inline CSS (above the fold) into our Sitecore layouts.




Sitecore Azure Search: Top 10 Tips

It’s been a while since I first wrote about Azure Search, and we have a few more tips and tricks on how to optimise Azure Search implementations.

Before proceeding if you missed our previous posts check out some tools we created for Azure Search Helix setup and Geo-Spatial Searching.

Also, check out the slides from our presentation at last year’s Melbourne Sitecore User Group.

OK, let’s jump into the top 10 tips:

Tip 1) Create custom indexes for targeted searching

The default out-of-the-box indexes will attempt to cover just about everything in your Sitecore databases. They do so to support Sitecore CMS UI searches out of the box. It’s not a problem if you want to use the default indexes (web, master) to search with; however, for optimal searches and faster re-indexing times, a custom index will help performance.

By stepping back and looking at the different search requirements across the site you can map out your custom indexes and the data that each will require.

Consider also that if the custom indexes need to be used across multiple Feature Helix modules the configuration files and search repositories may need to live in an appropriate Foundation module. More about feature vs foundation can be found here.

Tip 2) Keep your indexes lean

This tip follows on from the first Tip.

Essentially the default Azure Search configuration out of the box will have:

<indexAllFields>true</indexAllFields>

This can include a lot of fields, and you’re probably not going to need every single Sitecore field in order to present the user with meaningful data on the front-end interfaces.

The other option is to specify only the fields that you need in your indexes:

<include hint="list:IncludeField">
  <Text>{A60ACD61-A6DB-4182-8329-C957982CEC74}</Text>
</include>

The end result will limit the amount of JSON payload that needs to be sent across the network and also the amount of payload that the Sitecore Azure Search Provider needs to process.

Particularly if you are returning thousands of search results you can see what happens when “IndexAllFields” is on via Fiddler.

This screenshot is via a local development machine and Azure Search instance at the Microsoft hosting centre.


  • So for a single query, “IndexAllFields” can result in:
    • A 2 MB+ JSON payload size.
    • Document results with all Sitecore metadata included. That could be around 100 fields.

If your query results in document counts in the thousands, obviously the payload will grow rapidly. By reducing the fields in your indexes (removing unnecessary data) you can speed up query, transfer and processing times and get the data displayed quicker.

Tip 3) Make use of direct Azure connections

Sitecore has done a lot of the heavy lifting for you in the Sitecore Azure Search Provider. It’s a bit like a wrapper that does all the hard work for you. In some cases, however, you may find that writing your own queries that connect via the Azure Search DLL gives you better performance.
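As a hedged sketch (the service, index, key and field names are placeholders), a direct query using the Microsoft.Azure.Search SDK looks something like this:

using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class DirectSearch
{
    // Reuse one client per index rather than one per call (see Tip 8).
    private static readonly ISearchIndexClient IndexClient =
        new SearchIndexClient("my-search-service", "my-custom-index",
            new SearchCredentials("my-query-api-key"));

    public static void Run(string term)
    {
        var parameters = new SearchParameters
        {
            Top = 20,
            // Only pull back the fields you need (see Tip 2).
            Select = new[] { "title_t", "url_s" }
        };

        var results = IndexClient.Documents.Search(term, parameters);
        foreach (var result in results.Results)
        {
            Console.WriteLine(result.Document["title_t"]);
        }
    }
}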

Tip 4) Monitor performance via Azure Search Portal

It’s really important to monitor your Azure Search Instance via Azure Portal. This will give you critical clues as to whether your scaling settings are appropriate.

In particular look out for high latency times as this will indicate that your search queries are getting throttled. As a result, you may need to scale up your Azure Search Instance.

In order to monitor your latency times go to:

  1. Log in to the Azure Portal.
  2. Navigate to your Azure Search instance.
  3. Click on Metrics in the left-hand navigation.
  4. Select the “Search Latency” checkbox and scan over the last week.
  5. You will see some peaks; these usually indicate heavy periods of re-indexing, during which the Azure Search instance is under heavy load. As long as your peaks stay under the 0.5-second mark, you’re OK. If you see search latency up in the 2-second range, you probably need to either adjust how your indexes are used (caching and re-indexing) or scale up to avoid the flow-on effects of slow search.

Tip 5) Cache Wrappers

In the code that uses Azure Search, it is advisable to use cache wrappers around the searches when possible. For your most common searches, this should prevent Azure Search from getting hit repeatedly with the same query.

For a full example of cache wrapper checkout the section titled Sitecore.Caching.CustomCache in my previous blog post.

Tip 6) Disable Indexing on CD

This is a hot tip that we got from Sitecore Support when we started to encounter high search latency during re-indexing.

Most likely in your production setup, you will have a single Azure Search instance shared between CM and CD environments.

You need to factor in that CM should be the server that controls the re-indexing (writing) and CD will most likely be the server doing the queries (reading).

Re-indexing is triggered via the event queue, and every server subscribes and reacts to these events. With the out-of-the-box search configuration, each server will cause the Azure Search indexes to be updated. In a shared Azure Search (or Solr) instance the indexes only need to be updated by a single server; each additional re-index is overkill and simply doubles up the re-indexing workload.

You can therefore adjust the configuration on the CD servers so that they do not trigger re-indexing.

The trick is in your index configuration files to use Configuration Roles to specify the indexing strategy on each server.

 <strategies hint="list:AddStrategy">
   <!-- NOTE: the order of these entries controls the execution order -->
   <strategy role:require="Standalone OR ContentManagement" ref="contentSearch/indexConfigurations/indexUpdateStrategies/onPublishEndAsync"/>
   <strategy role:require="ContentDelivery" ref="contentSearch/indexConfigurations/indexUpdateStrategies/manual"/>
 </strategies>

Setting the index update strategy to manual on your CD servers will take a big load off your remote indexes.

This is particularly true if you have multiple CD servers using the same indexes; without the above setting, each additional CD server would cause additional updates to the index.

Tip 7) Rigid Indexes – Have a deployment plan

If your deployment includes additions and changes to the indexes and you need 100% availability of search data, a deployment plan for re-indexing will be required.

Grant chatted about the problem in his post here. To get around this you could consider using the blue / green paradigm during deployments.

  • This would mean having a set of blue indexes and a set of green indexes.
  • Using slot swaps for your deployments.
    • One slot points to green in configuration.
    • One slot (production) points to blue in configuration.
  • To save on costs you could decommission the staging slot between deployments.

Tip 8) HttpClient should be a singleton or static

The basic idea here is that you should keep the number of HttpClient instances in your code to an absolute minimum if you want optimal performance.

The Sitecore Azure Search provider actually spins up two HttpClient connections for every single index. This in itself is not ideal and, unfortunately, there is not a lot you can do about this code in the core product itself.

In your own connections to other APIs, however, remember that HttpClient.SendAsync is perfectly thread-safe.

By using HttpClient singletons you stand to gain big in the performance stakes. One great blog article worth reading runs you through the performance benefits. 

It’s also worth noting that in the Azure Search documentation Microsoft themselves say you should treat HttpClient as a singleton.
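A minimal sketch of the pattern (the endpoint is illustrative):

using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared HttpClient for the life of the application. Creating and
    // disposing a client per request can exhaust outbound sockets under load.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> GetJsonAsync(string url)
    {
        // HttpClient's request methods are thread-safe, so concurrent calls are fine.
        var response = await Client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}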

Tip 9) Monitor your resources

In Azure web apps you have finite resources with your app server plans. Opening multiple connections with HttpClient and not disposing of them properly can have severe consequences.

For instance, we found a bug in the core Sitecore product that was caused by the connection retryer. It held ports open forever whenever we hit our Azure Search plan usage limits. The result was that we hit the outbound open-connection limits for sockets, and this caused our Sitecore instance to grind to a slow halt.

Sitecore has since resolved the issue mentioned above after a lengthy investigation working alongside the Aceik team. This was tracked under reference number 203909.

To monitor the number of sockets in Azure we found a nice page on the MSDN site.

Tip 10) Make use of OData Expressions

This tip relates strongly to Tip 3. Azure Search has some really powerful OData expressions that you can make use of via a direct connection. Once you have had a play with direct connections, it is surprisingly easy to spin up really fast queries.

Operators include:

  • OrderBy, Filter (by field), Search
  • Logical operators (and, or, not).
  • Comparison expressions (eq, ne, gt, lt, ge, le).
  • any with no parameters. This tests whether a field of type Collection(Edm.String) contains any elements.
  • any and all with limited lambda expression support.
  • Geospatial functions geo.distance and geo.intersects. The geo.distance function returns the distance in kilometres between two points.

See the complete list here.
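For example, reusing the direct connection from Tip 3, a filtered and geo-sorted query might look like this (the field names are illustrative):

// Filter to a particular template and order by distance from Melbourne.
var parameters = new SearchParameters
{
    Filter = "template_s eq 'Article'",
    OrderBy = new[] { "geo.distance(location_g, geography'POINT(144.96 -37.81)')" }
};
var results = IndexClient.Documents.Search("bikes", parameters);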



Q&A

Q) Anything on multiple region setups? Or latency considerations?

A) Multi-region setups: Although I can’t comment from experience, the configuration documentation does state that you can specify multiple Azure Search instances using a pipe separator in the connection string.

<add name="cloud.search" connectionString="serviceUrl=https://searchservice1.search.windows.net;apiVersion=2015-02-28;apiKey=AdminKey1|serviceUrl=https://searchservice2.search.windows.net;apiVersion=2015-02-28;apiKey=AdminKey2" /> 

Unfortunately, the documentation does not go into much detail. It simply states that “Sitecore supports a Search service with geo-replicated scenarios” which one would hope means under the hood it has all the smarts to take care of this.

I’m curious about this as well and opened a stack overflow ticket. Let’s see if anyone else in the community can answer this for us.

Search Latency: 

Search latency can be directly improved by adding more replicas via the scaling settings in the Azure Portal.


Two replicas should be your starting point for an Azure Search instance to support Sitecore. Once you launch your site, you will need to follow the instructions in Tip 4 above and monitor search latency. If the latency graph shows consistent spikes and high latency times above 0.5 seconds, it’s probably time to add some more replicas.


Dynamic Placeholders and Sitecore Powershell

This post gives a detailed example of a Sitecore PowerShell script that automates the process of inserting content components (renderings) with particular content into a number of items.

Scenario:

Recently a client of ours wanted to roll out a promotion content row to all items of a particular template.

  • These items also happened to be bucket items so there were quite a few of them.
  • The components were usually added to the page via the Experience Editor.
  • The components were used in a building blocks manner:
    • Add a full-width Row
    • Add a Two column component inside the row
    • Add an image in the left-hand column
    • Add a rich text component in the right-hand column
  • The components were added via Dynamic Placeholders.

You could achieve this in certain cases by adding to the Standard Values; however, if your items already vary drastically from the Standard Values, you may not get good enough coverage.

So here is the script with lots of comments so that you can follow along.

View the full script in our GitHub Repo here.

Key Takeaways

  1. We found you need to use Add-Rendering and then Get-Rendering in order to get the correct UniqueID of the rendering just added.
    • In order to construct the dynamic placeholder name correctly, you will need the correct UniqueID of the parent rendering just added to the page.
    • The easiest way to facilitate that lookup in Get-Rendering was to give the rendering a unique parameter to identify it:
       -Parameter @{"InNewPromotionRow"=1}
    • Without this, you may get a list of all renderings that match the non-specific lookup you just used in Get-Rendering.
  2. Because of the way Dynamic Placeholders are added to the page this worked best in the Final Layout.
     -FinalLayout
  3. If things go really wrong in your script, it’s best practice to spin up a new item version so that you can roll these changes back via another script.
     Add-ItemVersion -Path $i.Paths.Path -IfExist Append -Language "en"
  4. If you have two dynamic placeholders with the same base name then an additional postfix is used to target the second one. So for instance in our case, we had a left and a right-hand column with dynamic placeholders that use the same base placeholder name. In order to target the right-hand side you use “_1” on the end:
     $colWidePlaceholderRight = "/page-body-meeting/row_$rowId/col-wide_$($twoColumnRenderingId)_1"
  5. Nested dynamic placeholders: The line of code in point 4 above also shows how to construct the placeholder key of a nested dynamic placeholder. In this case, we have a column inside a row. The PowerShell script you use to build your components needs to keep track of the placeholder hierarchy so that you can continue to construct nested dynamic placeholders.



Don’t Forget your Sitecore Caching Strategy

Releasing a scalable Sitecore instance requires an in-depth knowledge of Sitecore’s multi-layered caching architecture. Here is a run-through of what you will need to pull your project’s Sitecore caching strategy together, including tips, tricks and pitfalls.

HTML/Rendering Cache Settings

HTML caching settings have been part of the core Sitecore product for many versions now. It’s worth chatting about these every now and again as they are critically important to the performance of your Sitecore instance.


Indeed, one of the first things we look for when reviewing a project that has performance complaints is whether the Sitecore HTML cache settings have been configured at all. The difference that properly set-up cache settings can have (compared to a site without any) really shouldn’t be underestimated.

There are a lot of blog posts that define the above settings. Here is a good one to get you up to speed. We have also put some information on the various other layers of Sitecore cache at the bottom of this page.

Sample Caching Strategy Document

For the projects I run, I find it useful to have an overall caching strategy page that summarises the settings for every single rendering. This gives us a nice reference point whenever these settings need adjusting to see what might be affected.


Failure to Cache

In our experience, performance problems are usually reported by clients who have no caching settings turned on at all. This can cause the website to react very slowly or even bring the site down in times of heavy traffic.

Sitecore does have other layers of cache that will kick in (data, item and prefetch cache) if you fail to enable HTML caching. HTML caching is the first line of defence; when properly configured it really takes the pressure off all these other areas of caching and prevents the database from getting hit.

Imagine the following scenario for our made up Sitecore client “Bikes R Us”:

  • A page that has a large extended navigation displaying links to 50 other sub-pages across the site.
  • The content of the page contains several rendering components that also contain links to a number of products across various categories.
  • The code to construct this page traverses not only the tree to build the navigation but also numerous product sub-categories to gather all the links.
  • Developer A – has had no proper exposure to caching strategies before and marks the page as done without any HTML caching settings enabled.
  • The site goes live a month later.
  • “Bikes R Us” marketing team starts advertising via EDMs a month later and things go really well. The campaign also goes viral on Social Media with a bike offer too good to refuse.
  • The page that developer A built experiences more traffic than ever expected.
  • Unfortunately, with no caching the code to construct the page is hit again and again.
  • Data layer and Item caching do assist to a point, however, Developer A never increased the default cache limits so calls to other pages are reducing the effectiveness of these layers overall.
  • After a few hours, traffic to the website increases to the point that the server runs out of CPU capacity and starts sending back 500 errors instead of serving up pages.

The scenario above is entirely avoidable when a proper caching strategy is completed as part of the development. Ideally, the caching strategy should be completed as each component of the website is developed and then tested to save on double handling. The caching strategy should then be reviewed, double checked and fully in place before a full performance test is done on the website.

Unfortunately, what often happens on big site builds is the deadline looms and the caching strategy which should be verified before go-live gets forgotten about. Failure to do so causes severe performance issues and leads to the client asking questions a few weeks/months later.

Incorrectly Configured Cache

On the opposite side of the coin, an incorrectly tuned cache can also cause havoc with some areas of the site. Examples of this include Web Forms for Marketers and member portals. The caching of forms or components that contain data related to your members will:

  • Cause forms to behave unexpectedly
  • Potentially show sensitive user data belonging to one user to many other users
  • Cause xDB personalised components to behave in an unexpected manner


XDB and Caching

In general, it’s fairly difficult to turn on caching for components that need to react to personalisation on a per-user basis. The problem is that if your entire homepage is making use of personalisation, you may not be able to cache certain components on that page at all. The inability to cache those components properly means the specifications of the server will need to be ramped up to deal with the additional processing that occurs with each page hit.

The “Vary By User” rendering setting is probably going to help you on personalised components up to a point.

Caching and Performance Testing

Caching is closely related to performance testing, and your overall caching strategy will affect the outcome of these tests. The aim of a performance test is to benchmark what amount of traffic your production environment can handle.

If you’re hosting in the cloud, why not set up your servers to autoscale when needed?

An often-forgotten point is that performance testing should be complemented by stress testing above and beyond your expected traffic requirements. The main aim of this stress test is to identify the breaking point of your production environments so that you have this knowledge for the future. This will help your team to prepare for those extraordinary traffic events.

When it comes to performance/stress testing, there is little point running the test from a single source or development computer. You will be limited by a single network connection’s capacity, and this is not a true test, particularly for those making use of cloud hosting.

We always recommend using a service like BlazeMeter or Azure load tests.

** Thanks to Derek, Aceik’s resident DevOps extraordinaire, for helping me with the above recommendations.

An additional cache setting

It’s worth getting to know each of the HTML/rendering cache settings well, as you will need a detailed knowledge of each of them when looking at your strategy overall. One particular setting we found missing, and that we tend to use regularly, is the ability to vary the cache based only on “Vary By URL”. A member of the team (Jose D) was kind enough to hook this up for us on a recent project. We are happy to share this with the wider community in the hope that you also find it useful for your projects.

Increase the default cache limits

Outrageously, this is also an often-overlooked part of getting your Sitecore project onto production. The performance tuning guide pretty much spells this out for you. You need to increase the default caching sizes that come out of the box with a vanilla Sitecore install. The caching limits provided are appropriate for developer machines but grossly inadequate for production environments, which really need a healthy cache size to be responsive. For instance, out of the box the HTML cache size is 50MB, while on a reasonable production server this should start at 100MB as a baseline. That’s double the default as a minimum starting point.

Take a look at Sitecore’s performance tuning document (Section 4.1) in order to get these settings correct.

Fine Tuning

Configuring the cache correctly for your production server can take some time to get right. You will need to monitor the /sitecore/admin/cache.aspx page.

In order to get these settings right, have a good look at Sitecore’s performance tuning document. Section 4.2 is very important and gives you a guide as to how cache tuning should be performed.

Prefetch Cache

Remember that fine-tuning your site will involve adjusting the items that Sitecore prefetches on startup. Once again the performance tuning document has all the details on how to do this. It’s another important step to get things running smoothly. See the references at the bottom of this article to see how the Pre-fetch cache fits into the overall caching architecture.

Sitecore.Caching.CustomCache

By implementing caching within your code to wrap complex logic, you can save your server a lot of processing effort. Particularly around I/O-intensive code, where a lot of data has to be shifted/filtered/searched, it really is a great idea and worth adding to your Sitecore coding arsenal.

To get up to speed on how to build a custom cache we recommend reading this document.

The main way to achieve your custom cache is to write an implementation of Sitecore.Caching.CustomCache. You can then wrap your logic with the custom cache to prevent the same code being hit every time.

var cacheKey = string.Concat(
    string.Format("MyCustomKey-{0}", Sitecore.Context.Language.Name), ":", filterParam);

var result = this.sitecoreCacheService.GetOrAddToCache(cacheKey, () =>
{
    // ... expensive lookup logic ...
    return "MyDataResult";
});

return result;
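Note that sitecoreCacheService.GetOrAddToCache above is not part of the Sitecore API; it is a helper you write yourself. A hedged sketch of such a custom cache built on Sitecore.Caching.CustomCache (names and sizes are illustrative):

using System;
using Sitecore.Caching;

public class MyCustomCache : CustomCache
{
    // 10 MB max size here; tune it per the performance tuning guide.
    public MyCustomCache(string name) : base(name, 10 * 1024 * 1024)
    {
    }

    public string GetOrAddToCache(string cacheKey, Func<string> fetch)
    {
        // CustomCache exposes protected GetString/SetString helpers.
        var cached = this.GetString(cacheKey);
        if (cached != null)
        {
            return cached;
        }

        var value = fetch();
        this.SetString(cacheKey, value);
        return value;
    }
}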


Cloudflare / Akamai considerations

Many sites rely on a third-party service provider to sit in front of their website to add an additional layer of caching. This is great and helps sites scale to meet demand. It shouldn’t be used as an excuse not to do a caching strategy at all on the Sitecore side.

Remember that pages are likely to sit in the third-party cache only for a certain period of time. So, if your site has thousands of content pages that are each only accessed semi-regularly, users will bypass the third-party cache altogether. In these cases, the Sitecore cache becomes the next line of defence.

With regard to caching and Cloudflare: the cache will only kick in on the media library and your Web API endpoints if the Cache-Control header is set to public and given a valid max-age.

  1. For your Web API endpoints, we found it handy to use the attribute mentioned in this Stack Overflow page (see CacheControlAttribute.cs); a sketch follows below.
  2. For media library URLs you need to enable:

<!--
  MEDIA RESPONSE - CACHEABILITY
  The HttpCacheability is used to set media response headers.
  Possible values: NoCache, Private, Public, Server, ServerAndNoCache, ServerAndPrivate
  Default value: public
-->
<setting name="MediaResponse.Cacheability" value="public" />
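Here is a hedged sketch of the kind of Web API attribute referred to in point 1 (not the exact Stack Overflow code; adapt to your needs):

using System;
using System.Net.Http.Headers;
using System.Web.Http.Filters;

public class CacheControlAttribute : ActionFilterAttribute
{
    // Max-age for the response, in seconds.
    public int MaxAgeSeconds { get; set; }

    public override void OnActionExecuted(HttpActionExecutedContext context)
    {
        if (context.Response != null)
        {
            // Public + max-age allows Cloudflare (and browsers) to cache the response.
            context.Response.Headers.CacheControl = new CacheControlHeaderValue
            {
                Public = true,
                MaxAge = TimeSpan.FromSeconds(MaxAgeSeconds)
            };
        }

        base.OnActionExecuted(context);
    }
}

// Usage on a Web API action (hypothetical):
// [CacheControl(MaxAgeSeconds = 300)]
// public IHttpActionResult Get() { ... }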


Disable Caching on CM, Enable on CD

Remember to disable HTML caching on CM environments, as it may cause issues with the Experience Editor and Preview modes.

  • Set cacheHtml="false" on your CM servers’ <site> node.

You can also disable the media cache on CM servers so that content editors never get cached images:

<?xml version="1.0"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
 <sitecore role:require="Standalone OR ContentDelivery OR ContentManagement OR Processing">
 <settings>
 <!--
 CACHING ENABLED
 Determines if caching should be enabled at all
 Specify 'true' to enable caching and 'false' to disable all caching
 -->
 <setting patch:instead="*[@name='Media.CachingEnabled']" role:require="Standalone OR ContentManagement" name="Media.CachingEnabled" value="false" />
 <setting patch:instead="*[@name='Media.CachingEnabled']" role:require="ContentDelivery" name="Media.CachingEnabled" value="true" />
 </settings>
 </sitecore>
</configuration>


Note: 

  • Don’t change the setting called “Caching.Enabled” on CM servers.

Reference Material:

Understanding the cache layers

The following is taken from http://learnsitecore.cmsuniverse.net/Developers/Articles/2009/07/CachingOverview.aspx

[Diagram: the Sitecore caching layers]

Definitions:

These definitions are described in the following stack overflow post:

Prefetch cache

This is item data pulled out from the database when the site starts up – from the Sitecore docs:

“Each database prefetch cache entry represents an item in a database. Database prefetch cache entries include all field values for all versions of that item, and information about the parent and children of the item.

Populating the prefetch cache results in smoother user experiences immediately after application restarts. Excessive use of prefetch caches can affect the time required for application initialization.”

Data cache

This cache minimises round trips to the database. It again pulls item information from Sitecore, but the difference is that it does so when the item is requested (rather than at site start-up); it will pull the data from the prefetch cache if it’s there, or go back to the database if not.

Item cache

This cache holds objects of type Sitecore.Data.Items.Item, which are used in code; when an item is requested in code it will look in the item cache first, then fall back to the data cache, then the prefetch cache and finally the database.

HTML cache

This output caches the HTML from sublayouts and renderings; there is a nice level of configuration available to only cache the HTML based on query strings, different data, etc.



JS & CSS Minification an Alternative Helix Approach

On a recent project, we were looking at a way to meet a number of criteria to improve page load times.

With regard to CSS and JS requests these criteria were:

  • Reduce the number of CSS and JS HTTP requests on page load.
  • Reduce the size of CSS and JS files (minification)

At the same time, we were running a Sitecore Helix project and we needed this to seamlessly fit into our project and CI builds.

In the past, I had also done some Umbraco development and was familiar with a nice little package called “ClientDependency Framework (CDF) by Shannon Deminick”  that comes out of the box with that particular CMS.

CDF will cut down your server requests in a few ways:

  • Combining  (It combines multiple JS files into a single server call on the fly)
  • compressing
  • minifying output
  • Composition
  • Caching (Processing of files into a composite file it cached)
  • Persisting the combined composite files for increased performance when applications restart or when the Cache expires

It also has a development mode where asset includes are not touched, so that you can debug as usual.

The problem we had to solve was how to make the above package work in a way that it could plug and play with the Helix way of doing things.

The Habitat (Helix example) project provides a way to include asset files via a global theme and also at an individual rendering level.

Global themes should be used to load assets that are to be applied across a whole website.

Rendering level assets should be used to load css and javascript that only apply directly to that particular rendering.

The current habitat solution contains a module called “Assets” that lives in the foundation layer. This module will cleverly round up all the assets that need to be rendered on the page by the Helix Architecture.

In order to use ClientDependency Framework (CDF) and seamlessly plug it into the Assets module, we have provided two new layout tags for use:

Usage in Layouts
- @CompositeAssetFileService.Current.RenderStyles()
- @CompositeAssetFileService.Current.RenderScript(ScriptLocation.Body)

The above integration is currently on a pull request (waiting approval) on the main Helix Habitat GitHub  repository:

See: https://github.com/Sitecore/Habitat/pull/349

It will remain available on Aceik’s fork of the habitat project. 

Setup Steps: 

To use this service you will need to rename the following and run the gulp build:
  • Rename “App_Config/ClientDependency.config.disabled” to “App_Config/ClientDependency.config”.
  • Rename “\src\Foundation\Assets\code\Web.config.transform.ClientDependency.minification.example” to “Web.config.transform”.

Once running (not in debug mode) you will see asset includes like the following in the HTML source.

/DependencyHandler.axd/bdc200f5bb6df7e817066f4b98499322/12/js

A note about Cloudflare and minification:

If you’re using Cloudflare in front of your website, JS and CSS minification can be turned on as a feature in Cloudflare. You can also reduce the number of server requests by using a feature called Rocket Loader.


Understanding Form Submission Tracking (WFFM and xDB)

This blog is targeted at a marketing audience that may be wondering how to interpret the WFFM tracking metrics, as shown in the form reports.

Definitions of metrics as proposed by Sitecore:

  • Visits – the total number of visitors who visited the page containing the web form.
  • Submission attempts – the total number of times that visitors clicked the submit button.
  • Dropouts – the total number of times that visitors filled in form fields but did not submit the form.
  • Successful submissions – the total number of times that visitors successfully submitted the web form and data was collected.


If you’re trying to test the above values by doing form submissions yourself and you don’t fully understand what is happening in the background, it will get confusing very quickly. Indeed some clients have asked me to investigate the tallies above as they believed the report was broken when some of the numbers started declining.

You really need to understand that just closing your browser when trying to affect the above metrics doesn’t fool xDB into thinking you’re a different user.

Thanks to xDB cookies, the values shown next to the metrics above can actually be reassigned. What I mean by this is that if a user is recorded as a dropout and then returns a few hours later to re-submit the form, the tally next to dropouts will actually reduce by one and that value will be added to successful submissions. If you’re trying to test the tally for correctness, a drop in some of these counts will leave you scratching your head.

The only guaranteed way for a user to be treated as a unique visitor is to clear your cookies and browser temp data.

A good way to perform these tests is to use Incognito mode in Chrome, as this will prevent cookies from being stored.


This provides us with an explanation as to why you might see some of the tallies decrease, which makes sense when you realise that dropouts can be converted to successful submissions. How then do we explain when submission attempts drop in number?

Running a few tests on this reveals that when a user is converted from a drop-out to a successful submission, the submission attempts recorded against this user are also adjusted down.


The main takeaway from this is that xDB is clever and knows when the same user returns to a form.  It adjusts the tallies in the form report accordingly.

If these tallies confuse your marketing department and it simply wants to track the exact number of times a form was submitted (same user or not), you could achieve this independently by tracking the number of times the thank-you page is loaded.



Sitecore Helix: Lets Talk Layers

Here are some notes from the decision-making process our team uses with regard to what goes where and in which layer. Of course, the helix documentation does go over the guidelines but it’s not until you start working with the architecture that things begin to become clear.

Project Layer

Definition: The Project layer provides the context of the solution. This means the actual cohesive website or channel output from the implementation, such as the page types, layout and graphical design. It is on this layer that all the features of the solution are stitched together into a cohesive solution that fits the requirements.

Comment: The project layer is probably the most straightforward layer to understand. In our project, the modules in this layer remain lightweight and mostly contain Razor view files that allow content editors to build up the HTML structure of the pages.

The website content (under home), page templates and component templates are serialized by Unicorn and also live in this layer.

It’s also worth mentioning one particular gotcha you may hit in development to do with template inheritance and you can read more in this blog post.

Feature or Foundation?

Feature Definition: The Feature layer contains concrete features of the solution as understood by the business owners and editors of the solution, for example news, articles, promotions, website search, etc. The features are expressed as seen in the business domain of the solution and not by technology, which means that the responsibility of a Feature layer module is defined by the intent of the module as seen by a business user and not by the underlying technology. Therefore, the module’s responsibility and naming should never be decided by specific technologies but rather by the module’s business value or business responsibility.

Discussion: For our feature modules, we aimed for single concrete features that are independent of each other. They may contain views, templates, controllers, renderings, configuration changes and related business logic code to tie it all together. The point is to always stick to the rule: “Classes that change together are packaged together”.

When building feature modules, it’s also very handy to think about the feature modules removal as you build it. Keep asking yourself how easy would it be to roll back this module and what would I need to do. Doing so will help you to keep those dependencies under control.

Foundation Definition: The lowest-level layer in Helix is the Foundation layer, which as the name suggests forms the foundation of your solution. When a change occurs in one of these modules it can impact many other modules in the solution. This means that these modules should be the most stable in your solution in terms of the Stable Dependencies Principle.


Discussion: We found that our foundation modules usually consist of frameworks or code that provide a structural functionality to support the web application as a whole. Each foundation module may be used by multiple feature modules to provide them with the support they need to run properly. Our foundation modules contain API calls, configuration, ORM structures (Glass Mapper), initialisation code, interfaces and abstract base classes.

An important point is that unlike feature layer modules, the foundation layer modules can have dependencies to other foundation layer modules. If this was not the case it would be very difficult to construct the foundation layer in the first place.

For the most part, the team can make some fairly quick decisions about what goes where in the initial project planning. And what goes where is fairly obvious after you get familiar with the habitat example project. The main dilemma you’re going to encounter is around where your repositories and services (key business logic) might need to sit.

Business Logic: What goes where! Help!

Let’s consider the definitions above, they seem straightforward enough. However, in agile projects where things may change rapidly or requirements are not immediately clear (which happens a lot), you’re inevitably going to need to make some judgment calls.

What am I talking about with the above statement? Well, let’s say one developer codes up a feature module at the beginning of the project. At first, it seems like that particular portion of code is only required by that particular feature. Down the track, a requirement surfaces whereby the same business logic needs to be used in another feature module. Helix rules dictate:

  • Classes that change together are packaged together.
  • No dependencies between feature projects should exist.

A lesser developer may be tempted at this point to duplicate the code in both feature modules to get the job done quickly. This, however, breaks some fairly important fundamental coding standards that many of us try to stick to. Step back and consider the technical debt that duplicate code leaves behind versus dependencies between your Helix feature modules.

The solution to this dilemma: it’s time to refactor that feature logic and move it into the foundation layer, after which any feature module that needs the same code can reference it in the foundation layer.

Remember “with great power comes great responsibility” and this is especially true when touching code in the foundation layer. The code you touch in the foundation layer should be highly stable and well tested as the helix guidelines suggest.

Was the original decision a mistake?

On the flip side of the coin, it’s worth considering that it wasn’t a mistake to put that piece of code in the feature layer to start with. Technically, if no one else needed to use the code at the time and it was reasonably unforeseen that anyone else would need to use it, then it probably was the correct call.

Accept that things may change

The team members on your helix project will need to be flexible and accepting of change. I think it’s worth being prepared for some open discussions within your team about what goes in the foundation layer and what goes in the feature layer. It’s certainly going to be open to interpretation and a topic of debate. A team that can work together and be open to a change of direction within their code structure will help the code base stay within the helix guidelines as the project evolves.

Good luck!



Helix Template Inheritance

During Sitecore Helix development it’s important to understand the role that template inheritance plays in keeping your dependencies in check.

The current conventions shown in the Habitat example site are to:

  • Create your template fields in modules within the foundation and feature layers, with template names starting with a ‘_’.
  • Inside the project layer, create appropriate templates that inherit from the feature and foundation layer templates.

Each project layer template should inherit from one or more feature / foundation layer templates.

We believe the benefits of sticking to this approach are as follows:

  • Your project layer templates can be composed of template fields from multiple modules in other layers, with the potential for page templates to contain the functionality of multiple modules if need be.
  • Your content tree does not directly rely on feature and foundation layer templates, making their removal down the track easier if need be.

Data Source Locations

Helix introduces the concept of Data Source Locations which are documented on this page.


Data source locations are very useful for supporting multi-tenant, multi-site solutions. You don’t have to use them, however using them does future proof your Helix solution with the ability to add additional tenants down the track.

The datasource location and template resolution have been extended in the Habitat project. This means that it is also possible to define datasource templates and locations for each site, in addition to on the rendering itself. This is done through an extension of the getRenderingDatasource pipeline and the addition of a site: prefix to the Datasource Location field.

Site: prefix


The above IFrame rendering uses the syntax site:iframe which via the getRenderingDatasource pipeline will look for a data source location within the site called ‘iframe’.

This is great for multi-tenant situations, but it also means that the Helix dependency rules are not broken. Pointing the data source location field of your rendering directly at a folder within a website would technically break the Helix dependency rules. This is a very important point and one that might be missed on a first pass through the Helix documentation.

The trap of using Feature Templates

Developers that are new to Helix may not realise that any feature layer template should have a matching template in the project layer that inherits from the feature layer.


A follow-on effect from this is that you may skip the creation of a project layer template altogether and simply use the feature layer template in the “Datasource Template” field.

Technically the above dependency is not incorrect. Habitat has one more surprise in store for you, however. Once you start to add content blocks to your page based on the data source locations, your insert options for the local data sources folder start to get automatically populated.


So for each unique component you add to the page via the Experience Editor, you’re going to get the template of that component in the insert options.

Once again, this dependency is not technically incorrect, although it will probably have your testers asking why strangely named templates are showing up in the insert options. The major drawback we could think of (as mentioned earlier in the article) is that your content items will be highly dependent on that particular feature module.

  • You will have multiple dependencies between content items and the feature modules.
  • Attempting to remove the feature layer will disrupt all of those content items directly.

On the flip side, using the project template instead means we have a single (or reduced) point of removal for that feature module within the project templates.

The main takeaway from running into the above mistake (using the feature template in our data source locations) was that we should be pointing the data source locations at a project template instead.

  • Always create a project layer template that inherits from a feature/foundation module template.
  • Ideally, content items shouldn’t reference feature layer templates directly, even though this doesn’t break the Helix dependency rules.