
Sitecore Azure Search: Top 10 Tips

It’s been a while since I first wrote about Azure Search, and we now have a few more tips and tricks on how to optimise Azure Search implementations.

Before proceeding, if you missed our previous posts, check out some tools we created for Azure Search Helix setup and Geo-Spatial Searching.

Also, check out the slides from our presentation at last year’s Melbourne Sitecore User Group.

OK, let’s jump into the top 10 tips:

Tip 1) Create custom indexes for targeted searching

The default out-of-the-box indexes attempt to cover just about everything in your Sitecore databases; they do so to support the Sitecore CMS UI searches. There is nothing wrong with using the default indexes (web, master) for your searches, but for optimal queries and faster re-indexing times a custom index will help performance.

By stepping back and looking at the different search requirements across the site you can map out your custom indexes and the data that each will require.

Consider also that if the custom indexes need to be used across multiple Feature Helix modules, the configuration files and search repositories may need to live in an appropriate Foundation module. More about Feature vs Foundation can be found here.
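To make the shape concrete, here is a rough sketch of a custom index definition for the Azure provider. The index id, crawler root and site name below are hypothetical placeholders, and the exact configuration reference may vary with your Sitecore version:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <contentSearch>
      <configuration>
        <indexes>
          <!-- "custom_products_index" and the crawler root are placeholder values -->
          <index id="custom_products_index" type="Sitecore.ContentSearch.Azure.CloudSearchProviderIndex, Sitecore.ContentSearch.Azure">
            <param desc="name">$(id)</param>
            <param desc="connectionStringName">cloud.search</param>
            <configuration ref="contentSearch/indexConfigurations/defaultCloudIndexConfiguration" />
            <strategies hint="list:AddStrategy">
              <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/onPublishEndAsync" />
            </strategies>
            <locations hint="list:AddCrawler">
              <crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
                <Database>web</Database>
                <!-- Crawl only the subtree this feature actually searches -->
                <Root>/sitecore/content/MySite/Products</Root>
              </crawler>
            </locations>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>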

Tip 2) Keep your indexes lean

This tip follows on from Tip 1.

Essentially the default Azure Search configuration out of the box will have:

<indexAllFields>true</indexAllFields>

This can include a lot of fields, and you’re probably not going to need every single Sitecore field in order to present the user with meaningful data on the front-end interfaces.

The other option is to specify only the fields that you need in your indexes:

<include hint="list:IncludeField">
  <Text>{A60ACD61-A6DB-4182-8329-C957982CEC74}</Text>
</include>

The end result limits the amount of JSON payload that needs to be sent across the network, and also the amount of payload the Sitecore Azure Search Provider needs to process.

If you are returning thousands of search results in particular, you can see what happens when “IndexAllFields” is on via Fiddler.

This screenshot is via a local development machine and an Azure Search instance at the Microsoft hosting centre.

(Screenshots: the Fiddler trace and the JSON fields returned.)

So for a single query, “IndexAllFields” can result in:

  • A JSON payload of 2 MB plus.
  • Document results with all Sitecore metadata included; that could be around 100 fields.

If your query returns document counts in the thousands, the payload will obviously grow rapidly. By reducing the fields in your indexes (removing unnecessary data) you can speed up query, transfer and processing times and get the data displayed quicker.

Tip 3) Make use of direct Azure connections

Sitecore has done a lot of the heavy lifting in the Sitecore Azure Search Provider; it’s a bit like a wrapper that does all the hard work for you. In some cases, however, you may find that writing your own queries that connect via the Azure Search DLL gives you better performance.
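As a minimal sketch of what a direct query looks like using the Microsoft.Azure.Search NuGet package (the service name, index name, API key and field names below are all placeholders):

using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class DirectAzureSearch
{
    // Reuse one client per index rather than creating one per query (see Tip 8).
    private static readonly ISearchIndexClient IndexClient = new SearchIndexClient(
        "my-search-service", "custom_products_index", new SearchCredentials("QUERY-API-KEY"));

    public static void Run()
    {
        var parameters = new SearchParameters
        {
            Top = 20,
            Select = new[] { "title_t" } // only request the fields you need
        };

        DocumentSearchResult results = IndexClient.Documents.Search("mountain bikes", parameters);
        foreach (SearchResult result in results.Results)
        {
            Console.WriteLine(result.Document["title_t"]);
        }
    }
}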

Tip 4) Monitor performance via Azure Search Portal

It’s really important to monitor your Azure Search Instance via Azure Portal. This will give you critical clues as to whether your scaling settings are appropriate.

In particular, look out for high latency times, as these indicate that your search queries are getting throttled. As a result, you may need to scale up your Azure Search instance.

In order to monitor your latency times:

  1. Log in to the Azure Portal.
  2. Navigate to your Azure Search instance.
  3. Click on Metrics in the left-hand navigation.
  4. Select the “Search Latency” checkbox and scan over the last week.
  5. You will see some peaks; these usually indicate heavy periods of re-indexing, during which the Azure Search instance is under heavy load. As long as your peaks stay under the 0.5-second mark, you’re OK. If you see search latency up in the 2-second range, you probably need to either adjust how your indexes are used (caching and re-indexing) or scale up to avoid the flow-on effects of slow search.

Tip 5) Cache Wrappers

In the code that uses Azure Search, it is advisable to use cache wrappers around the searches where possible. For your most common searches, this prevents Azure Search from getting hit repeatedly with the same query.

For a full example of a cache wrapper, check out the section titled Sitecore.Caching.CustomCache in my previous blog post.
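As an illustrative sketch of the pattern, here is a generic stand-in using System.Runtime.Caching (the linked post covers the more idiomatic Sitecore.Caching.CustomCache approach):

using System;
using System.Runtime.Caching;

public static class SearchCacheWrapper
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached result for this key, or runs the search and caches what it returns.
    public static T GetOrAdd<T>(string cacheKey, Func<T> search, TimeSpan expiry) where T : class
    {
        if (Cache.Get(cacheKey) is T cached)
        {
            return cached;
        }

        var result = search();
        if (result != null)
        {
            Cache.Set(cacheKey, result, DateTimeOffset.UtcNow.Add(expiry));
        }

        return result;
    }
}

A search repository would then wrap its Azure Search call in SearchCacheWrapper.GetOrAdd with a key that includes the language and any filter parameters, so identical queries within the expiry window never reach Azure Search.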

Tip 6) Disable Indexing on CD

This is a hot tip that we got from Sitecore Support when we started to encounter high search latency during re-indexing.

Most likely, your production setup will have a single Azure Search instance shared between the CM and CD environments.

You need to factor in that CM should be the server that controls re-indexing (writing), and CD will most likely be the server doing the queries (reading).

Re-indexing is triggered via the event queue, and every server subscribes and reacts to these events. With the out-of-the-box search configuration, each server will cause the Azure Search indexes to be updated. In a shared Azure Search (or Solr) instance, the index only needs to be updated by a single server; each additional re-index is overkill and just doubles up the re-indexing workload.

You can therefore adjust the configuration on the CD servers so that they do not trigger re-indexing.

The trick is to use configuration roles in your index configuration files to specify the indexing strategy on each server:

<strategies hint="list:AddStrategy">
  <!-- NOTE: the order of these controls the execution order -->
  <strategy role:require="Standalone OR ContentManagement" ref="contentSearch/indexConfigurations/indexUpdateStrategies/onPublishEndAsync"/>
  <strategy role:require="ContentDelivery" ref="contentSearch/indexConfigurations/indexUpdateStrategies/manual"/>
</strategies>

Setting the index update strategy to manual on your CD servers will take a big load off your remote indexes.

This is particularly important if you have multiple CD servers using the same indexes: without the above setting, each additional CD server would cause additional updates to the index.

Tip 7) Rigid Indexes – Have a deployment plan

If your deployment includes additions and changes to the indexes and you need 100% availability of search data, a deployment plan for re-indexing will be required.

Grant chatted about the problem in his post here. To get around it, you could consider using the blue/green paradigm during deployments.

  • This would mean having a set of blue indexes and a set of green indexes.
  • Using slot swaps for your deployments.
    • One slot points to green in configuration.
    • One slot (production) points to blue in configuration.
  • To save on costs, you could decommission the staging slot between deployments.

Tip 8) HttpClient should be a singleton or static

The basic idea here is that you should keep the number of HttpClient instances in your code to an absolute minimum if you want optimal performance.

The Sitecore Azure Search provider actually spins up two HttpClient connections for every single index. This in itself is not ideal, and unfortunately there is not a lot you can do about it in the core product.

For your own connections to other APIs, however, remember that HttpClient.SendAsync is perfectly thread safe.

By using HttpClient singletons you stand to gain big in the performance stakes. One great blog article worth reading runs through the performance benefits.

It’s also worth noting that in the Azure Search documentation Microsoft themselves say you should treat HttpClient as a singleton.
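A minimal sketch of the singleton pattern for your own API calls (the endpoint URL is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared instance for the lifetime of the application.
    // HttpClient is designed to be reused, and its async send methods are thread safe.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> GetWidgetsAsync()
    {
        var response = await Client.GetAsync("https://api.example.com/widgets"); // placeholder endpoint
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}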

Tip 9) Monitor your resources

In Azure Web Apps you have finite resources within your App Service plan. Opening multiple connections with HttpClient and not disposing of them properly can have severe consequences.

For instance, we found a bug in the core Sitecore product caused by the connection retryer: whenever we hit our Azure Search plan usage limits, it held ports open forever. The result was that we hit the outbound open connection limits for sockets, and this caused our Sitecore instance to grind to a halt.

Sitecore has since resolved the issue mentioned above after a lengthy investigation working alongside the Aceik team. This was tracked under reference number 203909.

To monitor the number of open sockets in Azure, we found a nice page on the MSDN site.

Tip 10) Make use of OData Expressions

This tip relates strongly to Tip 3. Azure Search has some really powerful OData expressions that you can make use of via a direct connection. Once you have had a play with direct connections, it is surprisingly easy to spin up really fast queries.

Operators include:

  • OrderBy, Filter (by field), Search
  • Logical operators (and, or, not).
  • Comparison expressions (eq, ne, gt, lt, ge, le).
  • any with no parameters. This tests whether a field of type Collection(Edm.String) contains any elements.
  • any and all with limited lambda expression support.
  • Geospatial functions geo.distance and geo.intersects. The geo.distance function returns the distance in kilometres between two points.

See the complete list here.
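To give a feel for the syntax, here is a hedged sketch reusing the direct-connection client from the Tip 3 example (the field names and coordinates are placeholders):

using Microsoft.Azure.Search.Models;

// In-stock items under $500 within roughly 50 km of Melbourne, newest first.
var parameters = new SearchParameters
{
    Filter = "instock_b eq true and price_d le 500 and " +
             "geo.distance(location, geography'POINT(144.9631 -37.8136)') lt 50",
    OrderBy = new[] { "created_dt desc" }
};

var results = IndexClient.Documents.Search("bike", parameters);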


Q&A

Q) Anything on multiple region setups? Or latency considerations?

A) Multi-region setups: although I can’t comment from experience, the configuration documentation does state that you can specify multiple Azure Search instances using a pipe separator in the connection string:

<add name="cloud.search" connectionString="serviceUrl=https://searchservice1.search.windows.net;apiVersion=2015-02-28;apiKey=AdminKey1|serviceUrl=https://searchservice2.search.windows.net;apiVersion=2015-02-28;apiKey=AdminKey2" /> 

Unfortunately, the documentation does not go into much detail. It simply states that “Sitecore supports a Search service with geo-replicated scenarios”, which one would hope means that under the hood it has all the smarts to take care of this.

I’m curious about this as well and have opened a Stack Overflow question. Let’s see if anyone else in the community can answer it for us.

Search Latency: 

Search latency can be directly improved by adding more replicas via the scaling settings in the Azure Portal.


Two replicas should be your starting point for an Azure Search instance supporting Sitecore. Once you launch your site, follow the instructions in Tip 4 above to monitor search latency. If the latency graph shows consistent spikes and high latency times above 0.5 seconds, it’s probably time to add some more replicas.


Dynamic Placeholders and Sitecore Powershell

This post gives a detailed example of a Sitecore PowerShell script that automates the process of inserting content components (renderings) with particular content into a number of items.

Scenario:

Recently a client of ours wanted to roll out a promotion content row to all items of a particular template.

  • These items also happened to be bucket items so there were quite a few of them.
  • The components were usually added to the page via the Experience Editor.
  • The components were used in a building blocks manner:
    • Add a full-width Row
    • Add a Two column component inside the row
    • Add an image in the left-hand column
    • Add a rich text component in the right-hand column
  • The components were added via Dynamic Placeholders.

You could achieve this in certain cases by adding to the Standard Values; however, if your items already vary drastically from the Standard Values, you may not get good enough coverage.

So here is the script, with lots of comments so that you can follow along.

View the full script in our GitHub Repo here.

Key Takeaways

  1. We found you need to use Add-Rendering and then Get-Rendering in order to get the correct UniqueID of the rendering just added.
    • In order to construct the dynamic placeholder name correctly, you will need the correct UniqueID of the parent rendering just added to the page.
    • The easiest way to facilitate that lookup in Get-Rendering was to give the rendering a unique parameter to identify it:
       (-Parameter @{"InNewPromotionRow"=1;
    • Without this, you may get a list of all renderings that match the non-specific lookup you just used in Get-Rendering.
  2. Because of the way Dynamic Placeholders are added to the page this worked best in the Final Layout.
     -FinalLayout
  3. If things go really wrong in your script, it’s best practice to spin up a new item version so that you can roll the changes back via another script.
     Add-ItemVersion -Path $i.Paths.Path -IfExist Append -Language "en"
  4. If you have two dynamic placeholders with the same base name, an additional postfix is used to target the second one. For instance, in our case we had a left and a right-hand column with dynamic placeholders that use the same base placeholder name. In order to target the right-hand side you use “_1” on the end:
     $colWidePlaceholderRight = "/page-body-meeting/row_$rowId/col-wide_$($twoColumnRenderingId)_1"
  5. Nested Dynamic Placeholders: The line of code in point 4 above also shows how to construct the placeholder key of a nested dynamic placeholder. In this case, we have a column inside a row. The PowerShell script you use to build your components needs to keep track of the placeholder hierarchy so that you can continue to construct nested dynamic placeholders.

Aceik’s Jason Horne Wins Sitecore “Most Valuable Professional” Award 2018

Elite distinction awarded for exceptional contributions to the Sitecore ecosystem

Melbourne, Victoria, Australia — January 31, 2018 — Aceik – Sitecore Specialists, today announced that Jason Horne, Director and Chief Architect has been named a “Most Valuable Professional (MVP)” in Technology by Sitecore®, the global leader in experience management software. Jason Horne was one of only 208 Technology MVPs worldwide to be named a Sitecore MVP this year.

Now in its 12th year, Sitecore’s MVP program recognizes individual technology, strategy, and commerce advocates who share their Sitecore passion and expertise to offer positive customer experiences that drive business results. The Sitecore MVP Award recognizes the most active Sitecore experts from around the world who participate in online and offline communities to share their knowledge with other Sitecore partners and customers.

Aceik has a very close and strong relationship with Sitecore. We are very honored to be given this award in recognition of our expertise and contributions to Sitecore. Aceik is highly active in the Sitecore community providing thought leadership, presentations, workshops, open source modules, blog posts, co-founding and managing the Sitecore Melbourne user group and more. Through our deep engagement in the Sitecore community, Aceik stays on the cutting edge allowing us to provide the best advice and outcomes for our clients.

Aceik was founded in 2013 by Jason Horne as a Sitecore specialist consultancy (About Us, Services). Aceik is a one-of-a-kind Australian Sitecore partner. We focus on Sitecore, .NET, Azure, and Agile; keeping this dedicated and deliberate focus allows us to stay on top of a very fast-paced industry. We strive to provide excellence in every Sitecore project we undertake.

Some things that make us unique include:

  • 100% of our work is Sitecore related.
  • We guarantee Sitecore experts on every project.
  • Speak directly with our technical team.
  • We care about all our clients; they are our biggest advocates.
  • We are responsive and enthusiastic about all projects, big or small.
  • We deliver quality solutions at the highest level, honestly and transparently.

“The Sitecore MVP awards recognize and honor those individuals who make substantial contributions to our loyal community of partners and customers,” said Pieter Brinkman, Sitecore Senior Director of Technical Marketing. “MVPs consistently set a standard of excellence by delivering technical chops, enthusiasm, and a commitment to giving back to the Sitecore community. They truly understand and deliver on the power of the Sitecore Experience Platform to create personalized brand experiences for their consumers, driving revenue and customer loyalty.”

The Sitecore Experience Platform™ combines web content management, omnichannel digital delivery, insights into customer activity and engagement, and strategic digital marketing tools into a single, unified platform. Sitecore Experience Commerce™ 9, released in January 2018, is the only cloud-enabled platform that natively integrates content and commerce so brands can fully personalize and individualize the end-to-end shopping experience before, during, and after the transaction. Both platforms capture in real time every minute interaction—and intention—that customers and prospects have with a brand across digital and offline channels. The result is that Sitecore customers are able to use the platform to engage with prospects and customers in a highly personalized manner, earning long-term customer loyalty.

Aceik’s mission is to deliver quality solutions honestly and transparently. Through our deep engagement in the Sitecore community and our singular focus, we provide a substantial return on investment to our clients with every engagement. We believe that clients and vendors can work together as a single team to deliver outstanding results as true partners in a common goal.

Please feel free to get in touch:
Jason Horne, Founder and Sitecore specialist
Aceik
jasonhorne@aceik.com.au
0426971867

More information can be found about the MVP Program on the Sitecore MVP site: http://www.sitecore.com/mvp


Downgrading Helix modules from Sitecore 8.2 to 7.2

Recently I had to update a Sitecore 7.2 site to use the Helix architecture and bring some foundation modules in from a Sitecore 8.2 site. There were several challenges with this process. Here are a few highlights:

Dependency Injection using Castle Windsor

For this site, Castle Windsor was being used for dependency injection.  In the Sitecore 8.2 site, dependencies are registered using a configurator in a .config patch.  This is not supported in Sitecore 7.2, so instead we have to do this in the Application_Start event in global.asax.cs.

In order to be able to use the same DI container throughout the code, I created a singleton container object (ContainerManager.Container):

public static class ContainerManager
{
    private static IWindsorContainer _container;

    public static IWindsorContainer Container
    {
        get
        {
            if (_container != null) return _container;

            // Create the container once and install all registrations found in this assembly.
            _container = new WindsorContainer();
            _container.Install(FromAssembly.This());
            return _container;
        }
    }
}

Then in global.asax.cs, I used that container to register the dependencies:

protected void Application_Start(object sender, EventArgs e)
{
    _container = ContainerManager.Container;
    _container.Install(new RegisterGlassDependencies());
    ...

An example of the installer class for registering dependencies is shown below:

public class RegisterGlassDependencies : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(Component.For<ISitecoreContext>().ImplementedBy<SitecoreContext>());
        container.Register(Component.For<IGlassHtml>().ImplementedBy<GlassHtmlTemp>());
        container.Register(Component.For<IGlassFactory>().ImplementedBy<GlassFactory>());
    }
}
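With the installer in place, other code can resolve its dependencies from the same singleton container, for example:

// Resolve a registered service from the shared container.
var sitecoreContext = ContainerManager.Container.Resolve<ISitecoreContext>();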

In order for injected constructor parameters to resolve for Controllers, it is necessary to use a custom controller factory which uses the DI container to resolve the controller. There’s a fair bit of overriding going on, so here it comes.

Code for the controller factory shown below:

public class WindsorControllerFactory : DefaultControllerFactory
   {
       private readonly IWindsorContainer _container;
 
       public WindsorControllerFactory(IWindsorContainer container)
       {
           _container = container;
       }
 
       public override void ReleaseController(IController controller)
       {
           _container.Release(controller);
       }
 
       public override IController CreateController(RequestContext requestContext,string controllerName)
       {
           Assert.ArgumentNotNull(requestContext, "requestContext");
           Assert.ArgumentNotNull(controllerName, "controllerName");
           Type controllerType = null;
 
           if (TypeHelper.LooksLikeTypeName(controllerName))
           {
               controllerType = TypeHelper.GetType(controllerName);
           }
 
           if (controllerType == null)
           {
               controllerType = GetControllerType(
                   requestContext,
                   controllerName);
           }
 
           if (controllerType != null)
           {
               return (IController)_container.Resolve(controllerType);
           }
 
           return base.CreateController(requestContext, controllerName);
       }
   }

A pipeline was added with an “instead” patch for Sitecore.Mvc.Pipelines.Loader.InitializeControllerFactory. This pipeline scans all assemblies, gets all classes based on IController, and registers them with ContainerManager.Container. Code shown below:

public class InitializeWindsorControllerFactory
   {
       public virtual void Process(ScapiPipelineArgs args)
       {
           SetupControllerFactory(args);
       }
 
       protected virtual void SetupControllerFactory(ScapiPipelineArgs args)
       {
           var container = ContainerManager.Container;
 
           //TODO: don't use hard-coded filter string
           var assemblies = GetAssemblies.GetByFilter("MyAssembly.*").Where(n => !n.FullName.StartsWith("MyAssembly.Service"))
               .Where(a => GetTypes.GetTypesImplementing<IController>(a).Any(x => x.Namespace != null && x.Namespace.StartsWith("MyNamespace")));
 
           foreach (var assembly in assemblies)
           {
               container.Register(Classes.FromAssembly(assembly).BasedOn<IController>().LifestyleTransient());
           }
 
           var controllerFactory = new WindsorControllerFactory(container);
 
           var scapiSitecoreControllerFactory = new
               ScapiSitecoreControllerFactory(controllerFactory);
 
           ControllerBuilder.Current.SetControllerFactory(scapiSitecoreControllerFactory);
       }
   }

Another pipeline was added with an “instead” patch for Sitecore.Mvc.Pipelines.Response.GetRenderer.GetControllerRenderer.  This pipeline allows us to use a custom controller renderer, as shown below:

public class GetControllerRenderer : Sitecore.Mvc.Pipelines.Response.GetRenderer.GetControllerRenderer
   {
       protected override Renderer GetRenderer(Rendering rendering, Sitecore.Mvc.Pipelines.Response.GetRenderer.GetRendererArgs args)
       {
           var renderer = base.GetRenderer(rendering, args);
           return !(renderer is ControllerRenderer) ? renderer : new CustomControllerRenderer(renderer as ControllerRenderer);
       }
   }

The custom controller renderer then allows us to use a custom controller runner as shown below:

public sealed class CustomControllerRenderer : ControllerRenderer
   {
       public CustomControllerRenderer(ControllerRenderer renderer)
       {
           ControllerName = renderer.ControllerName;
           ActionName = renderer.ActionName;
       }
 
       public override void Render(System.IO.TextWriter writer)
       {
           var controllerName = ControllerName;
           var actionName = ActionName;
           if (controllerName.IsWhiteSpaceOrNull() || actionName.IsWhiteSpaceOrNull())
           {
               return;
           }
           var controllerRunner = new CustomControllerRunner(controllerName, actionName);
           var value = controllerRunner.Execute();
           if (value.IsEmptyOrNull())
           {
               return;
           }
           writer.Write(value);
       }
 
   }

The custom controller runner then creates the controller using our custom controller factory with our DI container as a parameter.  This is our goal. Custom controller runner shown below:

public class CustomControllerRunner : ControllerRunner
   {
       public CustomControllerRunner(string controllerName, string actionName)
           : base(controllerName, actionName)
       { }
 
       protected override IController CreateController()
       {
           return CreateControllerUsingFactory();
       }
 
       private IController CreateControllerUsingFactory()
       {
           NeedRelease = true;
 
           var controllerFactory = new WindsorControllerFactory(ContainerManager.Container);
           return controllerFactory.CreateController(PageContext.Current.RequestContext, ControllerName);
       }
   }

There’s a few layers to it, but we got there in the end.

XUnit Tests

The tests that I moved over from the Sitecore 8.2 site used XUnit. After moving them over, one of the errors I got in the Sitecore 7.2 site was:

Could not resolve type name: Sitecore.Data.DefaultDatabase, Sitecore.Kernel

To fix this, for Sitecore versions prior to 8.2, all instances of ‘Sitecore.Data.DefaultDatabase, Sitecore.Kernel’ in the config files should be changed to ‘Sitecore.Data.Database, Sitecore.Kernel’.
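For example, a database definition changes from the 8.2 form back to the pre-8.2 form like this:

<!-- Sitecore 8.2 -->
<database id="web" singleInstance="true" type="Sitecore.Data.DefaultDatabase, Sitecore.Kernel">
<!-- Sitecore 7.2 -->
<database id="web" singleInstance="true" type="Sitecore.Data.Database, Sitecore.Kernel">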

One bit I got stuck on for a while: there was a config file containing this string that had not yet been added to the solution, but it was there on the file system, and that was enough for the error to still be thrown. It took a file-system search for the string to track it down.


Don’t Forget your Sitecore Caching Strategy

Releasing a scalable Sitecore instance requires an in-depth knowledge of Sitecore’s multi-layered caching architecture. Here is a run-through of what you will need to pull your project’s Sitecore caching strategy together, including tips, tricks and pitfalls.

HTML/Rendering Cache Settings

HTML caching settings have been part of the core Sitecore product for many versions now. They are worth revisiting every now and again, as they are critically important to the performance of your Sitecore instance.


Indeed, one of the first things we look for when reviewing a project with performance complaints is whether the Sitecore HTML cache settings have been configured at all. The difference that properly set-up cache settings can make (compared to a site without any) really shouldn’t be underestimated.

There are a lot of blog posts that define the above settings. Here is a good one to get you up to speed. We have also put some information on the various other layers of Sitecore cache at the bottom of this page.

Sample Caching Strategy Document

For the projects I run, I find it useful to have an overall caching strategy page that summarises the settings for every single rendering. This gives us a nice reference point whenever these settings need adjusting, to see what might be affected.


Failure to Cache

In our experience, performance problems are usually reported by clients who have no caching settings turned on at all. This can cause the website to react very slowly or even bring the site down in times of heavy traffic.

Sitecore does have other layers of cache that will kick in (data, item and prefetch cache) if you fail to enable HTML caching. But HTML caching is the first line of defence, and when properly configured it really takes the pressure off all the other layers of caching and prevents the database from getting hit.

Imagine the following scenario for our made up Sitecore client “Bikes R Us”:

  • A page that has a large extended navigation displaying links to 50 other sub-pages across the site.
  • The content of the page contains several rendering components that also contain links to a number of products across various categories.
  • The code to construct this page traverses not only the tree to build the navigation but also numerous product sub-categories to gather all the links.
  • Developer A – has had no proper exposure to caching strategies before and marks the page as done without any HTML caching settings enabled.
  • The site goes live a month later.
  • “Bikes R Us” marketing team starts advertising via EDMs a month later and things go really well. The campaign also goes viral on Social Media with a bike offer too good to refuse.
  • The page that developer A built experiences more traffic than ever expected.
  • Unfortunately, with no caching the code to construct the page is hit again and again.
  • Data-layer and item caching do assist up to a point; however, Developer A never increased the default cache limits, so calls to other pages reduce the effectiveness of these layers overall.
  • After a few hours, traffic to the website increases to the point that the server runs out of CPU capacity and starts sending back 500 errors instead of serving up pages.

The scenario above is entirely avoidable when a proper caching strategy is completed as part of the development. Ideally, the caching strategy should be completed as each component of the website is developed and then tested to save on double handling. The caching strategy should then be reviewed, double checked and fully in place before a full performance test is done on the website.

Unfortunately, what often happens on big site builds is that the deadline looms and the caching strategy, which should be verified before go-live, gets forgotten. Failure to do so causes severe performance issues and leads to the client asking questions a few weeks or months later.

Incorrectly Configured Cache

On the opposite side of the coin, an incorrectly tuned cache can also cause havoc with some areas of the site. Examples include Web Forms for Marketers and member portals. Caching forms or components that contain data related to your members will:

  • Cause forms to behave unexpectedly.
  • Potentially show sensitive user data belonging to one user to many other users.
  • Cause xDB-personalised components to behave in an unexpected manner.

XDB and Caching

In general, it’s fairly difficult to turn on caching for components that need to react to personalisation on a per-user basis. The problem is that if your entire homepage is making use of personalisation, you may not be able to cache certain components on that page at all. The inability to cache those components properly means the specifications of the server will need to be ramped up to deal with the additional processing that occurs with each page hit.

The “Vary By User” rendering setting will probably help you with personalised components, up to a point.

Caching and Performance Testing

Caching is closely related to performance testing, and your overall caching strategy will affect the outcome of these tests. The aim of a performance test is to benchmark what amount of traffic your production environment can handle.

If you’re hosting in the cloud, why not set up your servers to autoscale when needed?

An often-forgotten point is that performance testing should be complemented by stress testing above and beyond your expected traffic requirements. The main aim of this stress test is to identify the breaking point of your production environments so that you have this knowledge for the future. This will help your team prepare for those extraordinary traffic events.

When it comes to performance/stress testing, there is little point running the test from a single source or development computer. You will be limited by a single network connection’s capacity, and this is not a true test, particularly for those making use of cloud hosting.

We always recommend using a service like BlazeMeter or Azure load tests.

Thanks to Derek, Aceik’s resident DevOps extraordinaire, for helping me with the above recommendations.

An additional cache setting

It’s worth getting to know each of the HTML/rendering cache settings well, as you will need a detailed knowledge of each of them when looking at your strategy overall. One particular setting we found missing, which we tend to use regularly, was the ability to vary the cache by “Vary By URL” alone. A member of the team (Jose D) was kind enough to hook this up for us on a recent project. We are happy to share this with the wider community in the hope that you also find it useful for your projects.

Increase the default cache limits

Outrageously, this is also an often-overlooked part of getting your Sitecore project onto production. The performance tuning guide pretty much spells this out for you: you need to increase the default cache sizes that ship with a vanilla Sitecore install. The limits provided are appropriate for developer machines but grossly inadequate for production environments, which really need a healthy cache size to be responsive. For instance, out of the box the HTML cache size is 50MB, while on a reasonable production server this should start at 100MB as a baseline, at least double the default.

Take a look at section 4.1 of Sitecore’s performance tuning document in order to get these settings correct.

Fine Tuning

Configuring the cache correctly for your production server can take some time to get right. You will need to monitor the /sitecore/admin/cache.aspx page.

In order to get these settings right, have a good look at Sitecore’s performance tuning document. Section 4.2 is very important and gives you a guide as to how cache tuning should be performed.

Prefetch Cache

Remember that fine-tuning your site will involve adjusting the items that Sitecore prefetches on startup. Once again, the performance tuning document has all the details on how to do this. It’s another important step to get things running smoothly. See the references at the bottom of this article to see how the prefetch cache fits into the overall caching architecture.

Sitecore.Caching.CustomCache

By implementing caching within your code to wrap complex logic, you can save your server a lot of processing effort. Particularly around I/O-intensive code where a lot of data has to be shifted, filtered or searched, it really is a great idea and worth adding to your Sitecore coding arsenal.

To get up to speed on how to build a custom cache we recommend reading this document.

The main way to achieve your custom cache is to write an implementation of Sitecore.Caching.CustomCache. You can then wrap your logic with the custom cache to prevent the same code being hit every time.

var cacheKey = string.Concat(
    string.Format("MyCustomKey-{0}", Sitecore.Context.Language.Name), ":", filterParam);

var result = this.sitecoreCacheService.GetOrAddToCache(cacheKey, () =>
{
    ...
    return "MyDataResult";
});

return result;
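For reference, here is a minimal sketch of the custom cache class that might sit behind a service like that (the class name is a placeholder; see the linked document for a full implementation):

using Sitecore.Caching;

public class MySiteCustomCache : CustomCache
{
    // Registers the cache under a name (visible on /sitecore/admin/cache.aspx) with a max size.
    public MySiteCustomCache(string name, long maxSize) : base(name, maxSize)
    {
    }

    public void SetValue(string key, string value)
    {
        SetString(key, value);
    }

    public string GetValue(string key)
    {
        return GetString(key);
    }
}

A GetOrAddToCache style service method would then check GetValue first and only execute the delegate (and call SetValue) on a cache miss.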

Cloudflare / Akamai considerations

Many sites rely on a third-party service provider that sits in front of the website to add an additional layer of caching. This is great and helps sites scale to meet demand, but it shouldn’t be used as an excuse not to have a caching strategy on the Sitecore side at all.

Remember that pages are likely to sit in the third-party cache only for a certain period of time. So if your site has thousands of content pages that are each accessed only semi-regularly, users will bypass the third-party cache altogether. In these cases, the Sitecore cache becomes the next line of defence.

With regard to caching and Cloudflare: the cache will only kick in on the media library and your Web API endpoints if the Cache-Control header is set to public with a valid max-age.

  1. For your Web API endpoints, we found it handy to use the attribute mentioned in this Stack Overflow page. See CacheControlAttribute.cs, and the sketch after this list.
  2. For media library URLs you need to enable:

<!--
  MEDIA RESPONSE - CACHEABILITY
  The HttpCacheability is used to set media response headers.
  Possible values: NoCache, Private, Public, Server, ServerAndNoCache, ServerAndPrivate
  Default value: public
-->
<setting name="MediaResponse.Cacheability" value="public" />
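For reference, a minimal version of such a Web API attribute looks roughly like this (the implementation on the Stack Overflow page may differ):

using System;
using System.Net.Http.Headers;
using System.Web.Http.Filters;

public class CacheControlAttribute : ActionFilterAttribute
{
    public int MaxAgeSeconds { get; set; }

    public override void OnActionExecuted(HttpActionExecutedContext context)
    {
        if (context.Response != null)
        {
            // Cloudflare (and browsers) will only cache responses marked public with a max-age.
            context.Response.Headers.CacheControl = new CacheControlHeaderValue
            {
                Public = true,
                MaxAge = TimeSpan.FromSeconds(MaxAgeSeconds)
            };
        }

        base.OnActionExecuted(context);
    }
}

// Usage on a Web API action: [CacheControl(MaxAgeSeconds = 3600)]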

Disable Caching on CM, Enable on CD

Remember to disable HTML caching on CM environments, as it may cause issues with the Experience Explorer and Preview modes.

  • Set cacheHtml="false" on your CM servers’ <site> node.

You can also disable the media cache on CM servers so that content editors never get cached images:

<?xml version="1.0"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
  <sitecore role:require="Standalone OR ContentDelivery OR ContentManagement OR Processing">
    <settings>
      <!--
        CACHING ENABLED
        Determines if media caching should be enabled at all.
        Specify 'true' to enable caching and 'false' to disable all caching.
      -->
      <setting patch:instead="*[@name='Media.CachingEnabled']" role:require="Standalone OR ContentManagement" name="Media.CachingEnabled" value="false" />
      <setting patch:instead="*[@name='Media.CachingEnabled']" role:require="ContentDelivery" name="Media.CachingEnabled" value="true" />
    </settings>
  </sitecore>
</configuration>

Note: 

  • Don’t change the setting called “Caching.Enabled” on CM servers.

Reference Material:

Understanding the cache layers

The following is taken from http://learnsitecore.cmsuniverse.net/Developers/Articles/2009/07/CachingOverview.aspx

(Diagram: Sitecore’s caching layers.)

Definitions:

These definitions are described in the following Stack Overflow post:

Prefetch cache

This is item data pulled out of the database when the site starts up. From the Sitecore docs:

“Each database prefetch cache entry represents an item in a database. Database prefetch cache entries include all field values for all versions of that item, and information about the parent and children of the item.

Populating the prefetch cache results in smoother user experiences immediately after application restarts. Excessive use of prefetch caches can affect the time required for application initialization.”

Data cache

This cache minimises round trips to the database. It again pulls item information from Sitecore, the difference being that it does so when the item is requested (rather than at start-up of the site); it will pull the data from the prefetch cache if it’s there, or go back to the database if not.

Item cache

This cache holds objects of type Sitecore.Data.Items.Item, which are used in code; when an item is requested in code, Sitecore will look in the item cache first, then fall back to the data cache, then the prefetch cache, and finally the database.

HTML cache

This output-caches the HTML from sublayouts and renderings; there is a nice level of configuration to only cache the HTML based on query strings, different data, etc.


JS & CSS Minification an Alternative Helix Approach

On a recent project, we were looking at a way to meet a number of criteria to improve page load times.

With regard to CSS and JS requests these criteria were:

  • Reduce the number of CSS and JS HTTP requests on page load.
  • Reduce the size of CSS and JS files (minification)

At the same time, we were running a Sitecore Helix project and we needed this to seamlessly fit into our project and CI builds.

In the past, I had done some Umbraco development and was familiar with a nice little package called “ClientDependency Framework” (CDF) by Shannon Deminick, which comes out of the box with that particular CMS.

CDF will cut down your server requests in a few ways:

  • Combining (it combines multiple JS files into a single server call on the fly)
  • Compressing
  • Minifying output
  • Composition
  • Caching (the processing of files into a composite file is cached)
  • Persisting the combined composite files for increased performance when the application restarts or the cache expires

It also has a development mode where asset includes are not touched, so that you can debug as usual.

The problem we had to solve was how to make the above package work in a way that it could plug and play with the Helix way of doing things.

The habitat (helix example project)  provides a way to include assets file via a global theme and also on an idividual rendering level.

Global themes should be used to load assets that are to be applied across a whole website.

Rendering-level assets should be used to load CSS and JavaScript that apply only to that particular rendering.

The current Habitat solution contains a module called “Assets” that lives in the foundation layer. This module cleverly rounds up all the assets that need to be rendered on the page by the Helix architecture.

In order to use ClientDependency Framework (CDF) and seamlessly plug it into the Assets module, we have provided two new layout tags:

Usage in layouts:

@CompositeAssetFileService.Current.RenderStyles()
@CompositeAssetFileService.Current.RenderScript(ScriptLocation.Body)

The above integration is currently on a pull request (awaiting approval) on the main Helix Habitat GitHub repository:

See: https://github.com/Sitecore/Habitat/pull/349

It will remain available on Aceik’s fork of the Habitat project.

Setup steps:

To use this service you will need to rename the following files and run the gulp build:

  • “App_Config/ClientDependency.config.disabled” to “App_Config/ClientDependency.config”
  • “\src\Foundation\Assets\code\Web.config.transform.ClientDependency.minification.example” to “Web.config.transform”

Once running (not in debug mode) you will see asset includes like the following in the HTML source:

/DependencyHandler.axd/bdc200f5bb6df7e817066f4b98499322/12/js

A note about Cloudflare and minification:

If you’re using Cloudflare in front of your website, JS and CSS minification can be turned on as a feature in Cloudflare. You can also compact the number of server requests made by using a feature called Rocket Loader.


Understanding Form Submission Tracking (WFFM and xDB)

This blog is targeted at a marketing audience that may be wondering how to interpret the WFFM tracking metrics, as shown in the form reports.

Definitions of metrics as proposed by Sitecore:

  • Visits – the total number of visitors who visited the page containing the web form.
  • Submission attempts – the total number of times that visitors clicked the submit button.
  • Dropouts – the total number of times that visitors filled in form fields but did not submit the form.
  • Successful submissions – the total number of times that visitors successfully submitted the web form and data was collected.


If you’re trying to test the above values by doing form submissions yourself and you don’t fully understand what is happening in the background, it will get confusing very quickly. Indeed, some clients have asked me to investigate the tallies above, believing the report was broken when some of the numbers started declining.

You really need to understand that just closing your browser when trying to affect the above metrics doesn’t fool xDB into thinking you’re a different user.

Thanks to xDB cookies, the values shown next to the metrics above can actually be reassigned. What I mean is: if a user is recorded as a dropout and then returns a few hours later to re-submit the form, the tally next to dropouts will reduce by one and that value will be added to successful submissions. If you’re trying to test the tallies for correctness, a drop in some of these counts will leave you scratching your head.

The only guaranteed way for a user to be treated as a unique visitor is to clear your cookies and browser temp data.

A good way to perform these tests is to use incognito mode in Chrome, as this prevents cookies from being stored.


This provides us with an explanation as to why you might see some of the tallies go into the negative, which makes sense when you realise that dropouts can be converted to successful submissions. How then do we explain when submission attempts drop in number?

Running a few tests reveals that when a user is converted from a dropout to a successful submission, the submission attempts recorded against that user are also adjusted down.


The main takeaway from this is that xDB is clever and knows when the same user returns to a form, and it adjusts the tallies in the form report accordingly.

If your marketing department doesn’t care how well a user is tracked and these tallies confuse them (say it simply wants the exact number of times a form was submitted, same user or not), you could achieve this independently just by tracking the number of times the thank-you page is loaded.
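If you went down that path, one option is to register a custom page event when the thank-you page loads. The sketch below is an assumption-heavy illustration: the event name is made up and would need to exist as a Page Event item under /sitecore/system/Settings/Analytics/Page Events:

using Sitecore.Analytics;
using Sitecore.Analytics.Data;

public static class FormCompletionTracker
{
    // Call this from the thank-you page's controller or code-behind.
    public static void RegisterCompletion()
    {
        if (Tracker.Current != null && Tracker.Current.IsActive)
        {
            // "Form Thank You Viewed" is a hypothetical page event definition.
            Tracker.Current.CurrentPage.Register(new PageEventData("Form Thank You Viewed"));
        }
    }
}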


Advanced Scheduled Publishing

Why do we want to schedule publishing?

  1. We want to schedule publishing to ensure that content scheduled to go live does so when expected. By default, if a scheduled publish isn’t set up, any content scheduled to go live has to wait until the next manual publish after the scheduled date/time.
  2. We want to remove the publish option from editors so that publishing isn’t over-utilised.
  3. Reducing the number of publishes in a day reduces the number of times the Sitecore HTML cache is cleared, and therefore increases site performance.

The out-of-the-box scheduled publishing agent provided by Sitecore solves points 1 and 2 above, but it publishes at times when a publish is not required. The default publishing agent will publish every x interval, all day. This is not efficient, because it publishes even when no changes have occurred, and each publish still causes the HTML cache to clear.

Our Advanced Scheduled Publishing module provides the option to schedule a publish between a start and finish time at a set interval, for example 9-5, every 60 minutes.

It also provides the option to set up one-off scheduled publishes at specific times, for example one publish at 2pm and one at 2am.

These two options can be used individually or in combination. In combination, they allow for a common scenario where we want to publish every 60 minutes from 9am to 6pm, and then run a one-off publish at 12am so that any content scheduled for publication goes out ready for the new day.

The configuration of these intervals is managed via a configuration item that you create and manage in Sitecore; the documentation can be found here. The last publish time is also updated here, allowing editors to have more visibility of when the last publish occurred.

Our module is built on top of Sitecore scheduled tasks, so it checks for a time and interval within the range of the scheduled task frequency and could therefore fire slightly earlier or later depending on your frequency value. This is a well-documented limitation, as Sitecore scheduled tasks run within the context of a web application, which could go down or be taken down at any time.

Links

Sitecore Marketplace

Github repo


Sitecore Helix: Let’s Talk Layers

Here are some notes from the decision-making process our team uses with regard to what goes where and in which layer. Of course, the Helix documentation does go over the guidelines, but it’s not until you start working with the architecture that things begin to become clear.

Project Layer

Definition: The Project layer provides the context of the solution. This means the actual cohesive website or channel output from the implementation, such as the page types, layout and graphical design. It is on this layer that all the features of the solution are stitched together into a cohesive solution that fits the requirements.

Comment: The project layer is probably the most straightforward layer to understand. In our project, the modules in this layer remained lightweight and mostly contain razor view files that allow content editors to build up the HTML structure of the pages.

The website content (under home), page templates and component templates are serialized by Unicorn and also live in this layer.

It’s also worth mentioning one particular gotcha you may hit in development to do with template inheritance and you can read more in this blog post.

Feature or Foundation?

Feature Definition: The Feature layer contains concrete features of the solution as understood by the business owners and editors of the solution, for example news, articles, promotions, website search, etc. The features are expressed as seen in the business domain of the solution and not by technology, which means that the responsibility of a Feature layer module is defined by the intent of the module as seen by a business user and not by the underlying technology. Therefore, the module’s responsibility and naming should never be decided by specific technologies but rather by the module’s business value or business responsibility.

Discussion: For our feature modules, we aimed for single concrete features that are independent of each other. They may contain views, templates, controllers, renderings, configuration changes and related business logic code to tie it all together. The point is to always stick to the rule: “Classes that change together are packaged together”.

When building a feature module, it’s also very handy to think about its removal as you build it. Keep asking yourself how easy it would be to roll back this module and what you would need to do. Doing so will help you keep those dependencies under control.

Foundation Definition: The lowest level layer in Helix is the Foundation layer, which as the name suggests forms the foundation of your solution. When a change occurs in one of these modules it can impact many other modules in the solution. This means that these modules should be the most stable in your solution in terms of the Stable Dependencies Principle.

Discussion: We found that our foundation modules usually consist of frameworks or code that provide structural functionality to support the web application as a whole. Each foundation module may be used by multiple feature modules to provide them with the support they need to run properly. Our foundation modules contain API calls, configuration, ORM structures (Glass Mapper), initialisation code, interfaces and abstract base classes.

An important point is that, unlike feature layer modules, foundation layer modules can have dependencies on other foundation layer modules. If this were not the case, it would be very difficult to construct the foundation layer in the first place.

For the most part, the team can make some fairly quick decisions about what goes where in the initial project planning, and what goes where is fairly obvious once you get familiar with the Habitat example project. The main dilemma you’re going to encounter is where your repositories and services (key business logic) need to sit.

Business Logic: What goes where? Help!

Let’s consider the definitions above; they seem straightforward enough. However, in agile projects where things may change rapidly or requirements are not immediately clear (which happens a lot), you’re inevitably going to need to make some judgment calls.

What am I talking about? Well, let’s say one developer codes up a feature module at the beginning of the project. At first, it seems that particular portion of code is only required by that one feature. Down the track, a requirement surfaces whereby the same business logic needs to be used in another feature module. Helix rules dictate:

  • Classes that change together are packaged together.
  • No dependencies between feature projects should exist.

A lesser developer may be tempted at this point to duplicate the code in both feature modules to get the job done quickly. This, however, breaks some fairly important fundamental coding standards that many of us try to stick to. Step back and consider the technical debt that duplicate code leaves behind versus dependencies between your Helix feature modules.

The solution to this dilemma: it’s time to refactor that feature logic and move it into the foundation layer, after which any feature module that needs the same code can reference it there.

Remember, “with great power comes great responsibility”, and this is especially true when touching code in the foundation layer. The code you touch in the foundation layer should be highly stable and well tested, as the Helix guidelines suggest.

Was the original decision a mistake?

On the flip side of the coin, it’s worth considering that it wasn’t necessarily a mistake to put that piece of code in the feature layer to start with. If no one else needed the code at the time and it was reasonably unforeseen that anyone else would, then it probably was the correct call.

Accept that things may change

The team members on your Helix project will need to be flexible and accepting of change. It’s worth being prepared for some open discussions within your team about what goes in the foundation layer and what goes in the feature layer. It’s certainly open to interpretation and a topic of debate. A team that works together and is open to a change of direction within its code structure will help the code base stay within the Helix guidelines as the project evolves.

Good luck!


Helix Template Inheritance

During Sitecore Helix development it’s important to understand the role that template inheritance plays in keeping your dependencies in check.

The current convention shown in the Habitat example site is to:

  • Create your template fields in modules within the foundation and feature layers, with template names starting with a ‘_’.
  • Inside the project layer, create appropriate templates that inherit from the feature and foundation layer templates.

Each project layer template should inherit from one or more feature / foundation layer templates.

We believe the benefits of sticking to this approach are as follows:

  • Your project layer templates can be composed of template fields from multiple modules in other layers, with the potential for page templates to contain the functionality of multiple modules if need be.
  • Your content tree does not directly rely on feature and foundation layer templates, making their removal down the track easier.

Data Source Locations

Helix introduces the concept of Data Source Locations which are documented on this page.


Data source locations are very useful for supporting multi-tenant, multi-site solutions. You don’t have to use them; however, using them future-proofs your Helix solution with the ability to add additional tenants down the track.

The datasource location and template resolution have been extended in the Habitat project. This means that it is also possible to define datasource templates and locations for each site, in addition to on the rendering itself. This is done through an extension of the getRenderingDatasource pipeline and the addition of a site: prefix to the Datasource Location field.

Site: prefix

(Screenshot: an IFrame rendering’s Datasource Location field.)

The above IFrame rendering uses the syntax site:iframe, which via the getRenderingDatasource pipeline will look for a data source location within the site called ‘iframe’.

This is great for multi-tenant situations, but it also means the Helix dependency rules are not broken. Pointing the data source location field of your rendering at a folder within a website technically breaks the Helix dependency rules. This is a very important point, and one that might be missed on a first pass through the Helix documentation.

The trap of using Feature Templates

Developers that are new to Helix may not realise that any feature layer template should have a matching template in the project layer that inherits from it.


A follow-on effect of this is that you may skip creating a project layer template altogether and simply use the feature layer template in the “Datasource Template” field.

Technically the above dependency is not incorrect. However, Habitat has one more surprise in store for you. Once you start to add content blocks to your page based on the data source locations, the insert options for the local data sources folder start to get automatically populated.


So for each unique component you add to the page via the Experience Editor, you’re going to get the template of that component in the insert options.

Once again, this dependency is not technically incorrect, although it will probably have your testers asking why strangely named templates are showing up in the insert options. The major drawback we could think of (as mentioned earlier in the article) is that your content items will be highly dependent on that particular feature module.

  • You will have multiple dependencies between content items and the feature modules.
  • Attempting to remove the feature module will disrupt all of those content items directly.

On the flip side, using the project template instead means we have a single (or reduced) point of removal for that feature module within the project templates.

The main takeaway from running into the above mistake (using the feature template in our data source locations) was that we should point data source locations at a project template instead.

  • Always create a project layer template that inherits from a feature/foundation module template.
  • Ideally, content items shouldn’t reference feature layer templates directly, even though this doesn’t break the Helix dependency rules.