Introduce YML Linting to your JSS Apps

Intro: This post shares how and why you might like to introduce a YML linter into the build process for your next Sitecore JSS project, particularly if you are relying on the YML import process when building a new application. Shout out to David Huby (Solutions Architect) for introducing team Aceik to yaml-lint.

Why would you want to do this on a JSS project?

When running the JSS application and testing the latest changes, we sometimes discovered strange behaviour with dictionary items, or a page that would not load properly.

(Screenshot: a 404 page displayed after an invalid YML change was made.)

This can be caused by a small (incorrect) change in YML breaking individual routes. For example, let's say you have an incorrect tab or a character in the wrong place. YML syntax requires correct spacing and line returns to be valid, but a mistake is not always obvious when you make it; sometimes it is only after you run the JSS application and test out the changes that you discover the strange behaviour or the page not loading.

To avoid this we found it handy to introduce a YML linter into the JSS build process. This solves the issue of someone making a small change to the YML files and breaking individual routes.

Here are the steps needed to introduce a YML linter into a node-based JSS project (a sketch of the resulting package.json wiring follows the list):

  1. Install yaml-lint (https://www.npmjs.com/package/yaml-lint)
  2. In the application root create the file .yaml-lint.json
  3. Update the package.json
    • Create a new script entry called yamllint
      • "yamllint": "node ./scripts/yaml-lint.js"
    • Update the script called 'build'
      • "build": "npm-run-all yamllint --serial bootstrap:connected build:client build:server",
  4. Download the following scripts file and place it in the /scripts folder
    1. https://github.com/TomTyack/jss/blob/feature/YAML-Linter/samples/react/scripts/yaml-lint.js
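
Pulled together, the relevant portion of package.json ends up looking roughly like the sketch below. The bootstrap:connected, build:client and build:server entries are the standard JSS sample-app scripts and are shown only for context; your app may differ.

{
  "scripts": {
    "yamllint": "node ./scripts/yaml-lint.js",
    "build": "npm-run-all yamllint --serial bootstrap:connected build:client build:server"
  }
}

With this in place, any invalid YML fails the build before the broken route ever reaches a running instance.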

You can also see the pull request with the above changes at:

https://github.com/Sitecore/jss/pull/385/files

Demo

One Performance Blog to Rule them all – Combining the 6 Pillars of Speed

I have done a number of posts and talks at user groups on page speed and performance over the last few years. I have mostly split the various topics into individual blog posts, as performance is dependent on many factors. What has really been missing is a complete demo of how all the different techniques come together to give your site a really good score. So that's what I intend to demo here: the combination of the 6 pillars of page speed in one Sitecore instance. To recap, here are the 6 pillars of page speed performance in my opinion:

1) Introduce image lazy loading

2) Ensure a cache strategy is in place and verify it's working. (You must have adequately sized production servers.)

3) Deploy image compression techniques

4) Use responsive images (must serve up smaller image sizes for mobile)

5) Introduce Critical CSS and deferred CSS files

6) Javascript is not a page speed friend. Defer Defer Defer

I have shown a subset of these previously, but crucially three critical pillars to do with imaging were hard to achieve at the time. They are now possible thanks to support for next-gen image compression (webp), which I wrote about in my previous blog. With a little more time and investigation into image lazy loading, responsiveness and image compression, we can give a more complete picture of how each pillar impacts page speed.

Here are the tools and blogs I will use to achieve each of these:

1) Image Lazy Loading – Blog post by MVP Sitecore SAM and https://github.com/thinker3197/progressively

2) SXA Cache Settings – SXA official documentation

3) Next Gen (WEBP) Image Compression – https://github.com/Aceik/ImageCompression

4) SXA Responsive Images – SXA official documentation

5) Introduce Critical CSS and deferred CSS files – https://github.com/Aceik/Sitecore-Speedy

6) Javascript is not a page speed friend. Defer Defer Defer – https://github.com/Aceik/Sitecore-Speedy

Alternatives: Mark Gibbons (MVP) recently upgraded the Dianoga image library to support WEBP. It's worth a look if you don't want to use a third-party API, and it also supports a CDN. Vincent Lui (MVP) also pointed out in his recent SUGCON talk that you can achieve both image compression and image lazy loading via some of the modern CDNs. That is a great (easy) option if you are retrofitting these techniques to a live website.

I'm not going to dive deep into exactly how to set up each of these things, as I think the individual links have sufficient instructions. I will show in the demo videos how each pillar impacts the HTML rendered. For the most part I am keen to demonstrate the impact of each of these line items and how each one will benefit your page speed score.

Before we begin, it's important to understand that the algorithm (Lighthouse) behind Google's PageSpeed Insights doesn't work in an exactly linear fashion. If you improve your score by ticking off one of the above, don't expect that ticking off another issue will have the same benefit. The last 20 points out of 100 (on the mobile scoring system) are the hardest to achieve, based on what I have seen.

Live Demo Video Series that accompanies this blog:


Test Outline:

Google PageSpeed Insights: scores can fluctuate widely based on network latency, and at times you will experience score fluctuations at different times of the day on the same site. Treat the numbers below as a guide rather than an absolute.

Here is the general outline of the VM that hosted the IIS instance for testing. I also put the VM under some basic load while running the tests.

  • All the tests below used Sitecore 9.3 and the SXA Habitat example site.
  • Tests used the live Google PageSpeed Insights tool via the URL: https://developers.google.com/speed/pagespeed/insights/
  • Sitecore was set up on an Azure VM.
  • The test was run 5 times, to get an average score.
  • The test page was the homepage of the Habitat site and the page was requested before running the test 5 times so that the instance could be considered warm.
  • EXM and XDB were not running on these test instances.
  • Test results are Mobile Page Speed scores only – this is the most important metric in today's environment, and good desktop scores are not really a challenge.
  • The default Habitat cache rendering for Navigation was left on for all tests. (without this the site fails under basic load altogether)
  • All tests were conducted under load in an attempt to replicate a production environment. For this I used a node package called loadtest.
  • SXA CSS/Javascript optimisations were turned on, but as I have mentioned before this has a minimal performance boost.

loadtest -c 10 --rps 10 http://baselinecd.dev.local/

10 requests per second with a concurrency of 10

Baseline Score

The Baseline score encompasses the habitat site installed with no modifications.

Result: 48 / 38 / 40 / 34 / 38 = 39.6/100 Average

Observation: Heavily penalised for CSS and Javascript loading times.


Image Lazy Loading

All images on the homepage were converted to be lazily loaded. A single large blurred image was used as the placeholder for all images.

Result: 57 / 55 / 61 / 52 / 63 = 57.6/100 Average

Observation: Around the midpoint of the scale, image lazy loading has around a 15–20 point impact.


Rendering Cache Strategy

I have blogged extensively about this in the past, but setting up cache settings properly is critical and has a major impact. It's also one of the easiest things to fix for a poorly performing Sitecore site. Also note that the only way to accurately demonstrate the impact that rendering cache has on a site is to test it under load.

This test was run with a higher number of requests per second: loadtest -c 10 --rps 30 http://baselinecd.dev.local/

With Cache Enabled:

49 / 56 / 41 / 54 / 54 = 50.8

Without Cache Enabled:

ERR_TIMED_OUT / ERR_TIMED_OUT / ERR_TIMED_OUT / ERR_TIMED_OUT / ERR_TIMED_OUT = You get the point 🙂

Observation: Rendering cache settings are critical and should be the first step in page load speed refinement for a Sitecore site. A 10-point benefit was observed once the site was stable under load.


Image Compression

Result: 60 / 58 / 61 / 62 / 62 = 60.6/100 Average

Observation: Around the midpoint of the scale, image compression has around a 20 point impact.


Critical CSS

Result: 74 / 78 / 79 / 81 / 81 = 78.6/100 Average

Observation: The combination of critical CSS in the head and Deferred styles provides a meaningful page speed boost. 25 Point observed benefit.


Deferred Javascript

Result: 92 / 94 / 93 / 94 / 94 = 93.4/100 Average

Observation: Javascript has a massive impact, reducing it drastically in the initial payload provides massive page speed improvements. 40 Point observed benefit.

You might think: hey, I will just do deferred Javascript and it will be all good. While this particular pillar/criteria does have the biggest impact, every site is different and, as mentioned earlier, scores fluctuate. The upper part of the scoring system is the hardest to reach. So while this is a great starting point, ignore the other speed pillars at your peril.


Responsive Images

Result: 56 / 54 / 59 / 56 / 60 = 57/100 Average

Observation: Around the midpoint of the scale, converting images to be responsive (srcset support) has about a 10 point impact.


Results Summary

Criteria                               | Average Score | Observed Benefit
No Change (SXA Habitat Home OOTB)      | 41.8 / 100    |
Image Lazy Loading                     | 57.6 / 100    | 15 Points
Sitecore Rendering/HTML Cache Settings | 50.8 / 100    | 10 Points
Image Compression (webp)               | 60.6 / 100    | 20 Points
Critical CSS                           | 78.6 / 100    | 25 Points
Deferred Javascript                    | 93.4 / 100    | 40 Points
Responsive Images                      | 57 / 100      | 10 Points

The Pillars Combined

In isolation we can see the rough results of what each of the pillars might do to our Page Speed. The real question is what does combining all these pillars produce.

Result: 100 / 100 / 100 / 100 / 100 = 100/100 Average

Observation: Do I realistically expect this on an actual production site? That is certainly the dream, but in reality you should be over the moon if you make it into the 90s, and pat yourself on the back if you get into the 80s as well. For any Sitecore site, if you make it into the 90s for mobile you're doing an amazing job.

Admittedly, for the combined demo I skipped the responsive image pillar. SXA supports responsive images, but not in combination with data attributes. It was going to be a bunch of work to write a custom SXA handler to support both lazy loading and responsiveness at the same time. That is not to say it's not possible. Either way, the impact was minimal.

Conclusion

Page speed is critical to SEO and visitor conversion. A slow site instantly turns away users on mobile and tablet devices. Admittedly, the final result shown above and in the video required that all the right tools be available to the Sitecore community; until recently you likely needed to bake your own solutions in order to get that over the line.

I think it's now becoming possible to aim fairly high (90/100 on mobile) with our page speed scores, but it does require getting most if not all of the architecture pillars above working together. It's worth learning each of these and understanding the pitfalls and limitations if you want really great page speed. Good luck, and feel free to get in touch with any questions.

Footnote

The combined pillars can produce great results, but you still need to load test before going live. Check out the video below where I search for the breaking point using the loadtest tool. Please note that this node-based load test tool should only be used as a guide. Before go-live I recommend using a hosted load testing solution that runs from multiple geographic locations. Tests run from a single network location or device will hit a network bottleneck and give you false positives.

Bonus Video: https://www.youtube.com/watch?v=96YcxyhYh0U

Next Gen Image Compression in Sitecore

Spoiler: This is not a post about Dianoga. I take a deep dive into Tiny PNG and Kraken.IO integrations with Sitecore. The results are worth checking out at the bottom.


At the start of the year I picked up where I left off: page speed. Last year I took a deep dive into attempting to improve the page speed of Sitecore SXA sites by using some of Google's recommended techniques to structure the page. If you haven't already seen it, head on over to Sitecore Speedy and see some of the results we achieved.

I’ll be the first to admit that getting really good page speed scores isn’t easy. It takes a lot of different factors to come together. Just as a reminder, here is the main list that I would consider you need to check off to be winning at this game.

1) Introduce image lazy loading

2) Ensure a cache strategy is in place and verify it's working.

3) Dianoga is your friend for image compression

4) Use responsive images (must serve up smaller image sizes for mobile)

5) Introduce Critical CSS and deferred CSS files

6) Javascript is not a page speed friend. Defer Defer Defer

For this post, I'm going to look at an alternative to Dianoga. I'm a big fan of Dianoga and have used it over the years to crunch loads of oversized images introduced by content editors. I will, however, say that it can add complexity to deployments and CI/CD pipelines, and while some claim to have had success in Azure Apps, others have not.

On the flip side, content editors love Tiny PNG, which is one of the most popular image compression website utilities going around. Tiny PNG also has a developer API, so we have used this to build in a compression tool that can be used directly from your Sitecore toolbar.

The button below is hooked up to talk to the Tiny PNG API. It will send across your image data and receive a compressed image back for storage.
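
Conceptually, the round trip looks something like the sketch below. This is illustrative only: it uses the official TinifyAPI .NET client rather than the exact code in the module, and the media-item save call is an assumption that may differ from the module's implementation.

using System.IO;
using System.Threading.Tasks;
using Sitecore.Data.Items;
using Sitecore.Resources.Media;
using TinifyAPI;

public class TinyPngCompressor
{
    public async Task CompressAsync(MediaItem mediaItem)
    {
        // Hypothetical key; in the module this would come from a Sitecore setting.
        Tinify.Key = "YOUR-TINYPNG-API-KEY";

        // Read the current media blob out of the media library.
        var media = MediaManager.GetMedia(mediaItem);
        byte[] original;
        using (var stream = media.GetStream().Stream)
        using (var memory = new MemoryStream())
        {
            stream.CopyTo(memory);
            original = memory.ToArray();
        }

        // Send the bytes to Tiny PNG and wait for the compressed result.
        var source = await Tinify.FromBuffer(original);
        var compressed = await source.ToBuffer();

        // Store the compressed bytes back against the same media item.
        // (Assumption: SetStream with the original extension is sufficient here.)
        using (var replacement = new MemoryStream(compressed))
        {
            media.SetStream(replacement, mediaItem.Extension);
        }
    }
}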


Full disclosure: I'm not the first person to hook up Tiny PNG to the image library. I could find two other implementations: one lets you run a PowerShell script to connect to the Tiny PNG API, and the other is a module that connects to the API on upload.


This implementation of the Tiny PNG API introduces the following variances:

  • A button in the CMS to crunch any single image.
  • A scheduled task that will process any image not already processed.
  • Error handling for when the API limits are reached
  • Logging that outlines which images were processed.
  • Before and After compression information stored in any Image field of choice.
  • A feature toggle to turn the whole feature on/off

All the source code is available at: https://github.com/Aceik/ImageCompression

Now let's jump in and have a look at the results just from crunching a few images down.

Without image compression, the homepage's image payload comes in at around 2.4 MB (see the results table at the bottom of the post).

To compress the images on the page, we head over to the "Compress" button in the Media tab that the module introduces.

A few examples of compression results taken from homepage images:

Before: 158.4 KB | After: 110.6 KB

Before: 197.8 KB | After: 135.3 KB

Before: 640.0 KB | After: 120.7 KB

After compressing all the images on the page the saving can be seen below.


So our total image size saving is 2.4MB – 1.3MB = 1.1MB

A pretty decent saving from just pressing the compress button on 27 homepage images. Also, consider that the user won't notice any difference in image quality, as the compression is visually lossless.


The compression achieved is great for helping us tick off one of the requirements for fast pages with Google. But, as we are about to find out, Google will likely still complain about two other criteria. When it comes to Google PageSpeed Insights, a page that does not have properly processed images will bring up the following three recommendations:

Here is a break down of how we address each one:

  1. Serve images in next-gen formats – Image formats like JPEG 2000, JPEG XR, and WebP often provide better compression than PNG or JPEG, which means faster downloads and less data consumption. Learn more.
  2. Properly Size images – Your CSS layouts should be responsive and use modern image retrieval techniques that adapt the image size requested based on screen size. Read More
  3. Efficiently encode images – The Tiny PNG integration above will take care of this. This is all about compressing the image to as small as it can get without a visible loss of quality.

So assuming you have already achieved number three using the Tiny PNG integration or another source, let us look at how we can solve the next-gen image requirement.

As a quick side note, the testing I did after converting the images to next-gen also ticked off item number two above. I don't think this should be relied on, however, and it's best to incorporate responsive images into your projects from the beginning.

When looking into how to convert images to a next-gen format I opted to target webp. Google has a nice little page explaining the format here.

WebP is natively supported in Google Chrome, Firefox, Edge, the Opera browser, and by many other tools and software libraries.

Once again I opted to look for an API that would provide the conversion for me, so that Sitecore could easily connect, send the image and then store the result, all without any extra hosting requirements. I went with the Kraken.IO image APIs as they have a free 100MB trial offer, and free is a good price when building proofs of concept. The integration is all available in Aceik's GitHub repository. Just sign up for your own API keys, add them to the module settings (in the CMS) and start converting.

To test out just how much this would impact the image payload size for the whole page, I once again converted all the images on the SXA habitat homepage.

Here are the results:


So our total image size saving is now 2.4MB – 0.79MB = 1.61MB

The reduction in size from a non-compressed image to a webp formatted image is truly impressive.



Conclusion

I can only conclude by saying that if page speed is really an important factor for your Sitecore project, take a look at Tiny PNG. If you want to go next level with your image formats and achieve even greater compression, try out the Kraken.IO API integration; it could be well worth the small subscription fee.


Results

Compression      | Total Image Size | Saving
None             | 2.4 MB           |
Tiny PNG         | 1.3 MB           | 1.1 MB
Kraken.IO (webp) | 0.79 MB          | 1.61 MB

Notes:

The module and code mentioned in this blog post are available on Aceik's GitHub account, which also contains installation instructions.

GitHub: https://github.com/Aceik/ImageCompression

After installation, your content editors will simply be able to compress and convert images as needed from within the CMS.

The GitHub readme contains a rundown of the standard settings inside Sitecore.

Accessing the JSS Dictionary in C#

This is a quick post to guide developers through gaining access to the JSS Dictionary in the backend C# code.

Why would you want to be able to do this?

The reason we originally had to do this was that our JSS Angular application had editable content from the dictionary that we also wanted to access in C#. In our particular case, it was to inject the content into an email template that would be sent to the user. To save duplicating content, it made sense for both the front end and the C# code to have access to the same dictionary.

Where do we start?

The following assumes you have a Sitecore instance with JSS installed and a JSS application you are working on. Grab your favourite decompilation tool (I use ILSpy) and locate the following DLL in the bin folder of your running Sitecore instance:

Sitecore.JavaScriptServices.Globalization.dll

Once you have that open in ILSpy, search for DictionaryServiceController:

public class DictionaryServiceController : ApiController

The following method is what we want to use in our C# code:

public DictionaryServiceResult GetDictionary(string appName, string language)

It takes the unique application name (the one that belongs to your application) and the language ("en") as parameters. As a result, you will get back a dictionary object that you can use to look up your content.

This is the controller that would normally be called via an API from the front end. So how do we call it from a normal C# service, for instance?

Firstly, the controller has a constructor that has three parameters that are injected via DI (Dependency Injection).

IConfigurationResolver configurationResolver, 
BaseLanguageManager languageManager, 
IApplicationDictionaryReader appDictionaryReader

Using ILSpy once again, you can find that the above three parameters are all set up in the DI container via RegisterDependencies.cs in various JSS assemblies. The controller itself is already registered in the DI container as well, which is very handy.

If you have a look at showconfig.aspx in the admin tools you can see that a lot of the dependencies are registered via RegisterDependencies.cs

For example:

<configurator type="Sitecore.JavaScriptServices.AppServices.RegisterDependencies, Sitecore.JavaScriptServices.AppServices" patch:source="Sitecore.JavaScriptServices.AppServices.config"/>

Dependency injection is a whole other topic, so I will leave it to your personal preference as to how you achieve it. For the purposes of the following complete code example, I have used the Services attribute style setup. If you want to keep consistency with Sitecore, you could also set up a RegisterDependencies.cs class of your own and use a patch file to kick it off; a sketch of that approach appears after the example service below.


Example Service:

using Sitecore.Foundation.DependencyInjection; // Borrowed from habitat
using Sitecore.Diagnostics;
using Sitecore.JavaScriptServices.Globalization.Controllers;

namespace Sitecore.Foundation.JSS.Services
{
    public interface ITranslationService
    {
        string TranslateKey(string key);
    }

    [Service(typeof(ITranslationService), Lifetime = Lifetime.Transient)] 
    public class TranslationService : ITranslationService
    {
        private readonly DictionaryServiceController _controller;
        
        public TranslationService(DictionaryServiceController controller)
        {
            this._controller = controller;
        }

        public string TranslateKey(string key)
        {
            var dictionary = GetDictionary();
            if (dictionary.phrases.ContainsKey(key))
                return dictionary.phrases[key];
            Log.Error($"Dictionary key {key} not found", this);
            return string.Empty;
        }

        private DictionaryServiceResult GetDictionary(string appName = "myAppName", string language = "en")
        {
            return _controller.GetDictionary(appName, language);
        }
    }
}

Above is a simple service that can be used from just about anywhere in your C# code.

Simply change the appName and language as required to access the correct JSS dictionary. Also, remember to publish your app's dictionary to the web database or you may get no results.
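
As mentioned above, if you would rather mirror Sitecore's own registration style than use the Habitat [Service] attribute, a configurator plus a patch file is one way to do it. The class and config below are a minimal sketch under that assumption; the namespaces, assembly name and file names are hypothetical.

using Microsoft.Extensions.DependencyInjection;
using Sitecore.DependencyInjection;
using Sitecore.Foundation.JSS.Services;

namespace Sitecore.Foundation.JSS.DependencyInjection
{
    // Registered via the <services> configurator patch shown below.
    public class RegisterDependencies : IServicesConfigurator
    {
        public void Configure(IServiceCollection serviceCollection)
        {
            // DictionaryServiceController itself is already registered by JSS,
            // so the controller can be constructor-injected into TranslationService.
            serviceCollection.AddTransient<ITranslationService, TranslationService>();
        }
    }
}

And a matching patch file (for example App_Config/Include/Foundation/Foundation.JSS.config):

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <services>
      <configurator type="Sitecore.Foundation.JSS.DependencyInjection.RegisterDependencies, Sitecore.Foundation.JSS" />
    </services>
  </sitecore>
</configuration>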


There we have it, accessing the JSS dictionary from C# in a nutshell. I hope this helps some other folks get this done quickly on JSS builds.

An Introduction to Sitecore Pipelines

What is a Sitecore Pipeline?

In Sitecore, pipelines describe a series of discrete steps that are taken to achieve some objective. If you think about writing code to handle an HTTP request, for example, you could create a monolithic class that does the job from end to end; the pipeline approach instead uses a number of classes that are invoked in order: first do this, then do that, and so on.

There are many pipelines in Sitecore. You can view existing pipelines and the processors that they call using Sitecore Rocks in Visual Studio: right-click on the site connection and choose Manage, click on the Pipelines tab in the Visual Studio edit pane, then click on one of the listed pipelines to see all the processors that are executed as part of it. There are pipelines with a single processor at one end of the scale, up to the httpRequestBegin pipeline with 45 distinct steps at the other.

Why are they useful?

Thinking about the monolithic class above, it would be very difficult to maintain or modify, and it would also be truly massive. Modularising it makes it much easier to maintain.

It also makes it much easier to customise. For example, consider a pipeline that has three processors: Step 1, Step 2 and Step 3. To add to the existing functionality we could insert a new custom step after Step 1 and before Step 2, or we could replace an existing step completely.

How are they defined?

Pipelines are defined using XML in sitecore.config. In the example below we can see the httpRequestEnd pipeline definition. The three processors are called in the order in which they are listed, and a parameters object is passed between them to provide continuity. The final processor also receives four additional parameters from the config file.

<?xml version="1.0" encoding="utf-8"?>
<sitecore database="SqlServer" xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:security="http://www.sitecore.net/xmlconfig/security/">
…
<pipelines>   
… 
<httpRequestEnd>
      <processor type="Sitecore.Pipelines.PreprocessRequest.CheckIgnoreFlag, Sitecore.Kernel" />
      <processor type="Sitecore.Pipelines.HttpRequest.EndDiagnostics, Sitecore.Kernel" role:require="Standalone or ContentManagement" />
      <!--<processor type="Sitecore.Pipelines.HttpRequest.ResizePicture, Sitecore.Kernel"/>-->
      <processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel">
        <ShowThresholdWarnings>false</ShowThresholdWarnings>
        <TimingThreshold desc="Milliseconds">1000</TimingThreshold>
        <ItemThreshold desc="Item count">1000</ItemThreshold>
        <MemoryThreshold desc="KB">10000</MemoryThreshold>
      </processor>
    </httpRequestEnd>
…
    </pipelines>
…
  </sitecore>

How can I work with Sitecore Pipelines?

Existing Sitecore pipelines can be customised as outlined above, and it is also possible to create a brand new pipeline from scratch.

Customise existing Pipelines

The first thing to do is to create a configuration patch to add the new processor class into the pipeline at the desired location. As you can see from the code, it is possible to pass variables to the processor. Here we are adding a processor called NewsArticleLogEntryProcessor into the httpRequestBegin pipeline after the ItemResolver:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <httpRequestBegin>
        <processor type="Fourbyclub.CustomCode.CustomCode.Pipelines.httpRequestBegin.NewsArticleLogEntryProcessor,Fourbyclub.CustomCode" patch:after="processor[@type='Sitecore.Pipelines.HttpRequest.ItemResolver, Sitecore.Kernel']">
          <NewsArticleTemplateID>{B871115E-609F-44BB-91A4-A37F5E881CA6}</NewsArticleTemplateID>
        </processor> 
      </httpRequestBegin>
    </pipelines>
  </sitecore>
</configuration>

Then we need to create the processor: inherit from HttpRequestProcessor and implement the Process method. All we are doing here is writing to the log if the requested item is a NewsArticle.

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.httpRequestBegin
{
    using Sitecore.Pipelines.HttpRequest;
    using Sitecore.Diagnostics;

    // TODO: \App_Config\include\NewsArticleLogEntryProcessor.config created automatically when creating NewsArticleLogEntryProcessor class.

    public class NewsArticleLogEntryProcessor : HttpRequestProcessor
    {
        
        // Declare a property of type string:
        private string _newsArticleTemplateID;
        public string NewsArticleTemplateID { get { return _newsArticleTemplateID; } set { _newsArticleTemplateID = value; } }

        public override void Process(HttpRequestArgs args)
        {
            Assert.ArgumentNotNull(args, "args");
            if ((Sitecore.Context.Item != null) && (!string.IsNullOrEmpty(_newsArticleTemplateID)))
            {
                Assert.IsNotNull(Sitecore.Context.Item, "No item in parameters");
                // use util to get id from string property
                if (Sitecore.Context.Item.TemplateID == Sitecore.MainUtil.GetID(_newsArticleTemplateID))
                {
                    // view in log file later, so add FourbyclubCustomCode
                    Log.Info(string.Format("FourbyclubCustomCode: News Article requested is {0} and the item path is {1}", Sitecore.Context.Item.DisplayName, Sitecore.Context.Item.Paths.FullPath), this);
                }
            }
        }

    }
}

Create a new Pipeline

Creating a new pipeline is a little more work but still very simple. The first thing to do is to declare the pipeline with a configuration patch:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <logWriter>
        <processor type="Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter.logWriterProcessor,Fourbyclub.CustomCode" />
      </logWriter>
    </pipelines>
  </sitecore>
</configuration>

The XML above will create a pipeline called logWriter that has a single processor called logWriterProcessor, which will be in the Fourbyclub.CustomCode.dll.

Pipelines must pass a PipelineArgs object to each processor as it is called, so that needs to be defined:

using Sitecore.Pipelines;

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter
{
    public class LogWriterPipelineArgs : PipelineArgs
    {
        public string LogMessage { get; set; }
    }
}

At least one processor is needed to do the work of our pipeline:

using Sitecore.Diagnostics;

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter
{
    public class logWriterProcessor
    {
        public void Process(LogWriterPipelineArgs args)
        {
            Log.Info(string.Format("FourbyclubCustomCode: The message was {0}", args.LogMessage), this);
        }
    }
}

Finally we need to invoke the pipeline in our code somewhere. Instantiate the LogWriterPipelineArgs, set the LogMessage, then call CorePipeline.Run and pass it the name of the pipeline and the args object:

var pipelineargs = new LogWriterPipelineArgs();
pipelineargs.LogMessage = "Requested item is not a News Article";
CorePipeline.Run("logWriter", pipelineargs);

Conclusion

Thank you for reading, and I hope that this short introduction to Sitecore pipelines has shown their power to customise Sitecore and help build maintainable code. However, we should always check whether we can implement something using existing Sitecore functionality rather than reaching for a pipeline as a first resort, and remember that each pipeline or processor we add has the potential to increase the challenge of Sitecore upgrades.

Training for your Sitecore project

To misquote Benjamin Franklin: "By failing to train you are training to fail." You can provide the best tools in the world, but unless people know how to use them, they are useless, and the more complex the tool the more apparent that becomes. I think Sitecore is a simple, intuitive tool, but then I have been delivering training on Sitecore for eight years; I'm now familiar with the system. I do know that at first sight it can appear overwhelming.

So, how best to use training to mitigate risk in your Sitecore project, or any other for that matter? 

I recommend a three-phase approach starting as early as possible in the project; if possible, start before the vendor is decided. Phase one is pretraining. Phase two comes shortly before go-live or UAT. And finally, phase three is ongoing maintenance to allow for staff churn, new features and so on.

Let’s look at each phase in a bit more detail. 

Phase One/Pretraining

Phase one is the first tranche of training. As stated above, it should be undertaken early in the project. Assuming you are starting a software project by selecting the vendor, once you have a short list, consider sending key stakeholders to classroom training on each of your shortlisted vendors. While it may be considered an unnecessary cost, it will provide good insight into the real-world use of the various offerings. Also consider that the cost of pretraining is insignificant compared to the cost of a failed project.

Once you have decided on a vendor, key business and technical staff should attend appropriate training for their role, if they have not already. Selection should be based upon who will be working with the implementation partner. Training at this early stage will equip the team with the skills to work with the implementer. It also shows attendees the capabilities of the chosen platform, so they know what to ask for.

If an implementation partner hasn't been selected yet, training this early can also be useful in their selection.

Delivery for this tranche should be public classroom training, unless you have the numbers to make a private course worthwhile (hint: 6+ staff is the tipping point from a public class to a private one). Classroom training allows for questions to be asked, and a good trainer can adapt the delivery to ensure the learning outcomes are achieved.

Phase two/Go Live

Phase two is where the bulk of your users receive training, because you are about to go live with your new website. By now it should be possible to train on, or at least reference, your application as it will be implemented. Various options are available. There is eLearning, although to be fair I am not a fan of it: it is too easy to click through and be distracted by other incoming tasks and emails without absorbing the information. However, it is cheap and repeatable, and for business users you could even consider bespoke eLearning; while the initial cost is high, you own the content, there are negligible ongoing costs, and it covers maintenance training (phase three). Classroom training such as public courses will be generic and provide a good understanding of the features of the chosen application, so it will probably be useful to have an internal follow-up to familiarise business users with their own application. Developers should be fine with just a public course and some time to get to know the code. It is, again, possible to have customised or completely bespoke training delivered in a classroom, where the lesson is about the application you are implementing within your organisation. Delivery costs should be approximately the same as attending public courses, but there will be development costs involved; how much will depend on the degree of customisation required.

Phase three/Maintenance

Once your website is live and all the users and developers are trained, we move to phase three, or maintenance. Additional training may be required if you upgrade the version of the application or introduce new features (depending on the scale of the changes).

Phase three training is primarily needed for staff churn. Developers should start by attending the public courses and then learn peer to peer; consider pair programming here. For business users it is beneficial to attend public classroom training to get a solid foundation to build on. Peer-to-peer onboarding alone, while tempting, can perpetuate bad habits and often leaves knowledge gaps in the real understanding of the application. At the very least you need a set of learning objectives that must be ticked off, preferably by your in-house super user.

Aceik is a Sitecore training provider. We teach public courses around the country; you can view the public schedule here: Upcoming Courses. New courses are being worked on constantly, so if you do not see what you want please enquire here or email David Newman at the link below. We also provide custom or private training: email David Newman in the first instance, or if you are already an Aceik customer, contact your account manager.

Sitecore PaaS and Ansible

Sitecore PaaS and Azure are a good match, and the idea here is to blend in Ansible for Sitecore PaaS infrastructure set up on Azure and vanilla site deployment.

Why would you use Ansible? Using PowerShell scripts with parameter files is the common approach; Ansible is a very valid alternative for organisations who already have Ansible in their tech stack or who prefer it over PowerShell.

Let's start with a brief overview of Ansible. Ansible is an automation tool for orchestrating the configuration and deployment of software. It is based on an agentless architecture, leveraging the SSH daemon. An Ansible playbook is a well-defined collection of scripts that defines the work for server configuration and deployment. Playbooks are written in YAML and consist of multiple plays, each defining the work to be done for a configuration on a managed server.

Ansible Playbooks help solve the following problems:

  1. Provisioning the Azure infrastructure required to run Sitecore, and deploying Sitecore itself. Ansible supports separating the provisioning of the infrastructure from the deployment of the Sitecore packages into "roles". These roles can then be shared between different playbooks, essentially allowing for re-use and the configuration of different playbooks for different purposes.
  2. Modularise the environment spin up into tasks/plays instead of one monolithic command doing everything in one go.
  3. By executing a single playbook, all the required tasks are coordinated and executed, resulting in a fully operational instance of Sitecore up and running and ready to be customised by the organisation's development team.

Ansible Playbooks help with workflow between teams:

  1. Provide flexibility for developers and DevOps teams to work together on separate pieces of work to achieve a common goal: a DevOps team can work on the Azure infrastructure setup while developers work on the application setup and vanilla deployment.
  2. Once the environment is provisioned, it is handed over to the development team to deploy the custom code and configuration on top of the vanilla site.

Ansible has a list of pre-built modules for Azure that can be leveraged for Azure infrastructure spin-up and deployment. A list is available here: https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html#azure.

We have used the azure_rm_deployment module during the setup journey. The best thing I liked about Ansible was the ability to structure the parameters in a clean and organised fashion to ensure future extensibility. Ansible supports the use of multiple parameter files, which allows for both shared and environment-specific parameter files. You will see an example later in the blog.

All the ARM templates, play books and tasks are source controlled and Ansible tower can be hooked into the Source control of your choice.

This allows (and enforces) all changes to the templates, playbooks and tasks to be made locally and then committed to the source control repository using familiar tools. Ansible will then retrieve the latest versions of these files from source control as the initial step on execution.

This option is more streamlined than having to manually upload the updated files to an online repository like a storage account and have Ansible/Azure access them using URLs.

Below is an illustrative example of a playbook. The roles mentioned here are just an example; you will need more roles for a complete Azure infrastructure and Sitecore deployment.
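
Since the original screenshot of the playbook is not reproduced here, the following is a minimal sketch of what such a playbook might look like. The role names and file layout are assumptions for illustration only.

# site-provision.yml - illustrative only; role names are hypothetical
- name: Provision Sitecore Azure infrastructure and deploy vanilla Sitecore
  hosts: localhost
  connection: local
  vars_files:
    # env and scname are passed in from the Ansible Tower job template (Extra Variables)
    - "vars/{{ env }}/{{ scname }}.yml"
  roles:
    - create-resource-group
    - create-redis-cache
    - create-app-services
    - deploy-sitecore-packages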

Note the variables {{ env }} and {{ scname }}. They are passed from the Ansible Tower job template into the playbook, and need to be configured in the EXTRA VARIABLES field of the job template.

The env name is your target environment for which you want to spin up the Sitecore Azure environment (this could be dev, test, regression or production), and the site name is the name of your website. This allows you to use the same playbook to spin up multiple sites for multiple environments based on the extra variables passed in the job template in Ansible Tower. This combination forms the path to the yml file which contains the definition of the parameters, per site, per environment (the vars_files path in the sketch above follows this convention).

  • Each role in the playbook is a play/task, and the naming convention is fairly self-explanatory.
  • Each task has a yml file and an ARM template (json file); however, it is not mandatory to have an ARM template for every task. For example:
    1. The task that creates the resource group has just a tasks yml file and no ARM template.
    2. The task that creates the Redis Cache resource contains both a tasks yml file and an ARM template.

There are tons of resources available in the Azure ARM template repo (https://github.com/Azure/azure-quickstart-templates) to get you started, which you can then customise to suit your project's requirements. The Sitecore ARM templates are also a good starting point for ideas. The idea is that you can grab snippets from these examples to form your own ARM templates.

I will be writing more blogs on Azure and Sitecore so stay tuned.

Top 5 Ways to Extend Sitecore HTML Cache

In 2017 I wrote a reasonably long post on all the different considerations a Sitecore Caching Strategy might cover.

Following on from that post, it's time to share some custom HTML cache extensions that we at Aceik may incorporate into our projects. This countdown of custom settings has been collected from across the Sitecore community.

5) Vary By Personalisation

This one (in my opinion) is a must-have if you're incorporating personalisation on your homepage.

I admit it will only be effective up to a certain number of content variations, and even then it needs to be used with caution. Still, if you can save your server and databases from getting hit and help keep Time to First Byte (TTFB) low, it's always worth it.

Please note that if you're displaying a customised message with data that is only relevant to that user, the number of variations may not make it worthwhile. On the other hand, if you're showing variations based on a handful of profile card match rules, we found it to be fairly effective.

Code Sample: ApplyVaryByPeronalizedDatasource.cs

Credits: Ahmed Okour

4) Vary By Resolution

It's a fairly common scenario that we want to display a different image based on the user's screen size. So it stands to reason that we need a way to differentiate this when it comes to caching our renderings.

This particular implementation was used in combination with Scott Mulligan's Sitecore Adaptive Image Library.

The Adaptive Image library stores the user's screen resolution via a cookie set in the front-end razor/javascript:

document.cookie = '@Sitecore.Configuration.Settings.GetSetting("resolutionCookieName") =' + Math.max(screen.width, screen.height) + '; path=/';
  • The first time around if no cookie is set it uses the default largest image size as the cache key.
  • If the cookie is set the cache incorporates the screen resolution.
args.CacheKey += "_#resolution:" + AdaptiveMediaProvider.GetScreenResolution();

Code 1:  ApplyVaryByResolution.cs

Code 2: AdaptiveMediaProvider.cs 

Credit:  Dadi Zhao

3) Vary By Timeout

This one's a little different: it requires not only a new checkbox but also a new "Single-Line Text" field that allows you to enter a timeout value. The idea, as you might have guessed, is for the rendering cache to expire after a certain amount of time.

Code 1: ApplyVaryByTimeout.cs

Credit: Dylan Young

2) Vary By Url

An oldie but a goodie. I'm a little surprised this one just hasn't made it into the out-of-the-box product. On the other hand, I can see how it could be overused if you don't understand the context it applies to. Essentially you take either the context item ID or the raw URL and make your rendering cache vary based on that key.

A good use case for this setting could be for navigation that requires the current page to always be highlighted.

Code 1: ApplyVaryByURLCaching.cs  (Context Item ID formula)

Code 2: ApplyVaryByRawUrlCaching.cs (Raw URL formula)

Credit:  The 10 other people that have blogged about this over the years.
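
For context, the linked implementations typically follow the same general pattern as the resolution example above: a processor in the mvc.renderRendering pipeline appends extra data to args.CacheKey. The sketch below illustrates that pattern for the raw-URL variant; the checkbox field name ("VaryByUrl") and namespaces are assumptions, and the linked files remain the reference implementations.

using Sitecore.Mvc.Pipelines.Response.RenderRendering;

namespace Foundation.Caching.Pipelines
{
    // Patched into mvc.renderRendering after Sitecore's GenerateCacheKey processor.
    public class ApplyVaryByRawUrl : RenderRenderingProcessor
    {
        public override void Process(RenderRenderingArgs args)
        {
            // Only applies when the rendering is cacheable and a cache key has been generated.
            if (!args.Cacheable || string.IsNullOrEmpty(args.CacheKey))
            {
                return;
            }

            // "VaryByUrl" is a custom checkbox added to the rendering's caching section (hypothetical field name).
            var renderingItem = args.Rendering.RenderingItem;
            if (renderingItem == null || renderingItem.InnerItem["VaryByUrl"] != "1")
            {
                return;
            }

            // Append the raw request URL so each page gets its own cached copy of the rendering.
            args.CacheKey += "_#url:" + Sitecore.Context.RawUrl;
        }
    }
}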

1) Vary By Website

Given Sitecore is an enterprise content management system, we often see multi-site implementations launched on the platform. It makes sense, then, to have an option to cache renderings that don't change much on one site but have different content on another.

Example Usage: A global navigation used across all sites that requires some content for the context site to show differently.

Code: ApplyVaryByWebsite.cs

Credit: Younes van Ruth

That rounds out the countdown of some of the top ways to extend Sitecore's out-of-the-box rendering cache. Your renderings will likely use a combination of these settings in order to achieve adequate caching coverage.

For a better idea of how you might add the top 5 above into Sitecore, please see the technical footnote below.



 

Technical Footnote:

All these extensions will add an extra checkbox in the Rendering cache tab within Sitecore.


In order for this check box to show up you need to add your custom checkbox fields to the template:

/sitecore/templates/System/Layout/Sections/Caching

You can achieve this in several ways, and there are a lot of other blogs online that describe how to add these custom checkboxes, so I won't go into a deep dive here.

With regard to the Helix architecture, let's outline one way you could set this up. Aceik has a module in the foundation layer that has all the custom cache checkboxes added to a single template (serialised in Unicorn within that module). The system template above is then made to inherit from your custom template in order to inherit the custom caching fields.


Don’t Forget your Sitecore Caching Strategy

Releasing a scalable Sitecore instance requires an in-depth knowledge of Sitecore's multi-layered caching architecture. Here is a run-through of what you will need to pull your project's Sitecore caching strategy together, including tips, tricks and pitfalls.

HTML/Rendering Cache Settings

HTML caching settings have been part of the core Sitecore product for many versions now. It’s worth chatting about these every now and again as they are critically important to the performance of your Sitecore instance.


Indeed, one of the first things we look for when reviewing a project that has performance complaints is whether the Sitecore HTML cache settings have been done at all. The difference that properly set up cache settings can have (compared to a site without any) really shouldn't be underestimated.

There are a lot of blog posts that define the above settings. Here is a good one to get you up to speed. We have also put some information on the various other layers of Sitecore cache at the bottom of this page.

Sample Caching Strategy Document

For the projects I run, I find it useful to have an overall caching strategy page that summarises the settings for every single rendering. This gives us a nice reference point whenever these settings need adjusting to see what might be affected.


 

Failure to Cache

In our experience, performance problems are usually reported by clients who have no caching settings turned on at all. This can cause the website to react very slowly or even bring the site down in times of heavy traffic.

Sitecore does have other layers of cache that will kick in (data, item and prefetch cache) if you fail to enable HTML caching. The first line of defence is the HTML caching, which when properly configured really takes the pressure off all these other areas of caching and prevents the database from getting hit.

Imagine the following scenario for our made up Sitecore client “Bikes R Us”:

  • A page that has a large extended navigation displaying links to 50 other sub-pages across the site.
  • The content of the page contains several rendering components that also contain links to a number of products across various categories.
  • The code to construct this page traverses not only the tree to build the navigation but also numerous product sub-categories to gather all the links.
  • Developer A – has had no proper exposure to caching strategies before and marks the page as done without any HTML caching settings enabled.
  • The site goes live a month later.
  • “Bikes R Us” marketing team starts advertising via EDMs a month later and things go really well. The campaign also goes viral on Social Media with a bike offer too good to refuse.
  • The page that developer A built experiences more traffic than ever expected.
  • Unfortunately, with no caching the code to construct the page is hit again and again.
  • Data layer and Item caching do assist to a point, however, Developer A never increased the default cache limits so calls to other pages are reducing the effectiveness of these layers overall.
  • After a few hours, traffic to the website increases to the point that the server runs out of CPU capacity and starts sending back 500 errors instead of serving up pages.

The scenario above is entirely avoidable when a proper caching strategy is completed as part of the development. Ideally, the caching strategy should be completed as each component of the website is developed and then tested to save on double handling. The caching strategy should then be reviewed, double checked and fully in place before a full performance test is done on the website.

Unfortunately, what often happens on big site builds is the deadline looms and the caching strategy which should be verified before go-live gets forgotten about. Failure to do so causes severe performance issues and leads to the client asking questions a few weeks/months later.

Incorrectly Configured Cache

On the opposite side of the coin, an incorrectly tuned cache can also cause havoc in some areas of the site. Examples include Web Forms for Marketers and member portals. Caching forms or components that contain data related to your members will:

  • Cause forms to behave unexpectedly
  • Potentially show sensitive user data belonging to one user to many other users
  • XDB personalised components may behave in an unexpected manner.

 

XDB and Caching

In general, it's fairly difficult to turn on caching for components that need to react to personalisation on a per-user basis. The problem is that if your entire homepage is making use of personalisation, you may not be able to cache certain components on that page at all. The inability to cache those components properly means the specifications of the server will need to be ramped up to deal with the additional processing that occurs with each page hit.

The “Vary By User” rendering setting is probably going to help you on personalised components up to a point.

Caching and Performance Testing

Caching is closely related to performance testing, and your overall caching strategy will affect the outcome of these tests. The aim of the performance test is to benchmark the amount of traffic your production environment can handle.

If you're hosting in the cloud, why not set up your servers to autoscale when needed?

An often-forgotten point is that performance testing should be complemented by stress testing above and beyond your expected traffic requirements. The main aim of this stress test is to identify the breaking point of your production environments so that you have this knowledge for the future. This will help your team to prepare for those extraordinary traffic events.

When it comes to performance/stress testing there is little point running the test from a single source or development computer. You will be limited by a single network connection's capacity, and this is not a true test, particularly for those making use of cloud hosting.

We always recommend using a service like blazemeter or Azure load tests.

** Thanks to Derek, Aceik's resident DevOps extraordinaire, for helping me with the above recommendations.

An additional cache setting

It's worth getting to know each of the HTML/rendering cache settings well, as you will need a detailed knowledge of each of them when looking at your strategy overall. One particular setting we found missing, and tend to use regularly, is the ability to vary only by URL ("Vary By URL"). A member of the team (Jose D) was kind enough to hook this up for us on a recent project. We are happy to share this with the wider community in the hope that you also find it useful for your projects.

Increase the default cache limits

Outrageously, this is also an often-overlooked part of getting your Sitecore project onto production. The performance tuning guide pretty much spells this out for you: you need to increase the default caching sizes that come out of the box with a vanilla Sitecore install. The caching limits provided are appropriate for developer machines but grossly inadequate for production environments, which really need a healthy cache size to be responsive. For instance, out of the box the HTML cache size is 50MB, while on a reasonable production server this should start at 100MB as a baseline, double the default.
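
As a rough illustration, the HTML cache size for a site can be bumped with a simple config patch such as the one below. This is a minimal sketch assuming the default "website" site definition; adjust the site name and size to suit your own solution and the performance tuning guide's recommendations.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <sites>
      <!-- Raise the HTML cache for the public site from the 50MB default to 100MB -->
      <site name="website" set:htmlCacheSize="100MB" />
    </sites>
  </sitecore>
</configuration>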

Take a look at Sitecore's performance tuning document (Section 4.1) in order to get these settings correct.

Fine Tuning

Configuring the cache correctly for your production server can take some time to get right. You will need to monitor the /sitecore/admin/cache.aspx page.

In order to get these settings right, have a good look at Sitecore's performance tuning document. Section 4.2 is very important and gives you a guide as to how cache tuning should be performed.

Prefetch Cache

Remember that fine-tuning your site will involve adjusting the items that Sitecore prefetches on startup. Once again the performance tuning document has all the details on how to do this. It's another important step to get things running smoothly. See the references at the bottom of this article to see how the prefetch cache fits into the overall caching architecture.

Sitecore.Caching.CustomCache

By implementing caching within your code to wrap complex logic, you can save your server a lot of processing effort. Particularly around I/O-intensive code, where a lot of data has to be shifted/filtered/searched, it really is a great idea and worth adding to your Sitecore coding arsenal.

To get up to speed on how to build a custom cache we recommend reading this document.

The main way to achieve your custom cache is to write an implementation of Sitecore.Caching.CustomCache. You can then wrap your logic with the custom cache to prevent the same code being hit every time.

var cacheKey = string.Concat(
    string.Format("MyCustomKey-{0}", Sitecore.Context.Language.Name), ":", filterParam);

var result = this.sitecoreCacheService.GetOrAddToCache(cacheKey, () =>
{
    // ... expensive lookup / filtering logic goes here ...
    return "MyDataResult";
});

return result;
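
For completeness, here is a minimal sketch of what the cache itself might look like. The sitecoreCacheService.GetOrAddToCache wrapper used above appears to be the author's own service and is not shown here; the class below simply demonstrates subclassing Sitecore.Caching.CustomCache, with names chosen for illustration.

using Sitecore.Caching;

namespace Foundation.Caching
{
    // A simple string cache built on Sitecore's CustomCache base class.
    public class MyCustomCache : CustomCache
    {
        // maxSize is in bytes, e.g. 10 * 1024 * 1024 for 10MB.
        public MyCustomCache(string name, long maxSize) : base(name, maxSize)
        {
        }

        public string Get(string key)
        {
            return GetString(key);
        }

        public void Set(string key, string value)
        {
            SetString(key, value);
        }
    }
}

A GetOrAddToCache-style wrapper then only needs to check Get for an existing entry and call Set after running the expensive delegate.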

 

Cloudflare / Akamai considerations

Many sites rely on a third-party service provider to sit in front of their website to add an additional layer of caching. This is great and helps sites scale to meet demand. It shouldn’t be used as an excuse not to do a caching strategy at all on the Sitecore side.

Remember that pages are likely to sit in the third-party cache only for a certain period of time. So, if your site has thousands of content pages that are each only accessed semi-regularly, users will bypass the third-party cache altogether. In these cases, the Sitecore cache becomes the next line of defence.

With regard to caching and Cloudflare: the cache will only kick in on the media library and your Web API endpoints if the Cache-Control header is set to public and given a valid max-age.

  1. For your WEB API endpoints, we found it handy to use the attribute mentioned in this stack overflow page.  See CacheControlAttribute.cs
  2. For media library URLs you need to enable:
<!--
  MEDIA RESPONSE - CACHEABILITY
  The HttpCacheability is used to set media response headers.
  Possible values: NoCache, Private, Public, Server, ServerAndNoCache, ServerAndPrivate
  Default value: public
-->
<setting name="MediaResponse.Cacheability" value="public" />

 

Disable Caching on CM, Enable on CD

Remember to disable HTML caching on CM environments as it may cause issues with the Experience Explorer and Preview modes.

  • Set cacheHtml=”false”  on your CM servers <site> node.

You can also disable the media cache on CM servers so that content editors never get cached images:

<?xml version="1.0"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
  <sitecore role:require="Standalone OR ContentDelivery OR ContentManagement OR Processing">
    <settings>
      <!--
        CACHING ENABLED
        Determines if caching should be enabled at all.
        Specify 'true' to enable caching and 'false' to disable all caching.
      -->
      <setting patch:instead="*[@name='Media.CachingEnabled']" role:require="Standalone OR ContentManagement" name="Media.CachingEnabled" value="false" />
      <setting patch:instead="*[@name='Media.CachingEnabled']" role:require="ContentDelivery" name="Media.CachingEnabled" value="true" />
    </settings>
  </sitecore>
</configuration>

 

Note: 

  • Don’t change the setting called “Caching.Enabled” on CM servers.

Reference Material:

Understanding the cache layers

The following is taken from http://learnsitecore.cmsuniverse.net/Developers/Articles/2009/07/CachingOverview.aspx

[Diagram: the Sitecore cache layers (prefetch, data, item and HTML caches), taken from the article above]

Definitions:

These definitions are described in the following stack overflow post:

Prefetch cache

This is item data pulled out from the database when the site starts up – from the Sitecore docs:

“Each database prefetch cache entry represents an item in a database. Database prefetch cache entries include all field values for all versions of that item, and information about the parent and children of the item.

Populating the prefetch cache results in smoother user experiences immediately after application restarts. Excessive use of prefetch caches can affect the time required for application initialization.”

Data cache

This cache minimises round trips to the database. It again pulls item information from Sitecore, but the difference is that it does so when the item is requested (rather than at start-up of the site); it will pull the data from the prefetch cache if it’s there, or go back to the database if not.

Item cache

This cache holds objects of type Sitecore.Data.Items.Item, which are what you work with in code; when an item is requested in code, Sitecore will look in the item cache first, then fall back to the data cache, then to the prefetch cache, and finally to the database.

HTML cache

This caches the HTML output from sublayouts and renderings; there is a nice level of configuration to cache the HTML by query string, by data, and so on.

 

Sitecore Helix: Let’s Talk Layers

Here are some notes from the decision-making process our team uses with regard to what goes where and in which layer. Of course, the Helix documentation does go over the guidelines, but it’s not until you start working with the architecture that things begin to become clear.

Project Layer

Definition: The Project layer provides the context of the solution. This means the actual cohesive website or channel output from the implementation, such as the page types, layout and graphical design. It is on this layer that all the features of the solution are stitched together into a cohesive solution that fits the requirements.

Comment: The Project layer is probably the most straightforward layer to understand. In our project, the modules in this layer remained lightweight and mostly contained Razor view files that allow content editors to build up the HTML structure of the pages.

The website content (under home), page templates and component templates are serialized by Unicorn and also live in this layer.

It’s also worth mentioning one particular gotcha you may hit in development to do with template inheritance and you can read more in this blog post.

Feature or Foundation?

Feature Definition: The Feature layer contains concrete features of the solution as understood by the business owners and editors of the solution, for example news, articles, promotions, website search, etc. The features are expressed as seen in the business domain of the solution and not by technology, which means that the responsibility of a Feature layer module is defined by the intent of the module as seen by a business user and not by the underlying technology. Therefore, the module’s responsibility and naming should never be decided by specific technologies but rather by the module’s business value or business responsibility.

Discussion: For our feature modules, we aimed for single concrete features that are independent of each other. They may contain views, templates, controllers, renderings, configuration changes and related business logic code to tie it all together. The point is to always stick to the rule: “Classes that change together are packaged together”.

When building a feature module, it’s also very handy to think about its removal as you build it. Keep asking yourself how easy it would be to roll back this module and what you would need to do. Doing so will help you keep those dependencies under control.

Foundation Definition: The lowest level layer in Helix is the Foundation layer, which as the name suggests forms the foundation of your solution. When a change occurs in one of these modules it can impact many other modules in the solution. This means that these modules should be the most stable in your solution in terms of the Stable Dependencies Principle.

 

Discussion: We found that our foundation modules usually consist of frameworks or code that provide a structural functionality to support the web application as a whole. Each foundation module may be used by multiple feature modules to provide them with the support they need to run properly. Our foundation modules contain API calls, configuration, ORM structures (Glass Mapper), initialisation code, interfaces and abstract base classes.

An important point is that unlike feature layer modules, the foundation layer modules can have dependencies to other foundation layer modules. If this was not the case it would be very difficult to construct the foundation layer in the first place.

For the most part, the team can make some fairly quick decisions about what goes where in the initial project planning, and placement becomes fairly obvious after you get familiar with the Habitat example project. The main dilemma you’re going to encounter is around where your repositories and services (key business logic) might need to sit.

Business Logic: What Goes Where? Help!

Let’s consider the definitions above; they seem straightforward enough. However, in agile projects where things may change rapidly or requirements are not immediately clear (which happens a lot), you’re inevitably going to need to make some judgment calls.

What am I talking about with the above statement? Well, let’s say one developer codes up a feature module at the beginning of the project. At first, it seems like that particular portion of code is only required by that particular feature. Down the track, a requirement surfaces whereby the same business logic needs to be used in another feature module. The Helix rules dictate:

  • Classes that change together are packaged together.
  • No dependencies between feature projects should exist.

A lesser developer may be tempted at this point to duplicate the code in both feature modules to get the job done quickly. This, however, breaks some fairly important fundamental coding standards that many of us try to stick to. Step back and consider the technical debt that duplicated code leaves behind versus dependencies between your Helix feature modules.

The solution to this dilemma is to refactor that feature logic and move it into the Foundation layer, after which any feature module that needs the same code can reference it there.
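As a rough sketch of what that refactor looks like (the namespaces and names below are illustrative, not taken from the Habitat project), the shared logic ends up behind a Foundation abstraction that any feature can depend on:

// Foundation layer: shared, stable logic that several features rely on.
namespace MySite.Foundation.Search.Services
{
    using System.Collections.Generic;

    public interface ISearchService
    {
        IEnumerable<string> FindItemNames(string keyword);
    }
}

// Feature layer: features depend downwards on Foundation, never on each other.
namespace MySite.Feature.News.Controllers
{
    using System.Collections.Generic;
    using MySite.Foundation.Search.Services;

    public class NewsController
    {
        private readonly ISearchService searchService;

        public NewsController(ISearchService searchService)
        {
            this.searchService = searchService;
        }

        public IEnumerable<string> LatestNews()
        {
            return this.searchService.FindItemNames("news");
        }
    }
}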

Remember that “with great power comes great responsibility”, and this is especially true when touching code in the Foundation layer. The code you touch there should be highly stable and well tested, as the Helix guidelines suggest.

Was the original decision a mistake?

On the flip side of the coin, it’s worth considering that it wasn’t a mistake to put that piece of code in the Feature layer to start with. If no one else needed to use the code at the time, and it was reasonably unforeseen that anyone else would need to, then it probably was the correct call.

Accept that things may change

The team members on your Helix project will need to be flexible and accepting of change. I think it’s worth being prepared for some open discussions within your team about what goes in the Foundation layer and what goes in the Feature layer. It’s certainly going to be open to interpretation and a topic of debate. A team that can work together and be open to a change of direction within their code structure will help the code base stay within the Helix guidelines as the project evolves.

Good luck!