JSS and OKTA: Blog 2 – Login via embedded OKTA Form (August RELEASE)

This blog post is based on the OKTA samples @ https://developer.okta.com/quickstart-fragments/angular/default-example/ and the OKTA sign-in widget that can be integrated into the page.

The branch mentioned above contains a working example that I will now run through in the rest of this blog. Once up and running, you should have a login form embedded in our JSS application that allows us to log in via OKTA.


Overview

Firstly, let's have a quick run-through of the changes we made to the first blog in order to integrate the widget.

  1. OKTA configuration settings
    • These are stored in a settings file in the root of the JSS application: jss-okta-config.json
    • All settings to do with our OKTA instance can be added in here.
  2. Create the placeholder component to contain the widget
    • In JSS we scaffold up a new component
    • src\app\components\okta-sign-in\okta-sign-in.component.ts
    • Have a look at https://github.com/TomTyack/jssokta/blob/feature/okta-sign-in-widget/JSS/src/app/components/okta-sign-in/okta-sign-in.component.ts
      1. ngOnInit()
        1. Note how the code dynamically imports the OKTA sign in widget.
        • import('@okta/okta-signin-widget')
        • This is necessary because the library contains a reference to window, which will break our server-side code if imported normally.
        • To work around this, we dynamically import the library only after verifying that the code is running client side (a minimal sketch of this pattern is shown after this list).
      2. detectTranslationLoading()
        • Wait for the dictionary service to be loaded so that we can inject the Dictionary into the form.
      3. bootupSignin()
        • Initialise the OKTA widget configuration, inject labels and URLs
      4. injectWidgetPhase()
        • Render the widget
    • This component contains the widget code and has a matching front-end HTML template in Angular.
      • The HTML file contains the following important tag:
      •  <div id="okta-signin-container"></div>
  3. Add the component to the /login route
  4. Adjust the navigation to include a new button for the widget
  5. Dictionary additions
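
To make the dynamic import approach concrete, here is a minimal sketch of the idea (this is not the exact component from the repository; the widget options and config values are placeholders):

import { Component, Inject, OnInit, PLATFORM_ID } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

@Component({
  selector: 'app-okta-sign-in',
  template: '<div id="okta-signin-container"></div>'
})
export class OktaSignInComponent implements OnInit {
  constructor(@Inject(PLATFORM_ID) private platformId: Object) { }

  async ngOnInit() {
    // The widget references `window`, so only load it in the browser,
    // never during server-side rendering.
    if (!isPlatformBrowser(this.platformId)) {
      return;
    }

    // Dynamic import keeps the library out of the server bundle.
    const OktaSignIn = (await import('@okta/okta-signin-widget')).default;

    const widget = new OktaSignIn({
      baseUrl: 'https://dev-XXXXXX.okta.com' // placeholder; the real component reads jss-okta-config.json
    });

    widget.renderEl(
      { el: '#okta-signin-container' },
      (res) => { /* handle the sign-in response */ },
      (err) => console.error(err)
    );
  }
}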

Demo Video

To see this JSS example in action, take a quick gander at the following video:


Demo Installation

  1. Clone the Repository
  2. Deploy the application. Follow the same instructions from the first blog. <<< UPDATE LINK
  3. Run the application. Follow the same instructions from the first blog. <<< UPDATE LINK
  4. Click on the “Login Embedded” link in the navigation
  5. Log in to OKTA (if not already) using the details you registered with.
  6. You should arrive back on the profile page
  7. Success!! hopefully 🙂
  8. Inside the OKTA Dashboard is a handy log that shows all login activity. This is a great way to see what is going on. Screenshot shown at the bottom.

Summary

That concludes the run-through of how to integrate the OKTA Angular SDK and Embedded Widget into Sitecore JSS. OKTA is a leader in user authentication management and having the ability to integrate it into our JSS applications is an exciting prospect. I hope this example is of use to you and your teams if you're considering the same technology mix.


JSS and OKTA: Blog 1 – Login via External OKTA Form

This blog post is based on the OKTA samples @ https://developer.okta.com/quickstart-fragments/angular/default-example/

The OKTA samples make use of the OKTA Angular SDK and allow you to set up a development OKTA cloud instance for testing the code.

For this blog I have integrated the OKTA example into the Angular JSS starter repository to form a new repository to accompany this blog: https://github.com/TomTyack/jssokta

This repository contains a working example that I will now run through in the rest of this blog. All you need to do is sign up for your own developer sandbox (OKTA) instance using a dummy user and test it out.


Overview

Firstly, let's have a quick run-through of the changes we made to the original examples in order to integrate them with the Angular JSS application.

  1. OKTA configuration settings
  2. Provide the Login and Logout Buttons
  3. Create the Callback Handler
    • The callback handler integration in JSS takes place inside: src\app\routing\routing.module.ts
    • JSS Example: https://github.com/TomTyack/jssokta/blob/master/JSS/src/app/routing/routing.module.ts
    • In this case I made the route: /implicitcallback
    • Importantly, the route must be added at the top of the list so that JSS routing doesn’t hijack the route before our OKTA module gets a chance (see the sketch after this list). I spent a little while scratching my head over this one when I originally added it at the bottom of the route config.
  4. Update your NgModule
    • This requires a little bit of adjustment from the original example.
    • Module integration is done via: src\app\app.module.ts
  5. Use the Access Token
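
As a rough illustration of that ordering (using the callback component that ships with the OKTA Angular SDK; the rest of your route table will contain the JSS routes generated by the starter):

import { Routes } from '@angular/router';
import { OktaCallbackComponent } from '@okta/okta-angular';

const routes: Routes = [
  // The OKTA callback route must be registered before the JSS catch-all
  // route, otherwise JSS routing hijacks /implicitcallback.
  { path: 'implicitcallback', component: OktaCallbackComponent },
  // ...the existing JSS routes, including the '**' catch-all, follow here...
];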

Demo Video

To see this JSS example in action, take a quick gander at the following video:

Note: In the video I started off the demo by running in Disconnected mode with localhost. Surprisingly this worked and it redirected back to the connected app domain. In reality a better test would have been to start on the same domain in integrated mode. I’m not sure that you could run this test end to end in disconnected.

Demo Installation

  1. Clone the Repository
  2. Sign up for an OKTA developer account
    • https://developer.okta.com/signup/
    • When it asks you what sort of application you want just click “Do this later”.
    • Confirm your email address and fill out the security questions and change your temporary password.
  3. In the top navigation within the OKTA Dashboard, set up your application (see the OKTA setup screenshots at the bottom of this post).
  4. Back in the JSS OKTA Repository (open it in VS Code or your editor of choice)
    • From the command line run: npm install
    • Open the file: jss-okta-config.json (in the root of the JSS application). A sketch of this file is shown after this list.
      • issuer: Switch out “INSERT-OKTA-ID” with the relevant ID from the same domain you’re viewing the OKTA portal in.
      • redirectUri: Replace ‘jssokta’ with ‘jss.okta.portal’ or whichever domain you set.
      • clientId: Found on the application > General tab. About 20 characters long.
  5. IIS
    • OKTA requires us to be running a domain with HTTPS, which makes it difficult to test this out in Disconnected mode.
    • Setup a Sitecore instance and make sure JSS is installed.
    • Add your new domain to the local hosts file and make sure it’s mapped to 127.0.0.1
    • Add the new domain to your Sitecore instance in IIS and make sure it is capable of HTTPS. Use the developer certificate as a default.
  6. Back in the JSS OKTA Repository
    • Go to sitecore\config\JSSOkta.config
      • hostName: jss.okta.portal
    • From the command line run: jss deploy app
      • Run through the setup as you would for any JSS application so that it connects to your Sitecore instance.
      • Sample config from my tests: see scjssconfig.json at the bottom of this blog
    • From the command line run: jss deploy config
    • From the command line run (again): jss deploy app -c -d
  7. Test out the deployed application:
    • Navigate to your domain, for example: https://jss.okta.portal/ (must have SSL)
    • If prompted about the SSL security warning, proceed and ignore it. “Proceed to jss.okta.portal (unsafe)”
    • Click on the “Login” link in the navigation
    • Log in to OKTA (if not already) using the details you registered with.
    • You should arrive back on the profile page
    • Success!! hopefully 🙂
    • Inside the OKTA Dashboard is a handy log that shows all login activity. This is a great way to see what is going on. Screenshot shown at the bottom.
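
For reference, the settings above describe a simple JSON file along these lines (the values are placeholders and the exact shape of the file is defined by the repository, so treat this as illustrative only):

{
  "issuer": "https://dev-INSERT-OKTA-ID.okta.com/oauth2/default",
  "redirectUri": "https://jss.okta.portal/implicitcallback",
  "clientId": "0oaXXXXXXXXXXXXXXXXX"
}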

Summary

That concludes the run-through of how to integrate the OKTA Angular SDK into Sitecore JSS. OKTA is a leader in user authentication management and having the ability to integrate it into our JSS applications is an exciting prospect. I hope this example is of use to you and your teams if you're considering the same technology mix.


OKTA Setup Screenshots and Sample Config

I have included screenshots of all my OKTA settings below, as I know this can be difficult to diagnose at times.


scjssconfig.json

{
  "sitecore": {
    "instancePath": "C:\\inetpub\\wwwroot\\test.dev.local",
    "apiKey": "{FB95B118-E04F-4D5B-9465-01AE804A2F5A}",
    "deploySecret": "r6bnvuhv13beudhix1lxnej2ci38u0i6kxhnpy22i6ps",
    "deployUrl": "http://jss.okta.portal/sitecore/api/jss/import",
    "layoutServiceHost": "https://jss.okta.portal"
  }
}

Introduce YML Linting to your JSS Apps

Intro: This post shares how and why you might like to introduce a YML linter into the build process for your next Sitecore JSS project. Particularly if you are relying on the YML import process when building a new application. Shout out to David Huby (Solutions Architect) for introducing team Aceik to yaml-lint.

Why would you want to do this on a JSS project?

When running the JSS application and testing the latest changes, we sometimes discovered some strange behaviour with dictionary items, or a page might not load properly.

A 404 page displayed after an invalid YML change was made

This can be caused by a small (incorrect) change in YML breaking individual routes. For example, let's say you have an incorrect tab or character in the wrong place. The YML syntax requires correct spacing and line returns to be valid, but this is not always obvious when done incorrectly. Sometimes only after you run the JSS application and test out the changes do you discover some strange behaviour or a page not loading properly.

To avoid this we found it handy to introduce a YML linter into the JSS build process. This solves the issue of someone making a small change to the YML files and breaking individual routes.

Here are the steps needed to introduce a YML linter into a node-based JSS project:

  1. Install yaml-lint (https://www.npmjs.com/package/yaml-lint)
  2. In the application root create the file .yaml-lint.json
  3. Update the package.json (the consolidated result is shown after this list)
    • Create a new script entry called yamllint
      • "yamllint": "node ./scripts/yaml-lint.js"
    • Update the script called 'build'
      • "build": "npm-run-all yamllint --serial bootstrap:connected build:client build:server",
  4. Download the following scripts file and place it in the /scripts folder
    1. https://github.com/TomTyack/jss/blob/feature/YAML-Linter/samples/react/scripts/yaml-lint.js
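
Put together, the scripts section of package.json ends up looking something like this (your existing build chain may differ slightly, so treat it as a guide):

{
  "scripts": {
    "yamllint": "node ./scripts/yaml-lint.js",
    "build": "npm-run-all yamllint --serial bootstrap:connected build:client build:server"
  }
}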

You can also see the pull request with the above changes at:

https://github.com/Sitecore/jss/pull/385/files

Demo

One Performance Blog to Rule them all – Combining the 6 Pillars of Speed

I have done a number of posts and talks at user groups on page speed and performance over the last few years. I have split the various topics into individual blog posts for the most part, as performance is dependent on many factors. What has really been missing is a complete demo of how all the different techniques come together to give your site a really good score. So that is what I intend to demo here: the combination of the 6 pillars of page speed in one Sitecore instance. To recap, here are the 6 pillars of page speed performance in my opinion:

1) Introduce image lazy loading

2) Ensure a cache strategy is in place and verify it's working. (must have adequately sized production servers)

3) Deploy image compression techniques

4) Use responsive images (must serve up smaller image sizes for mobile)

5) Introduce Critical CSS and deferred CSS files

6) Javascript is not a page speed friend. Defer Defer Defer

I have shown a subset of these previously, but crucially three critical pillars to do with imaging were hard to achieve at the time. This is now possible thanks to support for next-gen image compression (WebP), which I wrote about in my previous blog. With a little more time and investigation I have added image lazy loading, responsiveness and image compression to give a more complete picture of how each pillar impacts page speed.

Here are the tools and blogs I will use to achieve each of these:

1) Image Lazy Loading – Blog post by MVP Sitecore SAM and https://github.com/thinker3197/progressively

2) SXA Cache Settings – SXA official documentation

3) Next Image (WEBP) Image Compression – https://github.com/Aceik/ImageCompression

4) SXA Responsive Images – SXA official documentation

5) Introduce Critical CSS and deferred CSS files – https://github.com/Aceik/Sitecore-Speedy

6) Javascript is not a page speed friend. Defer Defer Defer – https://github.com/Aceik/Sitecore-Speedy

Alternatives: Mark Gibbons (MVP) recently upgraded the Dianoga image library to support WebP. Worth a look if you don’t want to use a third party API. It also supports a CDN. Vincent Lui (MVP) also pointed out in his recent SUGCON talk that you can achieve both image compression and image lazy loading via some of the modern CDNs. That is a great (easy) option if you are retrofitting these techniques to a live website.

I’m not going to dive deep into exactly how to set up each of these things as I think the individual links have sufficient instructions. I will show in the demo videos how each pillar impacts the HTML rendered. For the most part I am keen to demonstrate the impact of each of these line items and how each one will benefit your page speed score.

Before we begin it's important to understand that the algorithm (Lighthouse) behind Google’s Page Speed Insights doesn’t work in an exactly linear fashion. If you improve your score by ticking off one of the above, don’t expect ticking off another issue will have the same benefit. The last 20 points out of 100 (on the mobile scoring system) are the hardest to achieve based on what I have seen.

Live Demo Video Series that accompanies this blog:


Test Outline:

Google Page Speed Insights — Scores can fluctuate widely based on network latency. At times you will experience score fluctuations at different times of the day on the same site.

In general, treat these scores as a guide.

Here is the general outline of the VM that hosted the IIS instance for testing. I also put the VM under some basic load while running the tests.

  • All the test below used Sitecore 9.3 and the SXA habitat example site.
  • Test used the live Google Page Speed insights tool via the url: https://developers.google.com/speed/pagespeed/insights/
  • Sitecore was set up on an Azure VM.
  • The test was run 5 times, to get an average score.
  • The test page was the homepage of the Habitat site and the page was requested before running the test 5 times so that the instance could be considered warm.
  • EXM and XDB were not running on these test instances.
  • Test results are Mobile Page Speed scores only – this is the most important metric in today's environment, and good desktop scores are not really a challenge.
  • The default Habitat cache rendering for Navigation was left on for all tests. (without this the site fails under basic load altogether)
  • All tests were conducted under load in an attempt to replicate a production environment. For this I used a node package called loadtest.
  • SXA CSS/Javascript optimisations were turned on, but as I have mentioned before this has a minimal performance boost.

loadtest -c 10 --rps 10 http://baselinecd.dev.local/

10 requests per second with a concurrency of 10

Baseline Score

The Baseline score encompasses the habitat site installed with no modifications.

Result: 48 / 38 / 40 / 34 / 38 = 39.6/100 Average

Observation: Heavily penalised for CSS and Javascript loading times.


Image Lazy Loading

All images on the homepage were converted to be Lazily loaded. A single large blurred image was used as the placeholder for all images.

Result: 57 / 55 / 61 / 52 / 63 = 57.6/100 Average

Observation: Around the mid-point of the scale, image lazy loading has around a 15–20 point impact.


Rendering Cache Strategy

I have blogged extensively about this in the past, but setting up cache settings properly is so critical and has a major impact. It's also one of the easiest things to fix on a poorly performing Sitecore site. Also note the only way to accurately demonstrate the impact that rendering cache has on a site is to test it under load.

This test was run with more users per second: loadtest -c 10 --rps 30 http://baselinecd.dev.local/

With Cache Enabled:

49 / 56 / 41 / 54 / 54 = 50.8

Without Cache Enabled:

ERR_TIMED_OUT / ERR_TIMED_OUT / ERR_TIMED_OUT / ERR_TIMED_OUT / ERR_TIMED_OUT = You get the point 🙂

Observation: Rendering cache settings are critical and should be the first step in Page Load Speed refinement for a Sitecore site. 10 Point benefit observed once a site is stable under load.


Image Compression

Result: 60 / 58 / 61 / 62 / 62 = 60.6/100 Average

Observation: Around the mid-point of the scale, image compression has around a 20 point impact.


Critical CSS

Result: 74 / 78 / 79 / 81 / 81 = 78.6/100 Average

Observation: The combination of critical CSS in the head and Deferred styles provides a meaningful page speed boost. 25 Point observed benefit.


Deferred Javascript

Result: 92 / 94 / 93 / 94 / 94 = 93.4/100 Average

Observation: Javascript has a massive impact, reducing it drastically in the initial payload provides massive page speed improvements. 40 Point observed benefit.

You might think: hey, I will just do deferred Javascript and it will be all good. While this particular pillar/criteria does have the biggest impact, every site is different and, as mentioned earlier, scores fluctuate. The upper part of the scoring system is the hardest to reach. So while this is a great starting point, ignore the other speed pillars at your peril.


Responsive Images

Result: 56 / 54 / 59 / 56 / 60 = 57/100 Average

Observation: Around the mid-point of the scale, converting images to be responsive (srcset support) has about a 10 point impact.


Results Summary

Criteria | Average Score | Observed Benefit
No Change (SXA Habitat Home OOTB) | 41.8 / 100 | –
Image Lazy Loading | 57.6 / 100 | 15 Points
Sitecore Rendering/HTML Cache Settings | 50.8 / 100 | 10 Points
Image Compression (webp) | 60.6 / 100 | 20 Points
Critical CSS | 78.6 / 100 | 25 Points
Deferred Javascript | 93.4 / 100 | 40 Points
Responsive Images | 57 / 100 | 10 Points

The Pillars Combined

In isolation we can see the rough results of what each of the pillars might do to our page speed. The real question is: what does combining all these pillars produce?

Result: 100 / 100 / 100 / 100 / 100 = 100/100 Average

Observation: Do I expect this on an actual production site, realistically? That is certainly the dream, but in reality you should be over the moon if you make it into the 90s and pat yourself on the back if you get into the 80s as well. For any Sitecore site, if you make it into the 90s for mobile, you're doing an amazing job.

Admittedly, for the combined demo I skipped the responsive image pillar. SXA supports responsive images but not in combination with data attributes. It was going to be a bunch of work to write a custom SXA handler to support both lazy loading and responsiveness at the same time. That is not to say it's not possible. Either way the impact was minimal.

Conclusion

Page speed is so critical to SEO and visitor conversion. A slow site instantly turns away users on mobile and tablet devices. Admittedly, the final result shown above and in the video required all the right tools to be available to the Sitecore community; until recently you likely needed to bake your own solutions in order to get that over the line.

I think it's now becoming possible to aim fairly high (90/100 on mobile) with our page speed scores, but it does require getting most if not all of the architecture pillars above working together. It's worth learning each of these and understanding the pitfalls and limitations if you want really great page speed. Good luck and feel free to get in touch with any questions.

Footnote

The combined pillars can produce great results, but you still need to load test before going live. Check out the video below where I search for the breaking point using the loadtest tool. Please note that this node-based load test tool should just be used as a guide. Before go-live I recommend using a hosted load tool solution that has multiple geographic locations. Tests done from one network location or device will result in a network bottleneck and give you misleading results.

Bonus Video: https://www.youtube.com/watch?v=96YcxyhYh0U

Next Gen Image Compression in Sitecore

Spoiler: This post is not a post about Dianoga. I take a deep dive into Tiny PNG and Kraken.IO integrations with Sitecore. The results are worth checking out at the bottom.


At the start of the year, I picked up where I left off on page speed. Last year I took a deep dive into attempting to improve the page speed on Sitecore SXA sites by using some of Google's recommended techniques to structure the page. If you haven't already seen it, head on over to Sitecore Speedy and see some of the results we achieved.

I’ll be the first to admit that getting really good page speed scores isn’t easy. It takes a lot of different factors to come together. Just as a reminder, here is the main list that I would consider you need to check off to be winning at this game.

1) Introduce image lazy loading

2) Ensure a cache strategy is in place and verify it's working.

3) Dianoga is your friend for image compression

4) Use responsive images (must serve up smaller image sizes for mobile)

5) Introduce Critical CSS and deferred CSS files

6) Javascript is not a page speed friend. Defer Defer Defer

For this post, I'm going to look at an alternative to Dianoga. I'm a big fan of Dianoga and have used it over the years to crunch loads of oversized images introduced by content editors. I will, however, say that it can add complexity to deployments and CI/CD pipelines, and while some claim to have had success in Azure Apps, others have not.

On the flip side, content editors love Tiny PNG, which is one of the most popular image compression website utilities going around. Tiny PNG also has a developer API, so we have used this to build in a compression tool that can be used directly from your Sitecore toolbar.

The button below is hooked up to chat to Tiny PNG API. It will send across your image data and receive a compressed image back for storage.


Full disclosure: I'm not the first person to hook up Tiny PNG to the image library. I could find two other implementations.

One will allow you to run a PowerShell script to connect to the Tiny PNG API and the other is a module that connects to the API on upload.


This implementation of the Tiny PNG API introduces the following variances:

  • A button in the CMS to crunch any single image.
  • A scheduled task that will process any image not already processed.
  • Error handling for when the API limits are reached
  • Logging that outlines which images were processed.
  • Before and After compression information stored in any Image field of choice.
  • A feature toggle to turn the whole feature on/off

All the source code is available at: https://github.com/Aceik/ImageCompression

Now let's jump in and have a look at the results just from crunching a few images down:

Without image compression:


To compress the images on the page, we head on over to the “Compress” button in the Media tab that we have introduced.


A few examples of compression results taken from homepage images:

Before: 158.4 KB | After: 110.6 KB

Before: 197.8 KB | After: 135.3 KB

Before: 640.0 KB | After: 120.7 KB

After compressing all the images on the page the saving can be seen below.


So our total image size saving is 2.4MB – 1.3MB = 1.1MB

A pretty decent saving from just pressing the compress button on 27 homepage images. Also, consider that the user won’t notice any difference in image quality as this method uses lossless compression.


The compression achieved is great for helping us tick off one of the requirements for fast pages with Google. But as we are about to find out, Google will likely still complain about two other criteria. When it comes to Google Page Speed Insights, a page that does not have properly processed images will bring up the following three recommendations:

Here is a break down of how we address each one:

  1. Serve image in next-gen formats – Image formats like JPEG 2000, JPEG XR, and WebP often provide better compression than PNG or JPEG, which means faster downloads and less data consumption. Learn more.
  2. Properly Size images – Your CSS layouts should be responsive and use modern image retrieval techniques that adapt the image size requested based on screen size. Read More
  3. Efficiently encode images – The Tiny PNG integration above will take care of this. This is all about compressing the image to as small as it can get without a visible loss of quality.

So assuming you have already achieved number three using the Tiny PNG integration or another source, let us look at how we can solve the next-gen image requirement.

As a quick side note, the testing I did after converting the images to next-gen also ticked item number two above. I don't think this should be relied on, however, and it's best to incorporate responsive images into your projects from the beginning.

When looking into how to convert images to a next-gen format I opted to target webp. Google has a nice little page explaining the format here.

WebP is natively supported in Google Chrome, Firefox, Edge, the Opera browser, and by many other tools and software libraries.

Once again I opted to look for an API that would provide the conversion for me so that Sitecore could easily connect, send the image and then store the result, all without any extra hosting requirements. I opted to go with the Kraken.IO image APIs as they have a free 100MB trial offer, and free is a good price when building proof of concepts. The integration is all available on Aceik's github repository. Just sign up for your own API keys, add them to the module settings (in the CMS) and start converting.

To test out just how much this would impact the image payload size for the whole page, I once again converted all the images on the SXA habitat homepage.

Here are the results:


So our total image size saving is now 2.4MB – 0.79MB = 1.61MB

The reduction in size from a non-compressed image to a webp formatted image is truly impressive.

A few examples:


Conclusion

I can only conclude by saying that if page speed is really an important factor for your Sitecore project take a look at Tiny PNG. If you want to go next level with your image formats and achieve great compression try out the Kraken.IO API integration as it could be well worth the small subscription fee.


Results

Compression | Total Image Size | Saving
None | 2.4 MB | –
Tiny PNG | 1.3 MB | 1.1 MB
Kraken.IO (webp) | 0.79 MB | 1.61 MB

Notes:

The module and code mentioned in this blog post are available on Aceik's Github account. This also contains installation instructions.

GitHub: https://github.com/Aceik/ImageCompression

After installation, your content editors will simply be able to compress and convert images as needed from within the CMS.


The Github Readme contains a rundown of the standard settings inside Sitecore, as shown below:

Accessing the JSS Dictionary in C#

This is a quick post to guide developers through gaining access to the JSS Dictionary in the backend C# code.

Why would you want to be able to do this?

The reason we originally had to do this was that our JSS Angular application had editable content from the dictionary that we also wanted to access in C#. In our particular case, it was to inject the content into an email template that would be sent to the user. To save duplicating content it made sense for both the front end and C# to have access to the same dictionary.

Where do we start?

The following assumes you have a Sitecore instance with JSS installed and a JSS application you are working on. Grab your favourite de-compilation tool (I use ILSpy) and locate the following DLL in the bin folder of your running Sitecore instance:

Sitecore.JavaScriptServices.Globalization.dll

Once you have that open in ILSpy you want to have a search for DictionaryServiceController

public class DictionaryServiceController : ApiController

The following method is what we want to use in our C# code:

public DictionaryServiceResult GetDictionary(string appName, string language)

It takes the unique application name (that belongs to your application) and the language (“en”) as parameters. As a result, you will get back a dictionary object that you can use to look up your content.

This is the controller that would normally be called via an API on the front end. So how do we call it from a normal C# service, for instance?

Firstly, the controller has a constructor that has three parameters that are injected via DI (Dependency Injection).

IConfigurationResolver configurationResolver, 
BaseLanguageManager languageManager, 
IApplicationDictionaryReader appDictionaryReader

Using ILSpy once again you can find that the above three parameters are all set up in the DI container via RegisterDependencies.cs in various JSS assemblies. The Controller itself is already registered in the DI Container as well, which is very handy.

If you have a look at showconfig.aspx in the admin tools you can see that a lot of the dependencies are registered via RegisterDependencies.cs

For example:

<configurator type="Sitecore.JavaScriptServices.AppServices.RegisterDependencies, Sitecore.JavaScriptServices.AppServices" patch:source="Sitecore.JavaScriptServices.AppServices.config"/>

Dependency injection is a whole other topic so I will leave that to your personal preference as to how you achieve it. For the purposes of the following complete code example, I have used the Services Attribute style setup. If you want to keep consistency with Sitecore, you could also set up a RegisterDependencies.cs class of your own and use a patch file to kick it off.


Example Service:

using Sitecore.Foundation.DependencyInjection; // Borrowed from habitat
using Sitecore.Diagnostics;
using Sitecore.JavaScriptServices.Globalization.Controllers;

namespace Sitecore.Foundation.JSS.Services
{
    public interface ITranslationService
    {
        string TranslateKey(string key);
    }

    [Service(typeof(ITranslationService), Lifetime = Lifetime.Transient)] 
    public class TranslationService : ITranslationService
    {
        private readonly DictionaryServiceController _controller;
        
        public TranslationService(DictionaryServiceController controller)
        {
            this._controller = controller;
        }

        public string TranslateKey(string key)
        {
            var dictionary = GetDictionary();
            if (dictionary.phrases.ContainsKey(key))
                return dictionary.phrases[key];
            Log.Error($"Dictionary key '{key}' not found", this);
            return string.Empty;
        }

        private DictionaryServiceResult GetDictionary(string appName = "myAppName", string language = "en")
        {
            // Pass the language parameter through rather than hard-coding "en".
            return _controller.GetDictionary(appName, language);
        }
    }
}

Above is a simple service that can be used from just about anywhere in your C# code.

Simply change the appName and language as required to access the correct JSS dictionary. Also, remember to publish your app dictionary to the web database or you may get no results.
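
For example, once registered, the service can be injected wherever the content is needed. A minimal hedged sketch (the consuming class and the dictionary key are illustrative, not part of the original example):

using Sitecore.Foundation.JSS.Services;

namespace Sitecore.Foundation.JSS.Example
{
    public class WelcomeEmailBuilder
    {
        private readonly ITranslationService _translationService;

        // Resolved from the DI container configured earlier.
        public WelcomeEmailBuilder(ITranslationService translationService)
        {
            _translationService = translationService;
        }

        public string BuildSubjectLine()
        {
            // Looks up the same dictionary entry the JSS front end uses.
            return _translationService.TranslateKey("email-welcome-subject");
        }
    }
}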


There we have it, accessing the JSS dictionary from C# in a nutshell. I hope this helps some other folks get this done quickly on JSS builds.

An Introduction to Sitecore Pipelines

What is a Sitecore Pipeline?

In Sitecore, pipelines describe a series of discrete steps that are taken to achieve some objective.  If you think about writing code to handle an HTTP request, for example, you could create a monolithic class that does the job from end to end; however, the pipeline approach uses a number of classes that are invoked in order: first do this, then do that, and so on.

There are many pipelines in Sitecore.  You can view existing pipelines and the processors that they call using Sitecore Rocks in Visual Studio.  Right click on the site connection and choose Manage.  Click on the Pipelines tab in the Visual Studio edit pane.  Click on one of the listed pipelines to see all the processors that are executed as part of the pipeline.  They range from pipelines with a single processor at one end of the scale to the httpRequestBegin pipeline with 45 distinct steps at the other.

Why are they useful?

Thinking about the monolithic class above, it would be very difficult to maintain or modify.  It would also be truly massive.  So modularising it would make it much easier to maintain.

It also makes it much easier to customise.  For example, if we consider a pipeline that has three processors it might be drawn something like

To add to the existing functionality we could include a new step like

Where we add some custom function after Step 1 and before Step 2. 

We could also replace an existing step completely

How are they defined?

Pipelines are defined using XML in sitecore.config.  In the example below we can see the httpRequestEnd pipeline definition.  The three processors are called in the order in which they are listed.  A parameters object is passed between them to provide continuity.  The final processor is also receiving four additional parameters from the config file.

<?xml version="1.0" encoding="utf-8"?>
<sitecore database="SqlServer" xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:security="http://www.sitecore.net/xmlconfig/security/">
…
<pipelines>   
… 
<httpRequestEnd>
      <processor type="Sitecore.Pipelines.PreprocessRequest.CheckIgnoreFlag, Sitecore.Kernel" />
      <processor type="Sitecore.Pipelines.HttpRequest.EndDiagnostics, Sitecore.Kernel" role:require="Standalone or ContentManagement" />
      <!--<processor type="Sitecore.Pipelines.HttpRequest.ResizePicture, Sitecore.Kernel"/>-->
      <processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel">
        <ShowThresholdWarnings>false</ShowThresholdWarnings>
        <TimingThreshold desc="Milliseconds">1000</TimingThreshold>
        <ItemThreshold desc="Item count">1000</ItemThreshold>
        <MemoryThreshold desc="KB">10000</MemoryThreshold>
      </processor>
    </httpRequestEnd>
…
    </pipelines>
…
  </sitecore>

How can I work with Sitecore Pipelines?

Existing Sitecore pipelines can be customised as outlined above, and it is also possible to create a brand new pipeline from scratch.

Customise existing Pipelines

The first thing to do is to create a configuration patch to add the new processor class into the pipeline at the desired location.  As you can see from the code, it is possible to pass variables to the processor.  Here we are adding a processor called NewsArticleLogEntryProcessor into the httpRequestBegin pipeline after the ItemResolver.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <httpRequestBegin>
        <processor type="Fourbyclub.CustomCode.CustomCode.Pipelines.httpRequestBegin.NewsArticleLogEntryProcessor,Fourbyclub.CustomCode" patch:after="processor[@type='Sitecore.Pipelines.HttpRequest.ItemResolver, Sitecore.Kernel']">
          <NewsArticleTemplateID>{B871115E-609F-44BB-91A4-A37F5E881CA6}</NewsArticleTemplateID>
        </processor> 
      </httpRequestBegin>
    </pipelines>
  </sitecore>
</configuration>

Then we need to create the processor.  Inherit from HttpRequestProcessor and implement the Process method.  All we are doing here is writing to the log if the requested item is a NewsArticle.

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.httpRequestBegin
{
    using Sitecore.Pipelines.HttpRequest;
    using Sitecore.Diagnostics;

    // TODO: \App_Config\include\NewsArticleLogEntryProcessor.config created automatically when creating NewsArticleLogEntryProcessor class.

    public class NewsArticleLogEntryProcessor : HttpRequestProcessor
    {
        
        // Declare a property of type string:
        private string _newsArticleTemplateID;
        public string NewsArticleTemplateID { get { return _newsArticleTemplateID; } set { _newsArticleTemplateID = value; } }

        public override void Process(HttpRequestArgs args)
        {
            Assert.ArgumentNotNull(args, "args");
            if ((Sitecore.Context.Item != null) && (!string.IsNullOrEmpty(_newsArticleTemplateID)))
            {
                Assert.IsNotNull(Sitecore.Context.Item, "No item in parameters");
                // use util to get id from string property
                if (Sitecore.Context.Item.TemplateID == Sitecore.MainUtil.GetID(_newsArticleTemplateID))
                {
                    // view in log file later, so add FourbyclubCustomCode
                    Log.Info(string.Format("FourbyclubCustomCode: News Article requested is {0} and the item path is {1}", Sitecore.Context.Item.DisplayName, Sitecore.Context.Item.Paths.FullPath), this);
                }
            }
        }

    }
}

Create a new Pipeline

To create a new pipeline is a little more work but still very simple.  The first thing to do is to declare the pipeline with a configuration patch.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <logWriter>
        <processor type="Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter.logWriterProcessor,Fourbyclub.CustomCode" />
      </logWriter>
    </pipelines>
  </sitecore>
</configuration>

The XML above will create a pipeline called logWriter that has a single processor called logWriterProcessor, which will be in the Fourbyclub.CustomCode.dll.

Pipelines must pass a PipelineArgs object to each processor as it is called so that needs to be defined

using Sitecore.Pipelines;

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter
{
    public class LogWriterPipelineArgs : PipelineArgs
    {
        public string LogMessage { get; set; }
    }
}

At least one processor is needed to do the work of our pipeline

using Sitecore.Diagnostics;

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter
{
    public class logWriterProcessor
    {
        public void Process(LogWriterPipelineArgs args)
        {
            Log.Info(string.Format("FourbyclubCustomCode: The message was {0}", args.LogMessage), this);
        }
    }
}

Finally we need to invoke the pipeline in our code somewhere.  Instantiate the LogWriterPipelineArgs and set the LogMessage.  Then call CorePipeline.Run and pass it the name of the pipeline and the args object.

var pipelineargs = new LogWriterPipelineArgs();
pipelineargs.LogMessage = "Requested item is not a News Article";
CorePipeline.Run("logWriter", pipelineargs);

Conclusion

Thank you for reading and I hope that this short introduction to Sitecore pipelines has shown the power of pipelines to customise Sitecore and help to build maintainable code.  However, we should always check if there is a way we can implement something using existing Sitecore functionality rather than going for a pipeline as a first resort.  As with any customisation, each pipeline or processor added has the potential to increase the challenge of Sitecore upgrades.

Training for your Sitecore project

To misquote Benjamin Franklin: “By failing to train you are training to fail”.  You can provide the best tools in the world but unless people know how to use them, they are useless.  And the more complex the tool the more apparent that becomes.  I think Sitecore is a simple, intuitive tool, but then I have been delivering training on Sitecore for eight years.  I'm now familiar with the system.  But I do know that at first sight it can appear overwhelming.

So, how best to use training to mitigate risk in your Sitecore project, or any other for that matter? 

I recommend a three-phase approach starting as early as possible in the project.  If it is possible start before the vendor is decided.  Phase one is pretraining.  Phase two shortly before Go Live or UAT. And finally phase three is ongoing maintenance to allow for staff churn, new features etc.

Let’s look at each phase in a bit more detail. 

Phase One/Pretraining

Phase one is the first tranche of training.  As stated above it should be undertaken early in the project.  Assuming you are starting the project by selecting the vendor for a software project; once you have a short list consider sending key stakeholders to classroom training on each of your shortlisted vendors.  While it may be considered an unnecessary cost it will provide a good insight into the real-world use of the various offerings.  And also consider that the cost of pretraining is insignificant compared to the cost of a failed project.

Once you have decided on a vendor, if they have not already, key business and technical staff should attend appropriate training for their role.  Selection should be based upon who will be working with the implementation partner.  Training at this early stage will equip the team with the skills to work with the implementer.  It also shows attendees the capabilities of the chosen platform; they know what to ask for. 

If an implementation partner hasn’t been selected yet training this early can also be useful in their selection.

Delivery for this tranche should be public classroom training, unless you have the numbers to make a private course worthwhile (hint 6+ staff is the tipping point from a public class to a private one).  Classroom training allows for questions to be asked and a good trainer can adapt the delivery to ensure the learning outcomes are achieved.

Phase two/Go Live

Phase two is where the bulk of your users receive training – this is because you are about to go live with your new website.  By now it should be possible to train on, or at least reference, your application as it will be implemented.  Various options are available.  eLearning, although to be fair I am not a fan of eLearning, it is too easy to click through and be distracted by other incoming tasks and emails without absorbing the information.  However, it is cheap and repeatable.  For business users, you could even consider bespoke eLearning.  While the initial cost is high you own the content and there are negligible ongoing costs plus it covers maintenance training or phase three.  Classroom training such as public courses will be generic and provide a good understanding of the features of the chosen application – it will probably be useful to have an internal follow up to familiarise business users with their application.  Developers should be fine with just a public course and some time to get to know the code.  It is, again, possible to get customized or completely bespoke training delivered in a classroom – where your lesson is about the application you are implementing within your organisation.  Delivery costs should be approximately the same as attending public courses but there will be development costs involved; how much will depend on the degree of customisation required

Phase three/Maintenance

Once your website is live and all the users and developers are trained, we move to phase three, or maintenance.  Additional training may be required if you upgrade the version of the application or introduce new features (depending on the enormity of the changes).

Phase three training is primarily needed for staff churn.  Developers should start by attending the public courses and then learn peer to peer, consider pair programming here.  For business users it is beneficial to attend public classroom training to get the solid foundation to build on.  Just peer to peer onboarding, while tempting, can perpetuate bad habits and often leaves knowledge gaps in the real understanding of the application.  At the very least you need a set of learning objectives that must be ticked off preferably by your in-house super user.

Aceik is a Sitecore training provider.  We teach public courses around the country.  You can view the public schedule here: Upcoming Courses.  New courses are being worked on constantly, so if you do not see what you want please enquire here or email David Newman at the link below.  We also provide custom or private training; email David Newman in the first instance, or if you are an Aceik customer already, contact your account manager.

Sitecore PaaS and Ansible

Sitecore PaaS and Azure are a good match, and the idea here is to blend in Ansible for Sitecore PaaS infrastructure set up on Azure and vanilla site deployment.

Why would you use Ansible? Using Powershell scripts with parameter files is the common approach. Ansible is a very valid alternative for organisations who have Ansible in their tech stack already or for those that prefer it over Powershell.

Let's start with a brief overview of Ansible. Ansible is an automation tool to orchestrate configuration and deployment of software. Ansible is based on an agentless architecture, leveraging the SSH daemon. An Ansible playbook is a well defined collection of scripts that defines the work for server configuration and deployment. Playbooks are written in YAML and consist of multiple plays, each defining the work to be done for a configuration on a managed server.

Ansible Playbooks help solve the following problems:

  1. Provision the Azure infrastructure required to run Sitecore and deploy Sitecore itself. Ansible supports the ability to separate the provisioning of the infrastructure from the deployment of the Sitecore packages into “roles”. These roles can then be shared between different playbooks, essentially allowing for re-use and the configuration of different playbooks for different purposes.
  2. Modularise the environment spin up into tasks/plays instead of one monolithic command doing everything in one go.
  3. By executing a single playbook, all the required tasks are coordinated and executed, resulting in a fully operational instance of Sitecore up and running and ready to be customised by the organisation's development team.

Ansible Playbooks help with workflow between teams:

  1. Provide flexibility for Developers and DevOps teams to work together on separate pieces of work to achieve a common goal. A DevOps team can work on the Azure infrastructure set up and Developers can work on application set up and vanilla deployment.
  2. Once the environment is provisioned, hand it over to the development team for each site to deploy the custom code and configuration on the vanilla site.

Ansible has a list of pre-built modules for Azure that can be leveraged for Azure infrastructure spin up and deployment. A list is available here: https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html#azure.

We have used the azure_rm_deployment module during the setup journey. The best thing I liked about Ansible was the ability to structure the parameters in a clean and organised fashion to ensure future extensibility is maintained. Ansible supports the use of multiple parameter files. This allows for both shared and environment specific parameter files. You will see an example later in the blog.

All the ARM templates, play books and tasks are source controlled and Ansible tower can be hooked into the Source control of your choice.

This allows/enforces all changes to the templates, play books and tasks to be made locally and then committed to the source control repository using familiar tools. Ansible will then retrieve the latest versions of these files from source control as the initial step on execution.

This option is more streamlined than having to manually upload the updated files to an online repository like a storage account and have Ansible/Azure access them using URLs.

Below is a sketch of an example playbook (shown after the role notes below). The roles mentioned are just an example; you will need more roles for a complete Azure infrastructure and Sitecore deployment.

Note the variables {{ env }} and {{ scname }}. They are passed from the Ansible Tower job template into the playbook. These variables need to be configured in the Extra VARIABLES field as shown below in the example job template.

The env name is your target environment for which you want to spin up the Sitecore Azure environment. This could be dev, test, regression or production, and the site name is the name of your website. This allows you to use the same playbook to spin up multiple sites for multiple environments based on the extra variables passed in the job template in Ansible Tower. This combination forms the path to the yml file which contains the definition of the parameters, per site, per environment. Below is a snapshot of the variable file structure.

  • Each role in the playbook is a play/task and the naming convention is fairly self-explanatory.
  • Each task has a yml file and an ARM template (json file). However, it is not mandatory to have an ARM template for each of the tasks.
  1. The resource group role, for example, might have just the task's yml file and no ARM template.

2. The Redis Cache role will contain both the task's yml file and the ARM template.
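
As a rough sketch, a playbook along the lines described above might look like this (the role names and variable file paths are illustrative, not the exact ones from our setup):

---
- name: Provision Sitecore PaaS environment and deploy vanilla Sitecore
  hosts: localhost
  connection: local
  vars_files:
    # env and scname are passed in from the Ansible Tower job template
    # as extra variables, e.g. env=dev scname=mysite
    - "vars/shared.yml"
    - "vars/{{ scname }}/{{ env }}.yml"
  roles:
    - create-resource-group
    - create-redis-cache
    - deploy-sitecore-arm-templates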

There are tons of resources available in the Azure ARM template repo https://github.com/Azure/azure-quickstart-templates to get you started. You can then customize them to suit your project's requirements. The Sitecore ARM templates are a good starting point which you can utilize to get some ideas. The idea is that you can grab snippets from these examples to form your own ARM template.

I will be writing more blogs on Azure and Sitecore so stay tuned.

Top 5 Ways to Extend Sitecore HTML Cache

In 2017 I wrote a reasonably long post on all the different considerations a Sitecore Caching Strategy might cover.

Following on from that post, it's time to share some custom HTML cache extensions that we at Aceik may incorporate into our projects. This countdown of custom settings has been collected from across the Sitecore community.

5) Vary By Personalisation

This one (in my opinion) is a must-have if you're incorporating personalisation on your homepage.

I admit it will only be effective up to a certain number of content variations and even then needs to be used with caution. Still, if you can save your server and databases from getting hit and help keep Time to First Byte (TTFB) low, it's always worth it.

Please note that if you’re displaying a customised message with data that is only relevant to that user, the number of variations may not make it worthwhile.  On the other hand, if you're showing variations based on a handful of profile card match rules, we found it to be fairly effective.

Code Sample: ApplyVaryByPeronalizedDatasource.cs

Credits: Ahmed Okour

4) Vary By Resolution

It's a fairly common scenario that we want to display a different image based on the user's screen size. So it stands to reason that we would need a way to differentiate this when it comes to caching our renderings.

This particular implementation was used in combination with Scott Mulligan's Sitecore Adaptive Image Library.

The Adaptive Image library stores the user's screen resolution via a cookie set in the front-end razor/JavaScript:

document.cookie = '@Sitecore.Configuration.Settings.GetSetting("resolutionCookieName") =' + Math.max(screen.width, screen.height) + '; path=/';
  • The first time around if no cookie is set it uses the default largest image size as the cache key.
  • If the cookie is set the cache incorporates the screen resolution.
args.CacheKey += "_#resolution:" + AdaptiveMediaProvider.GetScreenResolution();

Code 1:  ApplyVaryByResolution.cs

Code 2: AdaptiveMediaProvider.cs 

Credit:  Dadi Zhao

3) Vary By Timeout

This one's a little different: it requires not only a new checkbox but also a new “Single-Line text” field that allows you to enter a timeout value.  The idea, as you might have guessed, is for the rendering cache to expire after a certain amount of time.

Code 1: ApplyVaryByTimeout.cs

Credit: Dylan Young

2) Vary By Url

An oldie but a goodie. I'm a little surprised this one just hasn't made it into the out-of-the-box product. On the other hand, I can see how it could be overused if you don't understand the context it applies to. Essentially you can take either the Context Item ID or the raw URL and make your rendering cache vary based on that key.

A good use case for this setting could be for navigation that requires the current page to always be highlighted.

Code 1: ApplyVaryByURLCaching.cs  (Context Item ID formula)

Code 2: ApplyVaryByRawUrlCaching.cs (Raw URL formula)

Credit:  The 10 other people that have blogged about this over the years.

1) Vary By Website

Given Sitecore is an Enterprise content management system we often see multi-site implementations launched on the platform. It makes sense then that you have an option to cache renderings that don’t change all that much on one site but have different content on another.

Example Usage: A global navigation used across all sites that requires some content for the context site to show differently.

Code: ApplyVaryByWebsite.cs

Credit: Younes van Ruth
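
To give an idea of the general shape of these processors, here is a minimal hedged sketch of a vary-by-website style cache key processor (the namespace, class and checkbox field names are illustrative rather than the exact code linked above):

using Sitecore.Mvc.Pipelines.Response.RenderRendering;

namespace MyProject.Foundation.Caching.Pipelines.RenderRendering
{
    public class ApplyVaryByWebsite : RenderRenderingProcessor
    {
        public override void Process(RenderRenderingArgs args)
        {
            // Only act when the rendering is cacheable and a base cache key exists.
            if (args.Rendering == null || !args.Cacheable || string.IsNullOrEmpty(args.CacheKey))
            {
                return;
            }

            // "VaryByWebsite" is the assumed name of the custom checkbox field
            // added to the caching template (see the technical footnote below).
            if (args.Rendering.RenderingItem?.InnerItem["VaryByWebsite"] != "1")
            {
                return;
            }

            // Append the context site name so each site gets its own cache entry.
            args.CacheKey += "_#website:" + Sitecore.Context.Site.Name;
        }
    }
}

Processors like this are typically patched into the mvc.renderRendering pipeline so that they run as the rendering cache key is generated.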

That rounds out the countdown of some of the top ways to extend Sitecore's out-of-the-box rendering cache. Your renderings will likely use a combination of these settings in order to achieve adequate caching coverage.

For a better idea of how you might add the top 5 above into Sitecore, please see the technical footnote below.



 

Technical Footnote:

All these extensions will add an extra checkbox in the Rendering cache tab within Sitecore.


In order for this check box to show up you need to add your custom checkbox fields to the template:

/sitecore/templates/System/Layout/Sections/Caching

You can achieve this in several ways, and there are a lot of other blogs online that describe how to add these custom checkboxes, so I won't go into a deep dive here.

With regard to the Helix architecture, let's outline one way you could set this up. Aceik has a module in the foundation layer that has all the custom cache checkboxes added to a single template (serialized in Unicorn within that module). The system template above is then made to inherit from your custom template in order to inherit the custom caching fields.

custom