Introduce YML Linting to your JSS Apps

Intro: This post shares how and why you might like to introduce a YML linter into the build process for your next Sitecore JSS project, particularly if you are relying on the YML import process when building a new application. Shout out to David Huby (Solutions Architect) for introducing team Aceik to yaml-lint.

Why would you want to do this on a JSS project?

When running the JSS application and testing the latest changes, we sometimes discovered strange behaviour with dictionary items, or found that a page would not load properly.

A 404 page displayed after an invalid YML change was made

This can be caused by a small, invalid change in the YML breaking individual routes. For example, let's say you have a stray tab or a character in the wrong place. YML syntax requires correct spacing and line returns to be valid, and mistakes are not always obvious; often it is only after you run the JSS application and test the changes that you discover the strange behaviour or a page that won't load.

To avoid this we found it handy to introduce a YML linter into the JSS build process, so a small change that breaks a route is caught at build time rather than discovered at run time.

Here are the steps needed to introduce a YML linter into a node-based JSS project:

  1. Install yaml-lint (https://www.npmjs.com/package/yaml-lint)
  2. In the application root create the file .yaml-lint.json
  3. Update the package.json
    • Create a new script entry called yamllint
      • "yamllint": "node ./scripts/yaml-lint.js"
    • Update the script called 'build'
      • "build": "npm-run-all yamllint --serial bootstrap:connected build:client build:server",
  4. Download the following script file and place it in the /scripts folder (a minimal sketch of such a script is shown below the list)
    1. https://github.com/TomTyack/jss/blob/feature/YAML-Linter/samples/react/scripts/yaml-lint.js
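
If you just want to see the shape of such a script, here is a minimal sketch (the real file in the repository above is more complete). It assumes the yaml-lint and glob (v7-style API) packages are installed as devDependencies and that your route and dictionary YML lives under ./data:

// scripts/yaml-lint.js (minimal sketch, not the file from the repository above)
const fs = require('fs');
const glob = require('glob');
const yamlLint = require('yaml-lint');

// Adjust the glob pattern to wherever your route and dictionary YML files live.
const files = glob.sync('./data/**/*.yml');

Promise.all(files.map((file) => yamlLint.lint(fs.readFileSync(file, 'utf8'))))
  .then(() => console.log(`YAML lint passed for ${files.length} files.`))
  .catch((error) => {
    // Fail the build so a broken route is caught before deployment.
    console.error('YAML lint failed:', error.message);
    process.exit(1);
  });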

You can also see the pull request with the above changes at:

https://github.com/Sitecore/jss/pull/385/files

Demo

Unicorn & Sitecore with custom rule configurations

Unicorn is one of the most widely used utilities for syncing Sitecore item changes between different environments. Sitecore 9 introduced rule-based configuration, which is used to target configurations at different roles (ContentManagement, ContentDelivery, etc.). We can also create custom rules by registering them in web.config.

This post will walk you through the use of custom rules to optimise Unicorn for development and deployment.

Environment rule configuration

Create a custom environment rule by registering the setting below in web.config. It can then be used to enable or disable configurations based on the environment.

<appSettings>
...
  <!-- Possible values Development, Test, UAT, Production -->
  <add key="env:define" value="Development" />
...
</appSettings>

The default Helix configuration uses a serialization folder as the root within each module at the respective Helix layer. You can also send different Unicorn configurations to different folders by configuring the targetDataStore for each configuration.

<targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-once" />
Use this for configurations that should write to the serialization-once folder.

<targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-dev" />
Use this for configurations that should write to the serialization-dev folder.

These two paths help separate items that are serialized and synchronized only once from items that are synchronized only on developer environments.

Sample Configuration for Developer Content Serialization

<configuration
    name="Project.Website.Serialization-dev"
    description="Project Website Dev Unicorn items"
    dependencies="Project.Website"
    extends="Helix.Base"
    env:require="Development">
  <targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-dev" />
  <predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn" singleInstance="true">
    <include name="Dev.Content" database="master" path="/sitecore/content/<site>/<devcontent>" />
  </predicate>
</configuration>

Sample Configuration for New Items Only Serialization

On developer environments this configuration behaves like a normal (always update) configuration; everywhere else the NewItemOnlyEvaluator applies.
<configuration
    name="Project.Website.Serialization-Once"
    description="Project Website new items only"
    dependencies="Foundation.*,Feature.*,Project.Core"
    extends="Helix.Base">
  <!-- This will be New Items Only in all environments except Development -->
  <evaluator type="Unicorn.Evaluators.NewItemOnlyEvaluator, Unicorn" singleInstance="true" env:require="!Development" />
  <targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-once" />
  <predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn" singleInstance="true">
    <include name="Project.Content.Newitems" database="master" path="/sitecore/content/<site>/<new-items>" />
  </predicate>
</configuration>

Unicorn rule configuration

A custom Unicorn rule can be created by registering the setting below in web.config. This can then be used to apply configurations based on whether Unicorn is enabled or disabled.

<appSettings>
...
  <!-- Possible values On, Off -->
  <add key="unicorn:define" value="On" />
...
</appSettings>

Having Unicorn enabled in non-development environments can hurt authoring performance, especially on Azure PaaS where disk I/O is limited. As part of CI/CD, once the Unicorn sync has been performed we can disable Unicorn using the following patch config. The patch only takes effect when the unicorn rule is set to Off.

<configuration
    xmlns:patch="http://www.sitecore.net/xmlconfig/"
    xmlns:role="http://www.sitecore.net/xmlconfig/role/"
    xmlns:unicorn="http://www.sitecore.net/xmlconfig/unicorn/">
  <sitecore unicorn:require="Off" role:require="ContentManagement Or Standalone">
    <unicorn>
      <configurations>
        <patch:delete />
      </configurations>
    </unicorn>
  </sitecore>
</configuration>

As part of the deployment pipeline, include a step that changes the web.config unicorn:define value to Off so that the patch above takes effect.

An Introduction to Sitecore Pipelines

What is a Sitecore Pipeline?

In Sitecore, pipelines describe a series of discrete steps that are taken to achieve some objective. Think about writing code to handle an HTTP request, for example: you could create a monolithic class that does the job from end to end, but the pipeline approach instead uses a number of classes that are invoked in order. First do this, then do that, and so on.

There are many pipelines in Sitecore. You can view existing pipelines and the processors they call using Sitecore Rocks in Visual Studio: right-click on the site connection and choose Manage, click on the Pipelines tab in the Visual Studio edit pane, then click on one of the listed pipelines to see all the processors that are executed as part of it. Pipelines range from a single processor at one end of the scale to the httpRequestBegin pipeline with 45 distinct steps at the other.

Why are they useful?

Thinking about the monolithic class above, it would be very difficult to maintain or modify, and it would also be truly massive. Modularising it makes it much easier to maintain.

It also makes it much easier to customise. For example, consider a pipeline that has three processors: Step 1, Step 2 and Step 3.

To add to the existing functionality we could insert a new custom step after Step 1 and before Step 2.

We could also replace an existing step completely.

How are they defined?

Pipelines are defined using XML in sitecore.config. In the example below we can see the httpRequestEnd pipeline definition. The three processors are called in the order in which they are listed, and a parameters object is passed between them to provide continuity. The final processor also receives four additional parameters from the config file.

<?xml version="1.0" encoding="utf-8"?>
<sitecore database="SqlServer" xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:security="http://www.sitecore.net/xmlconfig/security/">
…
  <pipelines>
…
    <httpRequestEnd>
      <processor type="Sitecore.Pipelines.PreprocessRequest.CheckIgnoreFlag, Sitecore.Kernel" />
      <processor type="Sitecore.Pipelines.HttpRequest.EndDiagnostics, Sitecore.Kernel" role:require="Standalone or ContentManagement" />
      <!--<processor type="Sitecore.Pipelines.HttpRequest.ResizePicture, Sitecore.Kernel"/>-->
      <processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel">
        <ShowThresholdWarnings>false</ShowThresholdWarnings>
        <TimingThreshold desc="Milliseconds">1000</TimingThreshold>
        <ItemThreshold desc="Item count">1000</ItemThreshold>
        <MemoryThreshold desc="KB">10000</MemoryThreshold>
      </processor>
    </httpRequestEnd>
…
  </pipelines>
…
  </sitecore>

How can I work with Sitecore Pipelines?

Existing Sitecore pipelines can be customised as outlined above, and it is also possible to create a brand new pipeline from scratch.

Customise existing Pipelines

The first thing to do is to create a configuration patch to add the new processor class into the pipeline at the desired location. As you can see from the code, it is possible to pass variables to the processor. Here we are adding a processor called NewsArticleLogEntryProcessor into the httpRequestBegin pipeline after the ItemResolver.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <httpRequestBegin>
        <processor type="Fourbyclub.CustomCode.CustomCode.Pipelines.httpRequestBegin.NewsArticleLogEntryProcessor,Fourbyclub.CustomCode" patch:after="processor[@type='Sitecore.Pipelines.HttpRequest.ItemResolver, Sitecore.Kernel']">
          <NewsArticleTemplateID>{B871115E-609F-44BB-91A4-A37F5E881CA6}</NewsArticleTemplateID>
        </processor> 
      </httpRequestBegin>
    </pipelines>
  </sitecore>
</configuration>

Then we need to create the processor. Inherit from HttpRequestProcessor and implement the Process method. All we are doing here is writing to the log if the requested item is a NewsArticle.

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.httpRequestBegin
{
    using Sitecore.Pipelines.HttpRequest;
    using Sitecore.Diagnostics;

    // TODO: \App_Config\include\NewsArticleLogEntryProcessor.config created automatically when creating NewsArticleLogEntryProcessor class.

    public class NewsArticleLogEntryProcessor : HttpRequestProcessor
    {
        
        // Declare a property of type string:
        private string _newsArticleTemplateID;
        public string NewsArticleTemplateID { get { return _newsArticleTemplateID; } set { _newsArticleTemplateID = value; } }

        public override void Process(HttpRequestArgs args)
        {
            Assert.ArgumentNotNull(args, "args");
            if ((Sitecore.Context.Item != null) && (!string.IsNullOrEmpty(_newsArticleTemplateID)))
            {
                Assert.IsNotNull(Sitecore.Context.Item, "No item in parameters");
                // use util to get id from string property
                if (Sitecore.Context.Item.TemplateID == Sitecore.MainUtil.GetID(_newsArticleTemplateID))
                {
                    // view in log file later, so add FourbyclubCustomCode
                    Log.Info(string.Format("FourbyclubCustomCode: News Article requested is {0} and the item path is {1}", Sitecore.Context.Item.DisplayName, Sitecore.Context.Item.Paths.FullPath), this);
                }
            }
        }

    }
}

Create a new Pipeline

Creating a new pipeline is a little more work, but still very simple. The first thing to do is to declare the pipeline with a configuration patch:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <logWriter>
        <processor type="Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter.logWriterProcessor,Fourbyclub.CustomCode" />
      </logWriter>
    </pipelines>
  </sitecore>
</configuration>

The XML above will create a pipeline called logWriter that has a single processor called logWriterProcessor, which will be in the Fourbyclub.CustomCode.dll.

Pipelines must pass a PipelineArgs object to each processor as it is called, so that needs to be defined:

using Sitecore.Pipelines;

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter
{
    public class LogWriterPipelineArgs : PipelineArgs
    {
        public string LogMessage { get; set; }
    }
}

At least one processor is needed to do the work of our pipeline:

using Sitecore.Diagnostics;

namespace Fourbyclub.CustomCode.CustomCode.Pipelines.logWriter
{
    public class logWriterProcessor
    {
        public void Process(LogWriterPipelineArgs args)
        {
            Log.Info(string.Format("FourbyclubCustomCode: The message was {0}", args.LogMessage), this);
        }
    }
}

Finally we need to invoke the pipeline in our code somewhere. Instantiate the LogWriterPipelineArgs and set the LogMessage, then call CorePipeline.Run and pass it the name of the pipeline and the args object.

var pipelineargs = new LogWriterPipelineArgs();
pipelineargs.LogMessage = "Requested item is not a News Article";
CorePipeline.Run("logWriter", pipelineargs);

Conclusion

Thank you for reading, and I hope that this short introduction to Sitecore Pipelines has shown the power of pipelines to customise Sitecore and help build maintainable code. However, we should always check whether something can be implemented using existing Sitecore functionality rather than reaching for a pipeline as a first resort, and remember that, as with any customisation, each pipeline or processor added has the potential to make Sitecore upgrades more challenging.

JSS – Some Key Takeaways

Introduction

In the following outline I will take you through some learning points that Aceik discovered on our first JSS project. Some of this is personal opinion and based on our experience with the Angular framework.

Disconnected Mode – Get started quickly

If your team is just starting out and you want to see JSS in action, you will likely start out running in Disconnected Mode. This is a great way to get non-Sitecore developers up and running without them needing to run an actual Sitecore instance, and it means front-end developers on your team can work on the project without having any Sitecore skills.

It has been mentioned by Nick Wesselman in the global Sitecore Slack (Hi Nick, hope it's OK if I quote you) that:

“I wrote the import process for JSS and I can tell you it was not intended for anything beyond quick start for front end devs and short lived campaign sites”

and  ….

“Sitecore-first was always the intended workflow for anything non-trivial”.

I have to be honest, I was disappointed when I read the above comments, as we did get a fair way into the project supporting both our front-end devs and our back-end devs. It does make sense, though, that Disconnected mode has its limitations. You're not going to get all the bells and whistles that Sitecore provides without actually having Sitecore itself. Still, the opportunity to support developers who don't know Sitecore is a big drawcard, and I for one see Disconnected mode as something that is very useful.

Disconnected Mode – Example usage

From personal experience building out a member portal in Angular, we were able to mock up secure API calls in Disconnected mode by detecting which mode the app was running in. The workflow involved building individual components that were verified to be working in Disconnected mode by front-end developers; the same components were then verified in Connected mode running in Sitecore. Some people may consider this double handling, but I still think the benefits of continuing to support front-end developers outweigh the overhead.
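
As a rough illustration only (the flag name, endpoint and mock payload below are all hypothetical, not JSS APIs), the idea was simply to branch the data-access layer on which mode the app was built for:

// Hypothetical sketch: branch secure API calls on a build-time flag.
// "SITECORE_DISCONNECTED", "/api/member/profile" and the mock data are made up for illustration.
const isDisconnected = process.env.SITECORE_DISCONNECTED === 'true';

function getMemberProfile() {
  if (isDisconnected) {
    // Front-end devs get realistic data without a Sitecore instance or a login.
    return Promise.resolve({ firstName: 'Test', lastName: 'Member', points: 1200 });
  }
  // In connected mode, call the real secured endpoint.
  return fetch('/api/member/profile', { credentials: 'include' }).then((res) => res.json());
}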

GraphQL Endpoints

GraphQL is a paradigm shift from traditional APIs in that you have a single API endpoint that you can run queries and mutations against to produce results and updates.
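
For example, a query (or a mutation) is just a POST to that one endpoint; the endpoint URL and field names below are placeholders and depend on the schema you expose:

// Placeholder endpoint and fields, for illustration only.
fetch('/sitecore/api/graph/items/master?sc_apikey=YOUR_API_KEY', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: '{ item(path: "/sitecore/content/Home") { id name } }'
  })
})
  .then((res) => res.json())
  .then((result) => console.log(result.data));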

Custom GraphQL Endpoints

A couple of things took us a while to figure out:

  1. Adding multiple schemas to our single endpoint.
  2. Sending mutations with complex object structures (nested POCOs); a generic sketch follows this list.
    1. See Example Mutation Query
    2. See Example Variables that match the above query
    3. See Example Schema in C#
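
In case the linked examples are not to hand, the general shape of a mutation with nested variables looks something like the following. The mutation name, input type and fields here are hypothetical, not our actual schema:

// Hypothetical mutation and input type, for illustration only.
const mutation = `
  mutation SaveEnquiry($enquiry: EnquiryInput!) {
    saveEnquiry(enquiry: $enquiry) {
      success
      message
    }
  }`;

// Nested objects in the variables mirror the nested POCO structure on the server.
const variables = {
  enquiry: {
    firstName: 'Jane',
    contact: { email: 'jane@example.com', phone: '0400 000 000' },
    address: { street: '1 Example St', city: 'Melbourne', postcode: '3000' }
  }
};

fetch('/your-graphql-endpoint', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: mutation, variables })
}).then((res) => res.json());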

How to Turn on Mutations

If you need to send updates to the server, by convention in GraphQL you write these as mutations.

You won’t get far unless you actually enable mutations in your JSS app config. This might seem obvious but it took us a while to find an example and work it out.

See Example Line 128

            <mutations hint="raw:AddMutation">
              <mutation name="createItem" type="Sitecore.Services.GraphQL.Content.Mutations.CreateItemMutation, Sitecore.Services.GraphQL.Content" />
              <mutation name="updateItem" type="Sitecore.Services.GraphQL.Content.Mutations.UpdateItemMutation, Sitecore.Services.GraphQL.Content" />
            </mutations>

Secure vs Insecure GraphQL Endpoints

Something we required for our member portal project was custom GraphQL endpoints for logged-in users and, in some cases, open endpoints for data that did not require an authenticated user.

Essentially, in Angular we solved this using multiple Apollo clients. A full example, along with a detailed explanation, is available here:

https://sitecore.stackexchange.com/questions/22229/in-jss-how-do-i-support-both-secure-and-open-graphql-endpoints/22230#22230
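
If the full answer is not to hand, the core idea is simply two Apollo clients pointed at different endpoints. The sketch below uses the plain @apollo/client package rather than apollo-angular, and the endpoint URLs and API key are placeholders:

// Simplified sketch using @apollo/client (the linked answer uses apollo-angular).
import { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client';

// Open endpoint: anonymous data such as dictionary items or public content.
const openClient = new ApolloClient({
  link: new HttpLink({ uri: '/api/graph/open?sc_apikey=YOUR_KEY' }),
  cache: new InMemoryCache()
});

// Secure endpoint: sends the auth cookie so only logged-in members can query it.
const secureClient = new ApolloClient({
  link: new HttpLink({ uri: '/api/graph/member', credentials: 'include' }),
  cache: new InMemoryCache()
});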

Conclusion

That's a wrap for some of our key JSS learnings so far. We may come back and add to these over time as we learn more. Happy JSS-ing!

SXA Speedy – Supercharge your SXA Page Speed Scores in Google

We are excited to preview our latest Open Source module. Before jumping into the actual technical details here are some of the early results we are seeing against the Habitat SXA Demo.


Results:

Before: (Lighthouse screenshot)

After: (Lighthouse screenshot)
* Results based on a Mobile Lighthouse audit in Chrome.
* Results are based on a local developer machine. Production results usually incur an additional penalty due to network latency.

Want to know more about our latest open source SXA Sitecore module? Read on.


I'm continually surprised by the number of new site launches that fail to implement Google's recommendations for page speed. If you believe what Neil Patel has to say, this score is vitally important to SEO and your search ranking. At Aceik it's one of the key benchmarks we use to measure the projects we launch and the projects we inherit and have to fix.

The main issue is often a fairly low mobile score; desktop tends to be easier to cater for. In particular, pick out any SXA project that you know has launched recently: even with bundling properly turned on, it's unlikely to get over 70/100 (mobile score). The majority we tried came in somewhere around the 50 to 60 out of 100 mark.

Getting that page score into the desired zone (which I would suggest is 90+) is not easy, but here is a reasonable checklist to get close:

1) Introduce image lazy loading
2) Ensure a cache strategy is in place and verify it is working
3) Dianoga is your friend for image compression
4) Use responsive images (serve up smaller image sizes for mobile)
5) Introduce Critical CSS and deferred CSS files
6) JavaScript is not a page speed friend: defer, defer, defer
The last two items are the main topics that I believe are the hardest to get right. These are the focus of our new module.


Check out the GitHub repository.

I have also done an installation and usage video.

So how will the module help you get critical and JS defer right?

Deferred Javascript Load

For Javascript, it uses a deferred loading technique mentioned here. I attempted a few different techniques before finding this blog and the script he provides (closer to the bottom of the article) seems to get the best results.  It essentially incorporates some clever tactics (as mentioned in the article) that defer script load without compromising load order.
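
This is not the exact script from the linked article, but a simplified sketch of the general idea: ship non-critical scripts with a data-src attribute and only swap them in after the load event, while keeping execution order intact.

// Simplified illustration of the deferred-load idea (not the script from the linked article).
function loadDeferredScripts() {
  var placeholders = document.querySelectorAll('script[data-src]');
  for (var i = 0; i < placeholders.length; i++) {
    var script = document.createElement('script');
    script.src = placeholders[i].getAttribute('data-src');
    script.async = false; // preserve the original execution order
    document.body.appendChild(script);
  }
}
window.addEventListener('load', loadDeferredScripts);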

I also added one more technique that I have found useful: using a cookie to detect a first-time or returning visitor. Returning visitors will naturally have all external resources cached locally, so we can provide a completely different loading experience on the second pass. It stands to reason that we only need to provide the deferred experience on the very first page load.
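
A rough sketch of that cookie check might look like this (the cookie name and expiry are arbitrary choices for illustration, not what the module ships with):

// Hypothetical cookie name and expiry, for illustration only.
var RETURNING_COOKIE = 'returning_visitor';

if (document.cookie.indexOf(RETURNING_COOKIE + '=') === -1) {
  // First visit: nothing is cached yet, so use the deferred loading experience.
  document.cookie = RETURNING_COOKIE + '=1; path=/; max-age=' + 60 * 60 * 24 * 30;
  window.addEventListener('load', loadDeferredScripts); // see the sketch above
} else {
  // Returning visit: assets are already in the browser cache, load them straight away.
  loadDeferredScripts();
}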

Critical + Deferred CSS Load

For CSS we incorporated the Critical Viewport technique that has been recommended by Google for some time. This technique was mentioned in this previous blog post. Generating the Critical CSS is not something we want to be doing manually and there is an excellent gulp based package that does this for you.

It can require some intervention and tweaking of the Critical CSS once generated, but the Gulp scripts provided in the module do seek to address/automate this.
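
As a rough illustration, a gulp task built around the critical npm package looks something like the sketch below. Option names vary between versions of the package, so treat this as indicative rather than the module's actual gulpfile:

// Sketch of a gulp task using the "critical" npm package; options are indicative only.
const gulp = require('gulp');
const critical = require('critical');

gulp.task('critical-css', () => {
  return critical.generate({
    base: 'dist/',                   // folder containing the rendered page and its CSS
    src: 'index.html',               // page to extract above-the-fold styles from
    target: { css: 'critical.css' }, // where the generated critical CSS is written
    width: 375,                      // viewport treated as "above the fold"
    height: 667,
    inline: false
  });
});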

Our module adds a button to the Configure panel inside the Sitecore CMS, so content editors can trigger regeneration of the Critical CSS whenever needed.

Generate Critical button added to Configure.

Local vs Production Scores

It's also important to remember that the scores you achieve via the Lighthouse audit built into Chrome, on localhost and on your non-public development servers, can be vastly different from production. In fact, it's probably safest to assume that non-production boxes give false positives in the region of 10 to 20 points, and that your score on production will be a little worse than expected.

Conclusion

It's a fair statement that you can't just install the module and expect page load to be perfect in under 10 minutes. Achieving top page load speeds requires many technical things to work together. By ensuring that the previously mentioned checklist items are done (adequate servers, Sitecore cache, image loading techniques) you are partway over the line. By introducing the deferred load techniques in the module (as recommended by Google) you should then be a step closer to a top score.

For more hints please see the Wiki on Github.

This module has been submitted to the Sitecore Marketplace and is awaiting approval.


Author: Thomas Tyack – Solutions Architect / Sitecore MVP 2019

Part 3: External Tracking via FXM and Google Client ID

In this third part of our Experience Profile customisation series, we look at how we might integrate FXM into a third party website.  For the purposes of this blog, we assume the third party website is not built with Sitecore.

You can view Part 1, Part 2 and  Part 4 via the respective links.

A great example of where you might want to do this is if you link off to a third party shopping cart or payment gateway. In this particular scenario, you can use FXM to solve a few marketing requirements.

  • Pages Viewed: Track the pages the user views on an external site.
  • Session Merge: Continue to build the user’s Experience Profile and timeline.
  • Personalise content blocks in the checkout process.  Great for cross promotion.
  • Fire off goals at each step of the checkout process.
  • Fire off goals and outcomes once a purchase occurs.
Note: In the examples that follow we also show what to do in each scenario for a single page application. View the footnote for more details about how you might support these with regard to FXM.

So let’s now examine how each requirement can be solved.

Pages Viewed

Page views are a quick win: simply injecting the beacon will record the page view.

For a single page application, each time the screen changes you could use:

SCBeacon.trackEvent('Page visited')

Session Merge

If you inject the Beacon on page load you get some session merging functionality out of the box. If you have a look at the compatibility table for different browsers it’s worth noting that Safari support is limited.

Here is a potential workaround for this notable lack of Safari support:

  • Follow the instructions in Part 1 to identify a user via Google Client ID.
  • When linking to the external website pass through the Google Client ID (see part 1 for more details) as a URL parameter.
  • ?clientID=GA1.2.2129791670.1552388156
  • Initialise Google Analytics with the same Client ID. This can also be achieved by setting the Client ID on the page load event in the GTM container.
  • function getUrlVars(){var n={};window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi,function(r,e,i){n[e]=i});return n}
    ga('create', 'UA-XXXXX-Y', {
      'clientId': getUrlVars()["clientID"]
    });
  • Inject the FXM beacon
  • Set up a custom Page Event called “updateGoogleCid” in Sitecore.
  • Hook up a custom FXM processor that will merge the session.

The process above works for single page applications as well.

Trigger Goals

Out of the box, triggering a goal is easily achieved by a 'page filter' or 'capture a click action'.

For single page applications, you can use the following API calls in your javascript.

SCBeacon.trackGoal('Checkout')

Trigger Goals and Outcomes on Purchase

Out of the box, triggering an outcome is achieved via a 'capture a click action'.

For the purposes of checkout, you are likely to want to see the dollar value purchased for that particular user in the Experience Profile. In order to achieve this, you need to use the JavaScript API to pass through the dollar value. Be sure to create an outcome in Sitecore called 'Purchase Outcome'.

SCBeacon.trackOutcome("Purchase Outcome", {
    monetaryValue: revenue,
    xTransactionId: transactionId
});

A great tip that we received from the SBOS team in Australia was to trigger goals at checkout that had engagement value staggered according to the amount spent.

So, for example, you may have some javascript logic that looks like this:

if (revenue < 100) {
    SCBeacon.trackGoal('lowvalue');
} else if (revenue < 500) {
    SCBeacon.trackGoal('midvalue');
} else if (revenue < 1000) {
    SCBeacon.trackGoal('highvalue');
} else {
    SCBeacon.trackGoal('veryhighvalue');
}

For single page applications, you will need to use the javascript API.


 

Conclusion: In order to use FXM on any external website not built on Sitecore you need access to insert the Beacon code. If the external website is not a Single Page Application (also note some other limitations) you can use the FXM Experience Editor to achieve much of the desired functionality.

For those external websites containing single page applications, ideally you can get access to either the GTM container or have the external website insert some JavaScript for you. With some clever JavaScript you can still record marketing events using the FXM JavaScript API.

To continue reading jump over to Part 4, where we cover off a handy way to get personalisation working on the very first-page load.


Footnote: Single Page Applications

It's important to note that out of the box FXM does not support single page applications. Look a bit further under the hood, however, and you will realise that FXM includes a great JavaScript API. You might now be thinking that if it's a third party website you're unlikely to get access to the source in order to implement any API calls. At the end of the day, you're going to need some sort of access to inject FXM in order to achieve any sort of integration.


This will likely place you in one of the following scenarios:

  1. Not a single page application, in which case you just need the external website to include the FXM beacon. (instructions)
    • This is by far the simplest scenario, and happy days if you're in this category.
  2. A single page application, with which you have access to make changes.
    • In this case, inject the FXM beacon on page load and use the Javascript API to trigger events, goals and outcomes.
  3. A single page application, with which you have no direct access to make changes, but can request changes to the GTM container.
    • In this case, a great backup is using the GTM container to inject the Beacon. You can then write custom javascript that uses javascript listeners to talk with the FXM API.
    • With some single page application frameworks (Angular, React, Vue) hooking into the existing javascript listeners will prove difficult. Your last remaining option may turn out to be inside the GTM container again. If the application is already sending back telemetry to Google Analytics, make good use of it. This could be achieved by either:
      • Writing a custom JavaScript snippet that looks for changes in Google's data layer (a sketch follows below this list).
      • If events are configured directly in GTM, simply ask for changes to each event to include an FXM API call as well.
  4. If you're unlucky and you have no access to make changes at all …. well …..
    •   shrug
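
For the data layer option above, one hypothetical approach is to wrap dataLayer.push so that events GTM already receives are relayed on to FXM. The event filtering below is illustrative only:

// Hypothetical sketch: relay GTM data layer events to FXM as page events.
window.dataLayer = window.dataLayer || [];
var originalPush = window.dataLayer.push;

window.dataLayer.push = function () {
  var entries = Array.prototype.slice.call(arguments);
  entries.forEach(function (entry) {
    // Only relay named events, and only once the FXM beacon has loaded.
    if (entry && entry.event && typeof SCBeacon !== 'undefined') {
      SCBeacon.trackEvent(entry.event);
    }
  });
  return originalPush.apply(window.dataLayer, entries);
};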


Part 1 – Experience Profile – Identify Users Early

This is the first part of a four-part blog series where I will introduce some XDB customisation that could be of use on your next Sitecore project. All these customisations relate back to the Experience Profile and identifying the user.

Part 1: We introduce the concept of early profile identification; there is no such thing as an anonymous user.

Part 2: We dive into the world of multi-site, multi-domain tracking. How to implement a global-common cookie for all your brand’s sites.

Part 3: External Tracking via FXM and Google Client ID – How to continue tracking a user on another website (not hosted in Sitecore).  (Release TBC)

Part 4: How to achieve instant personalisation on the very first page load. We can make use of the fact that inbound links from a social stream can already identify a user's demographic.


 

Part 1:  Identify Users Early

Some parts of this blog have been updated for Sitecore 9; others have not yet.

In part one I'm going to talk about identifying your visitors as early in the visit as possible.

By identifying users I am referring to getting them to show up in the Experience Profile.


By default, Sitecore will not track every single anonymous user that reaches your site.  In order to get them to show up in the Experience Profile, a few quick changes are necessary.

One of the simplest ways to do this is using either WFFM or the new Forms components in Sitecore 9. In fact, the setup hasn't changed all that much between the versions for this particular use case.

Sitecore 9:  Setup forms save actions

Sitecore 8:  Setup save action in WFFM

You can use these save actions with any forms on your website that collect personal details; the identification of a user should happen when a user submits a form that contains personal details. This is a well-documented OOTB forms feature that you can set up without any developer intervention.

Another way to identify a user is to do so programmatically, by updating the contact's facet details and then calling the identify method on the tracker.

Sitecore 9:  (reference)

Sitecore.Analytics.Tracker.Current.Session.IdentifyAs("sitecoreextranet", "identifier");

Sitecore 8:

Sitecore.Analytics.Tracker.Current.Session.Identify("identifier")
A side by side code example is available here. 

Calling the above line of code with a string identifier associates the visit data with that identifier. When the user visits again on a different device or tracked website, if you are able to call the same line of code with the exact same identifier, the visitor's data will be merged into a single Experience Profile record.

Taking the above concept a little further, we can also track users across non-Sitecore based websites. By using some Google smarts and injecting the FXM beacon onto a third party website, we can continue to track the user, including page visits, goals and outcomes (this is covered in more detail in Part 3).


 

No User is Anonymous

Given that we can choose when a user should be identified and displayed in the Experience Profile, it's time to introduce the concept that no user is anonymous. In fact, this is true for the majority of websites in existence, provided they use Google Analytics.

Google assigns an identifier called the Client ID to each visitor that comes along to your website.  The Client ID is stored in the GA cookie and has an expiration date of 2 years after creation.

Note: Google also has a concept of a User ID that is used to track sessions across devices. The difference is that each website must send this value to Google in order for it to be used. In reality, this is going to be most relevant if you only want to identify users in Sitecore once they have authenticated.
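
For reference, sending the User ID is an explicit call your site has to make, for example (with a placeholder identifier from your own membership system):

// User ID must be sent explicitly by the website, e.g. after authentication.
ga('set', 'userId', 'CRM-12345'); // placeholder identifier, not generated by Google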

We can use Google’s Client ID to allow the user to show up in the Experience Profile as early as is necessary.

To do this, set up the following:

  1. Read the Client ID via JavaScript
    • if (typeof ga !== "undefined") {
          cid = ga.getAll()[0].get("clientId");
      }
  2. Send the Client ID to XDB / XConnect via async javascript.  (Github Reference)
    • if (typeof cid !== "undefined") {
          var setEventPath = '/api/xdb/Analytics/TriggerEvent/Event/?eventName=updateGoogleCid&data=' + cid;
          $.ajax({
              type: 'POST',
              url: setEventPath,
              dataType: 'json',
              success: function (json) {
                  setCookie(cookieName, cid, 1);
              },
              error: function () {
                  console.warn("An error occurred triggering the event");
              }
          });
      }
    • The above code assumes a custom Controller was set up to trigger Goals via Ajax/Javascript.  (Github Reference)
  3. Identify the user (See code examples mentioned earlier or look at our example controller)

Note 1: The above code only needs to be triggered once per visitor. To save it running multiple times you can assign a cookie to the user; by checking whether the cookie has been set you can prevent the above process from running more times than necessary.
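
The guard itself can be as simple as the sketch below (the cookie name here is an assumption; match whatever you pass to setCookie in step 2 above):

// Only run the identification call if the cookie from the earlier snippet is not present yet.
var cookieName = 'cidSent'; // assumed name, match the one used with setCookie above
if (document.cookie.indexOf(cookieName + '=') === -1) {
  // ... read the Client ID and POST it to the custom controller as shown in step 2 ...
}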

Note 2: In a single-site environment you may choose to delay this identification until a certain amount of visit data has been collected, for example by writing some logic to check that the user has triggered a certain number of goals. This will prevent users with little or no data showing up in the Experience Profile.

Note 3: In a multi-site environment the opposite of Note 2 becomes necessary. With visitors hitting multiple sites you need to identify them as early as possible, the main reason being that you want any visit data collected from the current website merged with visit data from any other site visits. The resulting merged data provides a great overview of a user's movements. This will be discussed more in Part 2, when we look into multi-site XDB visitor identification using a global common cookie.


Experience Profile – First Name, Last Name

This next step is optional. Given that you have identified the user, they will now show up in the Experience Profile. At this point you may not have a first and last name for that visitor. As an alternative, you could split the Client ID into two numbers and use them as the initial values for the first and last name. If the user completes a newsletter signup, logs in or makes an inquiry at a later time, that is a good opportunity to update these to the correct values.


(Github Reference)
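
If it helps, here is a quick sketch of that split, using the example Client ID format shown earlier in this series; the variable names are just for illustration:

// Example Client ID format: "GA1.2.2129791670.1552388156"
// Use the last two segments as placeholder first/last names until real details are captured.
var parts = cid.split('.');
var placeholderFirstName = parts[parts.length - 2]; // e.g. "2129791670"
var placeholderLastName = parts[parts.length - 1];  // e.g. "1552388156"
// These can then be sent to the same custom controller that identifies the contact.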


 

Part 1: Conclusion

We have demonstrated above how you can identify a user so that they show up in the Experience Profile. As part of this, we have looked at how you could use the Client ID from Google to identify the user as early as you like, potentially on the very first page load. In Part 2 of this series on Experience Profile customisations we take a look at how to track users across multiple top-level domains.

Monitoring and Debugging Interaction Processing in Sitecore 9 on Azure PaaS

When configuring a new instance of Sitecore XP or maintaining an existing one, you may encounter a situation where your interactions report shows far fewer interactions than expected.

(Screenshot: a near-empty interactions report. Where are my interactions?)

One possible cause is interaction processing that hasn't kept up with the interactions being logged on your website. In some cases this can be so slow that it appears collection, processing and reporting aren't working at all. Here are a few things you can look at to help diagnose the issue.

 

Are interactions being recorded?

SELECT TOP 10 * FROM xdb_collection.Interactions ORDER BY StartDateTime DESC

Run this command in each of your shard databases to see the most recent interactions which have been recorded. Compare the interactions being logged with the expected number and frequency of interactions in the environment you're looking at.

 

How many interactions are waiting to be processed?

SELECT COUNT(*) FROM xdb_processing_pools.InteractionLiveProcessingPool

This command will indicate the number of interactions waiting to be processed. Monitoring the number of records in this table can give you an indication of the number of new records being created and the number of new interactions which are being queued for processing.
If the number of records is steadily building up, either processing isn't working or it's working too slowly to handle the workload.
If you're collecting interactions but not seeing the size of the live interaction processing pool change at all, there might be an issue with aggregation.

If Analytics reports don’t look quite right, there are some things you can try:

Disable device detection

We encountered an issue with slow processing on a recent project. After logging an issue with Sitecore support, they advised:
Device detection has been known to cause the slowness in rebuilding reporting DB.
Try disabling device detection to determine if this has been impacting the speed of processing.

 

Check the CPU usage on your processing role

If you’re consistently seeing a high level of activity, you may need to scale your processing instances up or out.

(Screenshot: consistently high average CPU. Time for more instances…)

Check connection strings

Use the Server Role Configuration Reference to ensure you have the correct settings on each of your servers.

Check Application Insights errors

Check in Application Insights for any repeated error messages that might indicate misconfiguration.

 

(Screenshot: the interactions report now showing millions of interactions. That's more like it!)

Helpful links