Chapter 4: Deploy Sitecore into Kubernetes (AWS EKS)


Sitecore now officially supports deployments into Kubernetes. This is a guide to what's involved in deploying Sitecore into Kubernetes, tested on AWS EKS. Download the Sitecore Container Deployment package and extract the relevant (XP/XM) configuration files to a working folder.

Prepare certificates

Prepare the certificate (.crt) and private key (.key) files for the Content Management, Content Delivery, and Identity Server roles and copy them into the corresponding role folders under the tls directory.
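
If you do not have certificates at hand, a self-signed pair can be generated per role. A minimal sketch, assuming openssl is available on your machine (the host name is a placeholder for your own domain):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 `
  -keyout tls.key -out tls.crt -subj "/CN=cm.example.com"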

Multi-site configuration

If you have a multi-site implementation, you might need to create a separate folder for each domain name under the tls folder, copy the relevant certificate and key files into each, and register them in kustomization.yaml.
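
A minimal sketch of the kustomization.yaml secretGenerator entries involved; the folder and secret names are placeholders for your own sites:

secretGenerator:
  - name: global-cd-tls
    type: "kubernetes.io/tls"
    files:
      - tls.crt=tls/global-cd/tls.crt
      - tls.key=tls/global-cd/tls.key
  - name: second-site-tls
    type: "kubernetes.io/tls"
    files:
      - tls.crt=tls/second-site/tls.crt
      - tls.key=tls/second-site/tls.key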

Update the ingress configuration (ingress.yaml) with the domain names that route to Content Delivery.
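
A sketch in networking.k8s.io/v1beta1 syntax, matching the Kubernetes 1.17 cluster created in Chapter 2; the host and secret names are placeholders, and the cd service name should be verified against the Sitecore specification files:

tls:
  - secretName: second-site-tls
    hosts:
      - www.example.com
rules:
  - host: www.example.com
    http:
      paths:
        - path: /
          backend:
            serviceName: cd
            servicePort: 80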

Prepare and deploy secrets

  1. Update the database passwords in the secrets folder text files, and change the usernames if you have modified the database usernames.
  2. Generate 64-character keys for the Telerik encryption key, the Identity Server secret, and the reporting API key, and update the relevant text files.
  3. Create an x509 certificate for Identity Server to be used for signing credentials, and update the identity certificate and password text files.
  4. Update the license text file with a Base64-encoded string of the license file. A sketch of steps 2 to 4 follows this list.
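
A minimal PowerShell sketch of steps 2 to 4; the file names under ./secrets are illustrative, so match them to the files in your package:

# 2. Generate a 64-character random key (repeat for each key file)
$chars = [char[]]'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
$key = -join (1..64 | ForEach-Object { Get-Random -InputObject $chars })
Set-Content -Path .\secrets\sitecore-telerikencryptionkey.txt -Value $key -NoNewline

# 3. Self-signed signing certificate for Identity Server, exported and Base64-encoded
$cert = New-SelfSignedCertificate -DnsName "SitecoreIdentityTokenSigning" -CertStoreLocation "Cert:\LocalMachine\My"
$password = ConvertTo-SecureString -String 'Your_Cert_Password' -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath .\id.pfx -Password $password
Set-Content -Path .\secrets\sitecore-identitycertificate.txt -Value ([Convert]::ToBase64String([IO.File]::ReadAllBytes("$PWD\id.pfx"))) -NoNewline
Set-Content -Path .\secrets\sitecore-identitycertificatepassword.txt -Value 'Your_Cert_Password' -NoNewline

# 4. Base64-encode the license file (some Sitecore versions expect the license
#    GZip-compressed before encoding; check the Installation Guide for your version)
Set-Content -Path .\secrets\sitecore-license.txt -Value ([Convert]::ToBase64String([IO.File]::ReadAllBytes('C:\license\license.xml'))) -NoNewline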

Once the secret files are all prepared, run the following command to push the secrets into the Kubernetes environment.

kubectl apply -k ./secrets

Deploy external services (optional)

There is an option to run services like Solr, Redis, and SQL Server in containers for a non-production environment; this is neither recommended nor supported for production. To deploy these external services, run the following command:

kubectl apply -f ./external/

Wait for the new pods to be in Running status; use kubectl get pods -o wide -A to check the status of the external pods.

We need to deploy the initialization jobs to finish setting up the Solr and SQL pods. The default job configuration files provided by Sitecore are missing the node selectors, so there is a chance that these jobs end up on Linux nodes and fail, or get stuck in ContainerCreating status. Add the below snippet just above imagePullSecrets under spec in both the solr and mssql init job YAML files (in the ./init folder):

      nodeSelector:
        kubernetes.io/os: windows

Run the following command to deploy the init jobs into the cluster.

kubectl apply -f ./init/

Wait for the jobs to return a Completed status, using the get pods command shown previously.

Deploy Sitecore and Ingress configuration

To deploy Sitecore and the NGINX ingress configuration, run the following command:

kubectl apply -f ./ -f ./ingress-nginx/ingress.yaml

Wait for the deployments to complete and for all pods to report either a Running or a Completed status; Sitecore should then be ready.

Check out the other related posts:

Sitecore on AWS Kubernetes (EKS)

Chapter 1: Configure AWS and Kubernetes CLI

Chapter 2: AWS Kubernetes Cluster for windows workloads

Chapter 3: Deploy nginx ingress controller into AWS EKS Cluster

Chapter 3: Deploy nginx ingress controller into AWS EKS Cluster


An ingress controller is used in Kubernetes to direct external traffic to the relevant containers/pods running in the cluster. NGINX is a very common and popular ingress controller for Kubernetes. This article walks through the steps involved in deploying the NGINX ingress controller.

Install Helm

Helm is one of the most popular ways of managing deployments into Kubernetes environments. Use the below Chocolatey command to install Helm.

choco install kubernetes-helm -y

Register repositories with Helm

Add the stable repository to Helm using the below command.

helm repo add stable https://charts.helm.sh/stable

Add the Bitnami repository to Helm using the below command.

helm repo add bitnami https://charts.bitnami.com/bitnami

Once the above repos are added to Helm, search for nginx using the below command to confirm that the latest NGINX charts have been pulled into the repository.

helm search repo nginx

Deploy nginx ingress controller

Run the following command to install the Bitnami NGINX ingress controller onto the default Linux nodes in the cluster. Replace <deployment-name> with a name of your choice; the service name used later derives from it.

helm install <deployment-name> bitnami/nginx-ingress-controller `
--set replicaCount=1 `
--set nodeSelector."beta\.kubernetes\.io/os"=linux `
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux `
--set-string controller.config.proxy-body-size=10m `
--set service.externalTrafficPolicy=Local

Retrieve external IP for DNS

Run the following command to check the status of the ingress controller pods deployed into the cluster. Wait until the status changes to Running.

kubectl get pods -o wide -A

Run the below command to get the external IP of the ingress controller in the cluster. This returns a CNAME that can be used for managing the DNS entries pointing to the applications running in the cluster.

kubectl --namespace default get services -o wide -w bitnami-ingress-nginx-ingress-controller

Note: If you are working with dummy DNS names, you can perform an nslookup on the CNAME and add the resolved IP addresses to your local hosts file against the relevant host names.
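
For example (the ELB host name and the resolved address below are placeholders):

nslookup abc123.ap-southeast-2.elb.amazonaws.com

# hosts file entry (C:\Windows\System32\drivers\etc\hosts)
203.0.113.10  www.example.com cm.example.com id.example.com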

Check out the other related posts:

Sitecore on AWS Kubernetes (EKS)

Chapter 1: Configure AWS and Kubernetes CLI

Chapter 2: AWS Kubernetes Cluster for windows workloads

Chapter 4: Deploy Sitecore into Kubernetes

Chapter 2: AWS Kubernetes Cluster for windows workloads


AWS has supported Windows workloads in Kubernetes since March 2019. The default cluster created only supports Linux workloads; this article is a detailed account of how to create a Kubernetes cluster in AWS for Windows workloads.

Create Cluster

Create a new EKS cluster using the below command. This command creates the default node group, which has to be Linux; these nodes host the default applications (pods) that manage the orchestration of Kubernetes. Ensure you update <Cluster-Name> and <Node-Group-Name> and adjust the number of nodes accordingly. It's good practice to have at least two nodes to ensure high availability.

eksctl create cluster `
--name <Cluster-Name> `
--version 1.17 `
--nodegroup-name <Node-Group-Name> `
--node-type t3.medium `
--nodes 2 `
--nodes-min 1 `
--nodes-max 3 `
--node-ami auto

Export kubeconfig

Run the following command to export the Kubernetes cluster configuration into the user profile so the context is set for kubectl:

aws eks --region ap-southeast-2 update-kubeconfig --name <cluster name>

Ensure that all the default pods are in Running status using the below command.

kubectl get pods -o wide -A

Install VPC controller

Install the VPC controller to support Windows nodes in the cluster using the below command.

eksctl utils install-vpc-controllers --cluster <Cluster-Name> --approve

Run the get pods command again to see the VPC controller pods, and wait until all the pods are in Running status.

Create Windows nodes

Create a node group with Windows nodes to support Windows workloads using the below command. Update <Cluster-Name> and <Node-Group-Name>, and adjust the number of nodes as needed.

eksctl create nodegroup `
--region ap-southeast-2 `
--cluster <Cluster-Name> `
--version 1.17 `
--name <Node-Group-Name> `
--node-type t3.large `
--nodes 2 `
--nodes-min 1 `
--nodes-max 3 `
--node-ami-family WindowsServer2019FullContainer `
--node-ami auto

Check that all the nodes are running correctly using the below command:

kubectl get nodes -L kubernetes.io/os `
--sort-by=".status.conditions[?(@.reason == 'KubeletReady' )].lastTransitionTime"

Check out the other related posts:

Sitecore on AWS Kubernetes (EKS)

Chapter 1: Configure AWS and Kubernetes CLI

Chapter 3: Deploy nginx ingress controller into AWS EKS Cluster

Chapter 4: Deploy Sitecore into Kubernetes

Chapter 1: Configure AWS and Kubernetes CLI


AWS supports Kubernetes workloads for Windows through EKS, and also supports the use of a command line interface for deploying and configuring resources. Using the CLI makes it easy to automate this process through a CI/CD pipeline when required. This article walks through the configuration of the CLI for AWS and Kubernetes.

Install and configure AWS CLI

Install the AWS CLI using Chocolatey:

choco install awscli -y

Once successfully installed, run aws --version to check that the CLI is correctly installed.

Configure the AWS CLI with the relevant access key. This can be generated from the My Security Credentials page in the AWS Console.

Use the below command to start configuring the CLI

aws configure
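
The command prompts for an access key pair, a default region, and an output format; for example (the key values shown are placeholders):

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: ap-southeast-2
Default output format [None]: json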

Install CLI for EKS

eksctl is the command line utility for managing EKS clusters. Use the below Chocolatey command to install eksctl:

choco install eksctl -y

Once successfully installed, run eksctl version to verify the installation.

Note: eksctl appears to use only the default profile. If you are using a named profile for the AWS CLI, eksctl will still use the default profile. Hopefully AWS will address this in newer versions of the CLI.

Install CLI for Kubernetes

Install kubectl with Chocolatey using the below command:

choco install kubernetes-cli -y

Check out the other related posts:

Sitecore on AWS Kubernetes (EKS)

Chapter 2: AWS Kubernetes Cluster for windows workloads

Chapter 3: Deploy nginx ingress controller into AWS EKS Cluster

Chapter 4: Deploy Sitecore into Kubernetes

Sitecore on AWS Kubernetes (EKS)


With the release of version 10, Sitecore now officially supports deployments into container workloads using Kubernetes. Sitecore has also announced that PaaS deployments will be deprecated in future releases. This article covers how to deploy Sitecore 10 into AWS Kubernetes (EKS), broken down into the four chapters briefly summarized below.

1. Configure AWS and Kubernetes CLI

The AWS command line interface can be used to create and manage resources in AWS. This chapter guides you through the installation and configuration of the AWS CLI and the Kubernetes CLI.

2. Create AWS Kubernetes Cluster

A Kubernetes cluster needs to be provisioned to deploy Sitecore workloads. This chapter guides you through the creation and configuration of an AWS EKS cluster.

3. Deploy ingress controller into EKS Cluster

An ingress controller is used to provide external access to the applications deployed in the Kubernetes cluster. This chapter guides you through the preparation and deployment of the ingress controller.

4. Deploy Sitecore into AWS EKS

The last chapter guides you through preparing the configuration files and deploying Sitecore into the Kubernetes cluster.

Credits

Credit to Gurpreet (Garry) Singh & Jason Umiker for sharing their knowledge and expertise on AWS EKS.

Sitecore Commerce 10 Development Environment with Docker

Sitecore Commerce 10 was released with official support for containers in development and production. Different topologies are available for development: xc0, xc1, and xc1-cxa. Unfortunately, the only images that have a working SXA storefront are the xc1-cxa ones.

This article describes how to customize the xc0 Docker Compose file to spin up a development environment with a fully functioning SXA storefront.

A customized version of the xc0 Docker Compose file is available on my GitHub.

Update SQL & Solr Image References to XC1

  • Use the xc1 image for SQL so the databases have all the items for SXA, the SXA Storefront, and SPE.
  • Use the xc1 image for Solr so it includes the cores for the SXA indexes (see the sketch below).
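
A sketch of the two image reference changes in docker-compose.yml; the registry and tag variables are illustrative, so keep whatever your .env file already defines:

  mssql:
    image: ${SITECORE_DOCKER_REGISTRY}sitecore-xc1-mssql:${SITECORE_VERSION}
  solr:
    image: ${SITECORE_DOCKER_REGISTRY}sitecore-xc1-solr:${SITECORE_VERSION}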

Add Processing Instance and Customize

  • Copy the prc role from the xc1-cxa Docker Compose file.
  • Update the xConnect collection connection string to http://xconnect.

Add Reporting Instance

  • Copy and add the rep role from the xc1-cxa Docker Compose file.

Update Content Management Instance and Customize

  • Copy and replace the cm role from the xc1-cxa Docker Compose file.
  • Update all the xConnect and xDB connection strings to http://xconnect (see the sketch below).
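
A sketch of the connection string entries to update under the cm service's environment; the variable names follow the Sitecore compose convention but should be verified against your file:

  cm:
    environment:
      Sitecore_ConnectionStrings_XConnect.Collection: http://xconnect
      Sitecore_ConnectionStrings_Xdb.MarketingAutomation.Operations.Client: http://xconnect
      Sitecore_ConnectionStrings_Xdb.MarketingAutomation.Reporting.Client: http://xconnect
      Sitecore_ConnectionStrings_Xdb.ReferenceData.Client: http://xconnect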

Unicorn & Sitecore with custom rule configurations

Unicorn is one of the most widely used utilities for syncing Sitecore item changes between different environments. Sitecore 9 introduced rule-based configuration, which is used to target configurations at different roles (ContentManagement, ContentDelivery, etc.). We can also create custom rules by registering them in web.config.

This post will walk you through the use of custom rules to optimise Unicorn for development and deployment.

Environment rule configuration

Create a custom env rule by registering the below in web.config. This rule can then be used to enable or disable configurations based on the environment.

<appSettings>
  ...
  <!-- Possible values: Development, Test, UAT, Production -->
  <add key="env:define" value="Development" />
  ...
</appSettings>

The default Helix configuration uses serialization as the root folder within each module at the respective Helix layer. You can also make different Unicorn configurations write to different folders by configuring the targetDataStore for each configuration.

<targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-once" />
Use this to make a configuration write to the serialization-once folder.

<targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-dev" />
Use this to make a configuration write to the serialization-dev folder.

The above configurations help separate items that are serialized and synchronized only once from items that are synchronized only on developer environments.

Sample Configuration for Developer Content Serialization

<configuration
    name="Project.Website.Serialization-dev"
    description="Project Website Dev Unicorn items"
    dependencies="Project.Website"
    extends="Helix.Base"
    env:require="Development">
  <targetDataStore
      physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-dev" />
  <predicate
      type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn"
      singleInstance="true">
    <include
        name="Dev.Content"
        database="master"
        path="/sitecore/content/<site>/<devcontent>" />
  </predicate>
</configuration>

Sample Configuration for New Items Only Serialization

Because the NewItemOnlyEvaluator below is required only outside Development, this configuration behaves as a normal, always-update configuration on developer environments.

<configuration
    name="Project.Website.Serialization-Once"
    description="Project Website new items only"
    dependencies="Foundation.*,Feature.*,Project.Core"
    extends="Helix.Base">
  <!-- New Items Only in all environments except Development -->
  <evaluator
      type="Unicorn.Evaluators.NewItemOnlyEvaluator, Unicorn"
      singleInstance="true"
      env:require="!Development" />
  <targetDataStore
      physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization-once" />
  <predicate
      type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn"
      singleInstance="true">
    <include
        name="Project.Content.Newitems"
        database="master"
        path="/sitecore/content/<site>/<new-items>" />
  </predicate>
</configuration>

Unicorn rule configuration

A custom unicorn rule can be created by registering the below in web.config. This rule can then be used to apply configurations based on whether Unicorn is enabled or disabled.

<appSettings>
  ...
  <!-- Possible values: On, Off -->
  <add key="unicorn:define" value="On" />
  ...
</appSettings>

Having Unicorn enabled in non-development environments can impede authoring quality of life, especially when using Azure PaaS where disk I/O is not that great. As part of CI/CD, once the Unicorn sync has been performed we can disable Unicorn using the following patch config. This config takes effect only when the unicorn rule is set to Off.

<configuration
    xmlns:patch="http://www.sitecore.net/xmlconfig/"
    xmlns:role="http://www.sitecore.net/xmlconfig/role/"
    xmlns:unicorn="http://www.sitecore.net/xmlconfig/unicorn/">
  <sitecore
      unicorn:require="Off"
      role:require="ContentManagement Or Standalone">
    <unicorn>
      <configurations>
        <patch:delete />
      </configurations>
    </unicorn>
  </sitecore>
</configuration>

As part of the deployment pipeline, add a step that changes the web.config unicorn:define value to Off so that the above patch takes effect.
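
A minimal PowerShell sketch of such a pipeline step; the web.config path is a placeholder for your deployed site root:

# Flip the unicorn:define app setting to Off after the sync step
$path = 'D:\home\site\wwwroot\web.config'   # Azure App Service path; adjust for your host
[xml]$config = Get-Content -Path $path -Raw
$setting = $config.configuration.appSettings.add | Where-Object { $_.key -eq 'unicorn:define' }
$setting.value = 'Off'
$config.Save($path)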