Using Kubernetes to measure homepage speed index

Hi,

We decided to try using a Kubernetes CronJob to measure the Speed Index score of a system. The idea: a job that periodically polls a server and measures the performance of the homepage load.

We use Cypress and a Lighthouse plugin (cypress-audit) for synthetics:
https://github.com/mfrachet/cypress-audit
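
For context, here is a minimal sketch of what such a CronJob could look like (the image name and schedule are placeholders, and the container is assumed to have Cypress and cypress-audit baked in):

apiVersion: batch/v1beta1   # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: homepage-speed-index
spec:
  schedule: "*/15 * * * *"          # run every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: speed-index
            image: myregistry/cypress-lighthouse:latest   # hypothetical image with Cypress + cypress-audit
            command: ["npx", "cypress", "run"]
          restartPolicy: Never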

Why Speed Index and not something like BrowserTimings?

Speed Index measures the time it takes for the visible (above-the-fold) parts of a web page to appear to the user. It became part of WebPageTest in April 2012, and the Lighthouse-powered Audits panel in Chrome DevTools also measures it.

Speed Index can be used to compare performance against competitors or previous versions of the same website. It is best utilised alongside other metrics like load time and start-render to get a better understanding of a site's quality.

That's pretty awesome. You get a metric that reflects the user experience. Browser timings usually do not factor in when the user can actually interact with the page. Even if a web page takes 10 seconds to fully load, if it is usable within 4 seconds, we are good as gold.

The Problem

Speed Index is measured in a real browser, which needs CPU and memory – both of which can be throttled by Kubernetes.

We noticed that on a normal machine the Speed Index was good – around 5 seconds. In Kubernetes it came out at around 12 seconds!

We had set some resource limits, but never thought the job would actually get throttled.

resources:
  limits:
    cpu: 1500m
    memory: 1500M
  requests:
    cpu: "1"
    memory: 1G

However, it was getting throttled!

We decided to remove the resource limits:

resources: {}

Now, there is no throttling and the speed index measured similar to our home device.

The graph above shows that disabling throttling improved the metric.
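
If removing limits entirely feels too loose, a middle ground (a sketch, not what we ran) is to keep the requests for scheduling and drop only the CPU limit, since it is the CPU limit that translates into a CFS quota and therefore throttling:

resources:
  requests:
    cpu: "1"
    memory: 1G
  limits:
    memory: 1500M   # keep a memory limit; no cpu limit means no CFS throttling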

Conclusion

Even though Speed Index is a great way to measure performance, we cannot use Kubernetes as the measuring device, because even with throttling off you never know if another container/pod is stealing resources. The best way forward when measuring metrics (NOT gathering them) is to measure as close as possible to the source, e.g. client-side JavaScript.

So for us, it is back to the drawing board to pick a measurement as close to the client as possible.

Possible solutions:

  • https://github.com/WPO-Foundation/RUM-SpeedIndex/blob/master/src/rum-speedindex.js (measure Speed Index client-side, in the user's browser)
  • Avoid using Kubernetes as a measuring device for metrics that are sensitive to CPU/memory

Questions

There seems to be a bug in Kubernetes on some Linux kernels regarding CFS throttling:

https://github.com/kubernetes/kubernetes/issues/67577

What still puzzles me is that on a physical node that is nowhere near capacity, a process using 0.4 CPU with a limit of 2.5 CPUs still gets throttled!

This is on Azure Kubernetes Service 1.16, but I can reproduce the issue on Google GKE as well, so it is something in the Kubernetes CFS quota behaviour that throttles more than you would expect!

Site Reliability Engineering with Gitops & Kubernetes – Part 1

Intro

One of the key pillars of SRE is being able to make quantitative decisions based on key metrics.

The major challenge is deciding what the key metrics are – the plethora of monitoring software out in the wild today is testament to that.

At a foundational level you want to ensure your services are always running; however, 100% availability is not practical.

Class SRE implements Devops
{
  MeasureEverything()
}

Availability/Error Budget

You then figure out what availability is practical for your application and services.

Availability
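
As a rough reference (simple arithmetic over a 720-hour, 30-day month):

Availability    Allowed downtime per 30-day month
99%             7.2 hours
99.5%           3.6 hours
99.9%           43.2 minutes
99.95%          21.6 minutes
99.99%          4.32 minutes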

Your error budget is then that downtime figure, e.g. a 99% target over a 30-day month gives 720 × 0.01 = 7.2 hours of downtime that you can afford to have.

SLAs, SLOs and SLIs

This will be the starting point on your journey to implementing quantitative analysis of your:

  • Service Level Agreements
  • Service Level Objectives
  • Service Level Indicators

This blog post is all about how you can measure Service Level Objectives without breaking the bank. You do not need to spend millions of dollars on bloated monitoring solutions to observe key metrics that really impact your customers.

Just like baking a cake, these are the ingredients we will use to implement an agile, scalable monitoring platform that is solely dedicated to doing one thing well.

Outcome

This is what we want our cake to deliver:

  • Measuring your SLA Compliance Level
  • Measuring your Error Budget Burn Rate
  • Measuring if you have exhausted your error budget
Service Level Compliance – SLAs -> SLOs -> SLIs

If you look at the cake above, you can see all your meaningful information in one dashboard.

  1. Around 11am the error budget burn rate went up. (Akin to your kids spending all their pocket money in one day!)
  2. Compliance was breached (99% availability) – The purple line (burn rate) went above the maximum budget (yellow line)

These are metrics you will want to ALERT on at any time of the day. These sorts of metrics matter – they mean a Service Level Agreement is being violated.

What about my other metrics?

Aaaah, like %Disk Queue Length, Processor Time, Kubernetes Nodes/Pods/Pools etc? Well…

I treat these metrics as second-class citizens – like a layered onion. Your first question should be: am I violating my SLA? If not, then you can use the other metrics we have enjoyed over the decades to complement your observability of the systems and as a visual aid for diagnostics and troubleshooting.

Alerting

Another important consideration is the evolution of infrastructure. In 1999 you would have wanted an alert if a server ran out of disk space. In 2020, you are running container orchestration clusters and other high-availability systems; a container running out of disk space is not as critical as it used to be in 1999.

  • Evaluate every single alert you have and ask yourself: do I really need to wake someone up at 3am for this problem?
  • Always alert on Service Level Compliance levels that are ABOUT to breach
  • Always alert on Error Budget Burn Rates going up sharply (see the sketch below)
  • Do not alert someone out of hours because the CPU is at 100% for 5 minutes, unless Service Level Compliance is being affected too
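
As an illustration only (this post does not prescribe a specific tool), a Prometheus-style alerting rule for a fast error-budget burn might look something like the sketch below. The metric name is a hypothetical recording rule for your availability SLI; the 14.4 multiplier is the commonly used fast-burn threshold that consumes roughly 2% of a 30-day budget in one hour.

groups:
  - name: slo-burn-rate
    rules:
      - alert: ErrorBudgetFastBurn
        # error rate over the last hour exceeds 14.4x the budget of a 99% SLO
        expr: (1 - avg_over_time(sli:availability:ratio_rate5m[1h])) > 14.4 * 0.01
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "Error budget is burning much faster than planned"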

You will have happier engineers and a more productive team. You will be cool-headed during an incident, because you know the difference between a cluster node going down and a Service Level Compliance violation. Always address the Service Level Compliance first, and then fix the other problems.

Ingredients

Where are the ingredients you promised? You said it would not break the bank; I am intrigued.

Summary

In this post we have touched on the FOUNDATION of what we want out of monitoring.

We know exactly what we want to measure – Service Level Compliance, Error Budget Burn Rate and Max Budget. All this revolves around deciding on the level of availability we give a service.

We touched on the basic ingredients that we can use to build this solution.

In my next blog post we will discuss how we mix all these ingredients together to provide a platform that is good at doing one thing well.

Measuring Service Level Compliance & Error Budget Burn Rates

When you give your child $30 to spend in a month and they need to save $10 of it, you want to be alerted if they are spending too fast (burn rate).

Microsoft Azure Devops – Dynamic Docker Agent (Ansible)

Often you may require a unique custom build/release agent with a specific set of tools.

A good example is a dynamic Ansible Agent that can manage post deployment configuration. This ensures configuration drift is minimised.

Secondly this part of a release is not too critical, so we can afford to spend a bit of time downloading a docker image if it is not already cached.

This article demonstrates how you can dynamically spawn a Docker container during your release pipeline to apply configuration leveraging Ansible. It will also demonstrate how to use Ansible dynamic inventory to detect Azure virtual machine scale set instances – in the past you would have run hacks with facter.

Prerequisites

You will require:

  • A Docker image with Ansible – you can use mine as a starting point – https://github.com/Romiko/DockerUbuntuDev
    The above is hosted at Docker Hub: romiko/ansible:latest (see the reference at the bottom of this page)
  • A self-hosted Azure DevOps agent (Linux)
  • Docker installed on the self-hosted agent
  • Docker configured to expose the Docker socket:
    docker run -v /var/run/docker.sock:/var/run/docker.sock -d --name some_container some_image

Release Pipeline

Configure a CLI Task in your release pipeline.

variables:
  env: 'dev'

steps:
- task: AzureCLI@2
  displayName: 'Azure CLI Ansible'
  inputs:
    azureSubscription: 'RangerRom'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
     set -x
     
     docker run --rm -v $(System.DefaultWorkingDirectory)/myproject/config:/playbooks/ romiko/ansible:latest \
      "cd  /playbooks/ansible; ansible-playbook --version; az login --service-principal --username $servicePrincipalId --password $servicePrincipalKey --tenant $tenantId; az account set --subscription $subscription;ansible-playbook my-playbook.yaml -i inventory_$(env)_azure_rm.yml --extra-vars \"ansible_ssh_pass=$(clientpassword)\""
    addSpnToEnvironment: true
    workingDirectory: '$(System.DefaultWorkingDirectory)/myproject/config/ansible'

In the above, the code that causes a SIBLING container to spawn on the self-hosted DevOps agent is:

docker run --rm -v $(System.DefaultWorkingDirectory)/myproject/config:/playbooks/ romiko/ansible:latest \ <command to execute inside the container>

Here a mount point is created: the config folder in the repo is mounted into the Docker container.

-v <SourceFolder>:<MountPointInDockerContainer>

The rest of the command after the \ executes inside the Docker container. So in the above:

  • The container spawns as a sibling of the agent container
  • We enter a bash shell
  • The container mounts a /playbooks folder containing the source code from the build artifacts
  • We connect to Azure
  • We run an Ansible playbook
  • The playbook finds all virtual machine scale sets in a resource group matching a name pattern
  • It applies configuration by setting logstash to auto-reload config files when they change
  • It applies configuration by copying files

Ansible

The above is used to deploy configurations to an Azure virtual machine scale set. Ansible has a feature called dynamic inventory; we will leverage it to detect all active nodes/instances in a VMSS.

The structure of ansible is as follows:
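
(A sketch based on the files referenced in this post – your exact layout may differ.)

config/
├── ansible/
│   ├── inventory_dev_azure_rm.yml
│   ├── group_vars/
│   │   └── logstash_hosts.yml
│   └── logstash-playbook.yaml
└── pipelines/
    ├── conf.d/
    └── register/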

Ansible Dynamic Inventory

So let's see how Ansible can be used to detect all running instances in an Azure virtual machine scale set.

inventory_dev_azure_rm.yml

Below, it will detect any VMSS cluster in the resource group rom-dev-elk-stack that has logstash in the name.

plugin: azure_rm

include_vmss_resource_groups:
- rom-dev-elk-stack

conditional_groups:
  logstash_hosts: "'logstash' in name"

auth_source: auto

logstash_hosts.yml (Ensure this lives in a group_vars folder)

Now, I can configure ssh using a username or ssh keys.

---
ansible_connection: ssh
ansible_ssh_user: logstash

logstash-playbook.yaml

Below I now have ansible doing some configuration checks for me on a logstash pipeline (upstream/downstream architecture).


    - name: Logstash auto reloads check interval
      lineinfile:
        path: /etc/logstash/logstash.yml
        regexp: '^config\.reload\.interval'
        line: "config.reload.interval: 30s"
      become: true
      notify:
        - restart_service

    - name: Copy pipeline configs
      copy:
        src: ../pipelines/conf.d/
        dest: /etc/logstash/conf.d/
        owner: logstash
        group: logstash
      become: true
    
    - name: Copy pipeline settings
      copy:
        src: ../pipelines/register/
        dest: /etc/logstash/
        owner: logstash
        group: logstash
      become: true

To improve security – replace user/password ansible login with an SSH key pair.
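
For example (a sketch – the key path is a placeholder), the group_vars file above would then look like this, and the ansible_ssh_pass extra-var in the pipeline is no longer needed:

---
ansible_connection: ssh
ansible_ssh_user: logstash
ansible_ssh_private_key_file: ~/.ssh/logstash_id_rsa   # hypothetical key path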

References

To read up more about Docker socket mount points, check out:

https://www.develves.net/blogs/asd/2016-05-27-alternative-to-docker-in-docker/

https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html

Thanks to Shawn Wang and Ducas Francis for the inspirations on Docker Socket.

https://azure.microsoft.com/en-au/services/devops/

Azure Streaming Analytics – Beware – JSON Engine

After investigating an issue with Azure Streaming Analytics, we discovered it cannot deserialise JSON documents that have property names differing only in case, e.g.

{
  "Proxy": "abc",
  "proxy": "def"
}

If you send the above payload to a Streaming Analytics Job, it will fail.

Source ‘<unknown_location>’ had 1 occurrences of kind ‘InputDeserializerError.InvalidData’ between processing times ‘2020-03-30T00:19:27.8689879Z’ and ‘2020-03-30T00:19:27.8689879Z’. Could not deserialize the input event(s) from resource ‘Partition: [8], Offset: [1], SequenceNumber: [1]’ as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format

We opened a ticket with Microsoft. This was the response.

“Hi Romiko,

Thank you for being patience with us. I had further discussion with our ASA PG and here’s our findings.

Findings

ASA unfortunately does not support case sensitive column. We understand it is possible for json documents to add to have two columns that differ only in case and that some libraries support it. However there hasn’t been a compelling use case to support it. We will update the documentation as well. 

We are sorry for the inconvenience. If you have any questions or concerns, please feel free to reach out to me. I will be happy to assist you.”

Indeed, other libraries do support this, such as PowerShell, C#, Python, etc.

C#

using Newtonsoft.Json;

var data = "{'name': 'Ashley', 'Name': 'Romiko'}";
dynamic message = JsonConvert.DeserializeObject(data);
var text = JsonConvert.SerializeObject(message);

I have built the following tool on github as a workaround for streaming data to elastic. https://github.com/Romiko/EventHub-CheckMalformedEvents

A significant reason why Microsoft should support it is the Elastic Common Schema (ECS), a specification that provides a consistent and customisable way to structure your data in Elasticsearch, facilitating the analysis of data from diverse sources. With ECS, analytics content such as dashboards and machine learning jobs can be applied more broadly, searches can be crafted more narrowly, and field names are easier to remember.

When introducing a new schema, you always have to deal with existing/custom data. Elastic have an ingenious way to solve this: all fields in ECS are lowercase, so your existing custom fields are guaranteed not to conflict if you use uppercase names.

Let us reference Elastic's advice:

https://www.elastic.co/guide/en/ecs/current/ecs-custom-fields-in-ecs.html


Elastic, who deal with big data all the time, recommend using Proxy vs proxy to ensure a migration to ECS is a viable, conflict-free exercise.

Conclusion

If you are migrating huge amounts of data to the Elastic Common Schema (ECS), consider whether Azure Streaming Analytics is a good fit, given this JSON limitation.

You can also vote to fix this issue here and improve Microsoft’s product offering 🙂

https://feedback.azure.com/forums/270577-stream-analytics/suggestions/40122079-azure-stream-analytics-to-be-able-to-handle-case-s

Troubleshooting Azure Event Hubs and Azure Streaming Analytics

When you are dealing with millions of events per day (in JSON format), you need a debugging tool for events that do not behave as expected.

Recently we had an issue where an Azure Streaming Analytics job was in a degraded state. A colleague eventually traced the issue to the output of the job.

The error message was very misleading.

[11:36:35] Source 'EventHub' had 76 occurrences of kind 'InputDeserializerError.TypeConversionError' between processing times '2020-03-24T00:31:36.1109029Z' and '2020-03-24T00:36:35.9676583Z'. Could not deserialize the input event(s) from resource 'Partition: [11], Offset: [86672449297304], SequenceNumber: [137530194]' as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format\r\n"

The source of the issue was Cosmos DB – we needed to increase the RUs – yet the error seemed to indicate a serialization issue.

We developed a tool that could subscribe to events at exactly the same time of the error, using the sequence number and partition.

We also wanted to be able to use the tool for a large number of events, around one million per hour.

The tool below is an Event Hub .NET client (a console app, of course), optimised to use as little memory as possible and to leverage asynchronous file writes for an optimal event subscription experience.

I have purposely avoided the Newtonsoft library for the final file write to improve performance.

The output will be a json array of events.
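
The tool reads its settings from an appsettings.json next to the binary. Based on the configuration keys read in the code below, a sketch of that file (all values are placeholders, taken from the error message above) would be:

{
  "ConnectionStrings": {
    "eventhub": "<event hub connection string, including EntityPath>",
    "consumergroup": "$Default"
  },
  "partitionId": "11",
  "SequenceNumber": "137530194",
  "ProcessingEnqueueEndTimeUTC": "2020-03-24T00:36:35Z",
  "TerminateAfterSeconds": "60"
}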

using System;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs.Consumer;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;

namespace CheckMalformedEvents
{
    class GetMalformedEvents
    {
        private static string partitionId;
        private static IConfigurationRoot configuration;
        private static string connectionString;
        private static string consumergroup;
        private static EventHubConsumerClient eventHubClient;
        private static EventPosition startingPosition;
        private static DateTimeOffset processingEnqueueEndTimeUTC;

        static void Main(string[] args)

        {
            Init();
            ShowIntro();

            try
            {
                GetEvents(eventHubClient, startingPosition, processingEnqueueEndTimeUTC).Wait();
            }
            catch (AggregateException e)
            {
                Console.WriteLine($"{e.Message}");

            }
            catch (Exception e)
            {
                Console.WriteLine($"{e.Message}");
            }
        }

        private static void Init()
        {
            var builder = new ConfigurationBuilder()
                            .SetBasePath(Directory.GetCurrentDirectory())
                            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

            configuration = builder.Build();

            connectionString = configuration.GetConnectionString("eventhub");
            consumergroup = configuration.GetConnectionString("consumergroup");
            eventHubClient = new EventHubConsumerClient(consumergroup, connectionString);

            partitionId = configuration["partitionId"];
            if (long.TryParse(configuration["SequenceNumber"], out long sequenceNumber) == false)
                throw new ArgumentException("Invalid SequenceNumber");

            processingEnqueueEndTimeUTC = DateTimeOffset.Parse(configuration["ProcessingEnqueueEndTimeUTC"]);
            startingPosition = EventPosition.FromSequenceNumber(sequenceNumber);
        }

        private static void ShowIntro()
        {
            Console.WriteLine("This tool is used to troubleshoot malformed messages in an Azure EventHub");
            Console.WriteLine("Sample Error Message to troubleshoot - First get the errors from the Streaming Analytics Jobs Input blade.\r\n");
        }

        private static async Task<CancellationTokenSource> GetEvents(EventHubConsumerClient eventHubClient, EventPosition startingPosition, DateTimeOffset endEnqueueTime)
        {
            var cancellationSource = new CancellationTokenSource();
            if (int.TryParse(configuration["TerminateAfterSeconds"], out int TerminateAfterSeconds) == false)
                throw new ArgumentException("Invalid TerminateAfterSeconds");

            cancellationSource.CancelAfter(TimeSpan.FromSeconds(TerminateAfterSeconds));
            string path = Path.Combine(Directory.GetCurrentDirectory(), $"{Path.GetRandomFileName()}.json");

            int count = 0;
            byte[] encodedText;
            using FileStream sourceStream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Write, bufferSize: 4096, useAsync: true);
            {
                encodedText = Encoding.Unicode.GetBytes("{\r\n\"events\": [" + Environment.NewLine);
                await sourceStream.WriteAsync(encodedText, 0, encodedText.Length);
                encodedText = Encoding.Unicode.GetBytes("");
                await foreach (PartitionEvent receivedEvent in eventHubClient.ReadEventsFromPartitionAsync(partitionId, startingPosition, cancellationSource.Token))
                {
                    if (encodedText.Length > 0)
                        await sourceStream.WriteAsync(encodedText, 0, encodedText.Length);

                    count++;
                    using var sr = new StreamReader(receivedEvent.Data.BodyAsStream);
                    var data = sr.ReadToEnd();
                    var partition = receivedEvent.Data.PartitionKey;
                    var offset = receivedEvent.Data.Offset;
                    var sequence = receivedEvent.Data.SequenceNumber;

                    try
                    {
                        IsEventValidJson(count, receivedEvent, data, partition, offset, sequence);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine($"Serialization issue Partition: { partition}, Offset: {offset}, Sequence Number: { sequence }");
                        Console.WriteLine(ex.Message);
                    }

                    if (receivedEvent.Data.EnqueuedTime > endEnqueueTime)
                    {
                        Console.WriteLine($"Last Message EnqueueTime: {receivedEvent.Data.EnqueuedTime:o}, Offset: {receivedEvent.Data.Offset}, Sequence: {receivedEvent.Data.SequenceNumber}");
                        Console.WriteLine($"Total Events Streamed: {count}");
                        Console.WriteLine($"-----------");
                        break;
                    }

                    encodedText = Encoding.Unicode.GetBytes(data + "," + Environment.NewLine);
                    await sourceStream.WriteAsync(encodedText, 0, encodedText.Length);
                }
                encodedText = await FinaliseFile(encodedText, sourceStream);
            }

            Console.WriteLine($"\r\n Output located at: {path}");
            return cancellationSource;
        }

        private static async Task<byte[]> FinaliseFile(byte[] encodedText, FileStream sourceStream)
        {
            await sourceStream.WriteAsync(encodedText, 0, encodedText.Length - 6); //Remove ,\r\n on last line
            encodedText = Encoding.Unicode.GetBytes("]\r\n}" + Environment.NewLine);
            await sourceStream.WriteAsync(encodedText, 0, encodedText.Length);
            return encodedText;
        }

        private static void IsEventValidJson(int count, PartitionEvent receivedEvent, string data, string partition, long offset, long sequence)
        {
            dynamic message = JsonConvert.DeserializeObject(data);
            message.AzureEventHubsPartition = partition;
            message.AzureEventHubsOffset = offset;
            message.AzureEventHubsSequence = sequence;
            message.AzureEnqueuedTime = receivedEvent.Data.EnqueuedTime.ToString("o");

            if (count == 1) // count is incremented before this call, so the first event is 1
                Console.WriteLine($"First Message EnqueueTime: {message.AzureEnqueuedTime}, Offset: {message.AzureEventHubsOffset}, Sequence: {message.AzureEventHubsSequence}");
        }
    }
}

The next time you need to subscribe to Event Hubs to diagnose an issue with a particular event, I recommend using this tool to get the events you are interested in analysing.

Thank you.

What is Devops – Part 1

Patrick Debois from Belgium is the actual culprit to blame for the term DevOps; he wanted more synergy between developers and operations back in 2007.

Fast-forward a few years and now we have "DevOps" everywhere we go. If you are using the coolest tools in town, such as Kubernetes, Azure DevOps Pipelines, Jenkins, Grafana, etc., then you probably reckon you are heavy into DevOps. This could not be further from the truth.

The fact is that DevOps is more about a set of patterns and practices within a culture that nurtures shared responsibility across all teams during the software development life cycle.

Put it this way: if you only have one dude in your team who is "doing DevOps", then you may want to consider whether you are really implementing DevOps or one of its anti-patterns. Ultimately you need to invest in everyone within the SDLC teams to get on board with the cultural shift.

If we cannot get the majority of engineers involved in the SDLC to share responsibilities, then we have failed at our DevOps objectives, even if we are using the latest cool tools from Prometheus to AKS/GKE. In a recent project I was engaged in, there was only one DevOps dude; when he fell ill, nobody from any of the other engineering teams could perform his duties, despite the fact that Confluence held numerous playbooks and how-tos. Why?

It comes down to people, process and culture – all of which can be remedied with strong technical leadership and by encouraging your engineers to work with the process and tools in their daily routine. This is why I encourage developers who host their code on Kubernetes to use Minikube on their laptops.

If there is any advice I can give teams that want to implement DevOps, it is this: focus on people, then process, and finally the tools.

To set the transition up for success, we will discuss the pillars of DevOps in the next part of this series.

Installing Kubernetes – The Hard Way – Visual Guide

This is a visual guide to complement Kelsey Hightower's Git project, Kubernetes The Hard Way, and the process of setting up your own Kubernetes cluster on Google Cloud. It can be challenging to remember all the steps along the way; I found having a visual guide like this valuable for refreshing my memory.

Provision the network in Google Cloud

VPC

Provision Network

Firewall Rules

External IP Address

Provision Controllers and Workers – Compute Instances

Controller and Worker Instances

Workers will have pod CIDR

10.200.0.0/24

10.200.1.0/24

10.200.2.0/24

Provision a CA and TLS Certificates

Certificate Authority

Client & Server Certificates

Kubelet Client Certificates

Controller Manager Client Certificates

Kube Proxy Client Certificates

Scheduler Client Certificates

Kubernetes API Server Certificate

Reference https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md

Service Account Key Pair

Certificate Distribution – Compute Instances

Generating Kubernetes Configuration Files for Authentication

Generating the Data Encryption Config and Key

Bootstrapping etcd cluster

Use the tmux "set synchronize-panes on" option to run commands on multiple instances at the same time. Saves time!

Notice we are using tmux in a Windows Ubuntu Linux Subsystem (WSL) and running commands in parallel to save a lot of time.

The only manual step is to SSH into each controller; once in, we activate the tmux synchronize feature, so what you type in one pane is duplicated to all the others.

Bootstrapping the Control Plane (services)

Bootstrapping the Control Plane (LB + Health)

Nginx is required, as Google health checks do not support HTTPS.

Bootstrapping the Control Plane (Cluster Roles)

Bootstrapping the Worker Nodes

Configure kubectl remote access

Provisioning Network Routes

DNS Cluster Add-On

First Pod deployed to cluster – using CoreDNS

Smoke Test

Once you have completed the install of your Kubernetes cluster, ensure you tear it down after some time so you do not keep getting billed for the six compute instances, the load balancer and the public static IP address.

A big thank you to Kelsey for setting up a really comprehensive instruction guide.

Minikube + Cloud Code + VSCode – Windows Development Environment

As a developer, you can deploy your Docker containers to a local Kubernetes cluster on your laptop using Minikube, and then use the Google Cloud Code extension for Visual Studio Code.

You can then make real time changes to your code and the app will deploy in the background automatically.

  1. Install kubectl – https://kubernetes.io/docs/tasks/tools/install-kubectl/
  2. Install minikube – https://kubernetes.io/docs/tasks/tools/install-minikube/
    1. For Windows users, I recommend the Chocolatey approach
  3. Configure Google Cloud Code to use minikube.
  4. Deploy your application to your local minikube cluster in Visual Studio Code
  5. Ensure you add your container registry in the .vscode\launch.json file (see Appendix); Cloud Code drives the deployment via the skaffold.yaml referenced there (sketched below)
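
A minimal skaffold.yaml sketch (the image name, manifest path and apiVersion are placeholders – match them to your project and your Skaffold/Cloud Code version):

apiVersion: skaffold/v2beta12   # use the schema version your Skaffold install expects
kind: Config
build:
  artifacts:
    - image: nodejs-hello-world   # Cloud Code prefixes this with the imageRegistry from launch.json
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml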

Ensure you are running Visual Studio Code as Administrator.

Once deployed, you can make changes to your code and it will automatically be deployed to the cluster.

Quick Start – Create minikube Cluster in Windows (Hyper-V) and deploy a simple web server.

minikube start --vm-driver=hyperv
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --type=NodePort --port=8080
kubectl get pod
minikube service hello-minikube --url
minikube dashboard

Grab the output from minikube service hello-minikube --url and browse your web app/service.

Appendix

Starting the Cluster and deploying a default container.

VS Code Deployment

  • Setup your Container Registry in the .vscode\launch.json
  • Click Cloud Code on the bottom tray
  • Click “Run on Kubernetes”
  • Open a separate command prompt as administrator

.vscode\launch.json

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run on Kubernetes",
            "type": "cloudcode.kubernetes",
            "request": "launch",
            "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
            "watch": true,
            "cleanUp": true,
            "portForward": true,
            "imageRegistry": "romikocontainerregistry/minikube"
        },
        {
            "podSelector": {
                "app": "node-hello-world"
            },
            "type": "cloudcode",
            "language": "Node",
            "request": "attach",
            "debugPort": 9229,
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/hello-world",
            "name": "Debug on Kubernetes"
        }
    ]
}
minikube dashboard

We can see our new service being deployed by the VS Code Cloud Code extension. Whenever we make changes to the code, it will automatically deploy.

minikube service nodejs-hello-world-external --url

The above command will give us the url to browse the web app.

If I now change the "Hello, world!" text, it will automatically deploy. Just refresh the browser 🙂

Here in the status bar we can see deployments as we update code.

Debugging

Once you have deployed your app to Minikube, you can then kick off debugging. This is pretty awesome: your development environment is now a full Kubernetes stack with attached debugging, providing a seamless experience.

Check out https://cloud.google.com/code/docs/vscode/debug#nodejs for more information.

You will notice that in the launch.json file we set up the debugger port. Below I am using port 9229, so all I need to do is start the app with

CMD ["node", "--inspect=9229", "app.js"]

or in launch.json set "args": ["--inspect=9229"] (only supported in the launch request type).

Also ensure the Pod Selector is correct. You can use the pod name or label. You can confirm the pod name using the minikube dashboard.

http://127.0.0.1:61668/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/pod?namespace=default

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run on Kubernetes",
            "type": "cloudcode.kubernetes",
            "request": "launch",
            "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
            "watch": true,
            "cleanUp": true,
            "portForward": true,
            "imageRegistry": "dccausbcontainerregistry/minikube",
            "args": ["--inspect=9229"]
        },
        {
            "name": "Debug on Kubernetes",
            "podSelector": {
                "app": "nodejs-hello-world"
            },
            "type": "cloudcode",
            "language": "Node",
            "request": "attach",
            "debugPort": 9229,
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/hello-world"
        }
    ]
}

Now we can do the following

  1. Click Run on Kubernetes
  2. Set a Break Point
  3. Click Debug on Kubernetes

TIPS

  • Run command prompt, powershell and vscode as Administrator
  • Use Choco for Windows installs
  • If you are going to reboot, sleep or shut down your machine, please run:
minikube stop

If you do not, you risk corrupting Hyper-V and you will get all sorts of issues.

AzureAD Identity – Implicit Grant between a SPA and API Gateway

Overview

This article discusses a minimally complex design for when you have several SPA apps, or other apps, that call a global API. The design is future-proof for when you later need to introduce an API gateway, since an API gateway uses a similar concept of one access token for all APIs.

It applies if you have a new in-house client app that requires AAD integration for authentication and authorisation, and your apps are all exposed within your organisation's network only.

Anyone in the organisation can create apps; however, it is important to adhere to some conventions, especially if you want to reduce the cost of onboarding new apps and the administrative overhead.

If you can afford it, I would recommend https://www.okta.com/ – day-to-day identity management operations currently carry a far higher OPEX cost on AAD than on Okta. So let's get started with the Implicit Grant flow for SPA apps. Hopefully Microsoft will support Authorization Code with PKCE for SPAs soon.

Diagram showing the Implicit Grant flow in an AAD environment

Design

  • App registrations are created per environment (Dev, QA, UAT, Prod)
  • All physical APIs you host are linked to one app registration in AAD (same audience)
  • All clients have their own app registration using the API audience
  • The API exposes a default scope – later, when you implement an API gateway, scope logic can reside on the gateway. This keeps the design future-proof.

This is for SPA apps that use the Implicit Grant flow, which is currently the only flow AAD supports for SPAs; otherwise Authorization Code flow with PKCE is superior, which Okta supports 🙂

  1. Users are added to Azure AD groups
  2. One Global API app registration per environment (Global API)
  3. Application roles are ONLY created in the Global API app registration
  4. AD groups are assigned to application roles

Diagram illustrating the AAD relationship in this design. We only support one audience (the Global API).


Diagram illustrating the AAD implementation strategy regarding users, groups, apps and application roles (API)

Implicit Grant: SPA -> Global API

Prerequisite – you already have a Global API app registration that exposes an API in AAD.

  • Go to the Azure Portal
  • Click "App registrations"
  • Click "New registration"
  • Provide a name that adheres to the convention – Department <env> <AppName>SPA
  • Once the app is created, go to the Branding blade, ensure the domain is verified and select your domain name
  • Add any custom branding, e.g. a logo
  • Click the Authentication blade
  • Enable the Implicit Grant flow by ticking ID tokens and Access tokens
  • Record the app ID by clicking Overview and copying the Application (client) ID

If you do not want a consent dialog to show when the user uses the app, you need to add your shiny new app to the Global API app as a trusted client application. Each environment has a Global API, e.g. Department <envcode> API.

Navigate to the Department <envcode> API app registration

Click "Expose an API"

Click "Add a client application"

There is another method: add API permissions via the SPA app instead of trusted clients; however, this requires Global Tenant Admin consent grants.

  • Add the API in the API Permissions blade, then
  • Click the "Grant/revoke admin consent for XYZ Company Limited" button, and select Yes when asked if you want to grant consent for the requested permissions for all accounts in the tenant. You need to contact an Azure AD tenant admin to do this.

You are now ready to use your new app for authentication. Update your code base to use a supported OAuth 2.0 and OpenID Connect library, e.g. MSAL.js:
https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-v2-libraries

Ensure you are compliant, e.g. tokens are being verified:

  • All access tokens issued for the Global API are valid for 1 hour. You will need to refresh the token for a new one; the library will do this for you.
  • Ensure all tokens are verified on the SPA side (ID token and access token)
  • NEVER send the ID token to the API – only the access token (Bearer)
  • AVOID storing the access token in an insecure storage location

Global API Manifest (Global API App Registration)

  • Create your Global API app registration
  • Ensure the branding is correct
  • Expose an API, e.g. a default scope
  • Add trusted client apps, like the SPA above
  • Edit the manifest to include any application or user roles. Application roles are great for service-to-service communication, which is not covered in this article.

Here is where you define roles in the app manifest on the Global API App.

"appRoles": [
{
"allowedMemberTypes": [
"User"
],
"description": "Standard users of MyApp1 SPA.",
"displayName": "MyApp1-Payroll",
"id": "12343b9d-2e21-4ac3-b3cd-f0227588fc03",
"isEnabled": true,
"lang": null,
"origin": "Application",
"value": "MyApp1-Payroll"
},
...

Your API will implement RBAC based on the app roles defined in the Global API manifest. Add additional app roles in the AAD portal as needed:

[Authorize(Roles = "Manager")]
public class ManagerController : Controller
{
}
 
[Authorize(Roles = "Administrator")]
public class AdministrationController : Controller
{
}
 
public class HomeController : Controller
{
}
 
//Policy based
[Authorize(Policy = "AtLeast21")]
public class AlcoholPurchaseController : Controller
{
    public IActionResult Index() => View();
}

Ensure you are using Microsoft Identity V2!

This design provides the least administrative overhead for apps that need to be published internally within the organisation, while providing a decent level of security coverage.