Virtual Thoughts

Virtualisation, Storage and various other ramblings.


Application security with mutual TLS (mTLS) via Istio

TLS Overview

If we take an example of accessing a website such as https://www.virtualthoughts.co.uk/, these are the high-level steps of what occurs:

 

 

 

  1. The client initiates a connection to the web server requesting an HTTPS connection.
  2. The web server responds with its certificate, which contains its public key. The client validates the certificate against its list of known Certificate Authorities.
  3. The client generates a session key, encrypts it with the web server’s public key and sends it back to the web server.
  4. The web server decrypts the session key with its private key. End to end encryption is established.

By default, the TLS protocol only proves the identity of the server to the client using X.509 certificates; authentication of the client to the server is left to the application layer. For external, public-facing websites, this is an acceptable and well-established implementation of TLS. But what about communication between different microservices?

 

As opposed to monolithic applications, microservices are usually loosely coupled and inter-connected, which allows them to be scaled and modified independently. But this does raise some challenges. For example:

  • How do we ensure service-to-service communication is always encrypted?
  • How do we do this without changing the application source code?
  • How can we automatically secure communication when we introduce a new service to an application?
  • How can we authenticate clients and servers and fully establish a “zero trust” network policy?

Istio can help us address these challenges:

Example Application

To demonstrate Istio’s mTLS capabilities, a WordPress Helm chart was deployed into a namespace with automatic sidecar injection. Instructions for installing and configuring Istio can be found in a previous blog post. By default, the policy specifies no mTLS between the respective services. As such, the topology of the solution is depicted below:

 

 

We can validate this by using istioctl:

 

The Envoy proxies for all of the “testsite” services (WordPress frontend and backend) are using HTTP as their transport mechanism; therefore, mTLS has not been configured yet.
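As a rough sketch of that validation step (the pod name below is illustrative, not taken from the original output), the Istio 1.x command looks something like this:

istioctl authn tls-check vt-wordpress-6d9f8c7b4-abcde.wordpress-app vt-wordpress-mariadb.wordpress-app.svc.cluster.local

The SERVER and CLIENT columns in the output indicate whether each side of the connection is using HTTP or mTLS for that host.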

Creating Istio Objects – Policy and Destination Rules

As you might expect, establishing mutual TLS (mTLS) is a two-part process. First, we must configure the servers to accept mTLS, and then configure the clients to use it. This is accomplished with Policy and DestinationRule objects.

Policy (AKA – what I, the server, will accept)

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
 name: "default"
 namespace: "wordpress-app"
spec:
 peers:
 - mtls: {}

This example policy strictly enforces mTLS-only connections between services within the “wordpress-app” namespace.

DestinationRule (AKA – what I, the client, will send out)

 apiVersion: "authentication.istio.io/v1alpha1" 
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
 name: "vt-wordpress-mariadb"
 namespace: "wordpress-app"
spec:
 host: "*.wordpress-app.svc.cluster.local"
 trafficPolicy:
 tls:
 mode: ISTIO_MUTUAL

This example enforces the use of mutual TLS when communicating with any service in the wordpress-app namespace. Applying these and re-running the previous istioctl command yields the following result:
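Assuming the Policy and DestinationRule above are saved locally (the file names here are illustrative), applying them is simply:

kubectl apply -f mtls-policy.yaml
kubectl apply -f mtls-destinationrule.yaml

Re-running istioctl authn tls-check should then report mTLS for both client and server.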

This is accomplished largely due to Citadel – a component in the Istio control plane that manages certificate creation and distribution:

When mTLS is configured the traffic flow (from a high level) can be described as follows:

  • Citadel provides certificates to the sidecar pods and manages them.
  • The WordPress pod creates a packet to query the MySQL database.
    • The WordPress Envoy sidecar intercepts this, establishes a connection to the destination sidecar and presents its certificate for authenticity.
  • The MySQL Envoy sidecar receives the connection request, validates the client's certificate and sends its own back.
  • The WordPress Envoy sidecar receives MySQL's certificate and checks it for authenticity.
  • Both proxies are in agreement as to each other’s identity and establish an encrypted tunnel between the two.

This is what makes it “mutual” TLS. In effect, both services present, inspect and validate each other’s certificate as a prerequisite for service-to-service communication. This differs from the standard HTTPS site described earlier, where only the client was validating the server.

Additional Comments

Some additional observations I’ve made from this exercise:

  • If enforcing strict mTLS on a service that’s exposed externally from a load balancer, your clients will obviously need to send X.509 certificates that can be validated by Citadel. A more flexible alternative is to employ an Istio gateway that provides TLS termination at the cluster boundary. This negates the need to provision certificates to each and every client, whilst maintaining mTLS within the cluster.
  • Envoy sidecars can interfere with HTTP liveness probes, and you might need to set sidecarInjectorWebhook.rewriteAppHTTPProbe=true when installing Istio via Helm, as shown in the sketch below.
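A minimal sketch of that setting, assuming the Helm 2 / Istio 1.2 chart layout used later in this post:

helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set sidecarInjectorWebhook.rewriteAppHTTPProbe=true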

Step by Step – Istio up and running

Service Mesh is a pretty hot topic in the Kubernetes ecosystem currently, and I wanted to get it up and running in my own lab environment. Istio’s documentation has a pre-baked solution to demonstrate some of its capabilities (a book app, if memory serves me correctly), but I wanted to deploy my own app to get more “hands-on” experience with the tech, even if it’s only very basic to start with.

Install Istio

There are a number of prerequisite steps that need to be satisfied prior to installing Istio. These are specific to my environment; yours may differ.

Install the Helm client

 sudo snap install helm --classic 

Grab Istio (1.2.0 in this example)

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.0 sh -
cd istio-1.2.0/

Create the Helm service account (named “tiller”)

kubectl apply -f install/kubernetes/helm/helm-service-account.yaml

Initialise Helm using the service account specified in the previous step

helm init --service-account tiller

Create a namespace to accommodate the Istio components

kubectl create ns istio-system

Initialise Istio into the aforementioned namespace:

helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system

Monitor the state of the pods – it will take some time for them to complete, as they create the CRDs required for Istio

kubectl get pods -n istio-system
NAME                      READY   STATUS              RESTARTS   AGE
istio-init-crd-10-t82m6   0/1     ContainerCreating   0          95s
istio-init-crd-11-42622   0/1     ContainerCreating   0          95s
istio-init-crd-12-65m5v   0/1     ContainerCreating   0          95s

Install Istio into the aforementioned namespace

helm install install/kubernetes/helm/istio --name istio --namespace istio-system

Configure a namespace for automatic sidecar injection

By this point, we have the internal foundations for Istio, but we’re not leveraging it. One of the fundamental workings of Istio is the use of pod sidecars. Sidecars act as the data plane, facilitating a lot of the features we want to leverage from Istio.

The overall architecture of an Istio-based application.

Istio doesn’t do this automatically for every pod deployed into an environment; instead, it injects sidecars into pods deployed into namespaces that have the istio-injection=enabled label set.

kubectl create ns app-with-injection
namespace/app-with-injection created
kubectl label namespace app-with-injection istio-injection=enabled
namespace/app-with-injection labeled

We can validate this by creating a pod in this namespace:

kubectl run nginx -n app-with-injection --image nginx

And checking the Pod contents (notice how this pod has two containers)

kubectl get pods -n app-with-injection
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-96mbz   2/2     Running   0          52s

The proxy sidecar:

 

Environment Anatomy

The diagram below shows how my test environment is set up

 

 

Key considerations:

  • The Istio gateway will reside on the edge
  • 80% of all traffic will be routed to v1 of my web application
  • 20% of all traffic will be routed to v2 of my web application

The application manifest can be found at https://raw.githubusercontent.com/David-VTUK/istioexample/master/webapp.yaml

To accomplish this we need to implement two key objects:

Gateway

This is our entry point into our application. By default, Istio deploys an ingress gateway service (we must note the external IP):

kubectl get svc -n istio-system
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                
istio-citadel            ClusterIP      10.100.200.47                        
istio-galley             ClusterIP      10.100.200.149                       
istio-ingressgateway     LoadBalancer   10.100.200.244   10.10.20.150,100.64.80.1   
istio-pilot              ClusterIP      10.100.200.170                       
istio-policy             ClusterIP      10.100.200.3                        
istio-sidecar-injector   ClusterIP      10.100.200.169                   
istio-telemetry          ClusterIP      10.100.200.141                      
prometheus               ClusterIP      10.100.200.238                     

We configure the Gateway by deploying a gateway manifest file:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"              
  • Kind: The type of object. Gateway is a CRD (Custom Resource Definition) that Istio implements.
  • Selector: What this applies to; in this case, the default ingress gateway.
  • Ports: Which ports we want to listen on at the external IP address, together with a name and protocol.
  • Hosts: We can implement layer 7 load balancing on the edge, but as I’ll be testing this out via IP address, “*” will suffice. In production, this would likely be the FQDN of an external-facing website.

 

VirtualService

The Gateway object helps us define the entry point into the cluster, but we have yet to tell the gateway where to route traffic. This is where the VirtualService object type comes in: it is where we define routing intelligence.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: vt-webapp-v1.app-with-injection.svc.cluster.local
      weight: 80
    - destination:
        port:
          number: 80
        host: vt-webapp-v2.app-with-injection.svc.cluster.local
      weight: 20      

What the above effectively does is listen for all HTTP requests (hence the “*” under “hosts”) and route 80% of traffic to v1 of the web app and 20% to v2, by directing traffic at the respective services.

Testing

The “WebApp” is pretty simple. It displays one of the following, depending on the version:

 

What we should now see from accessing the external IP is traffic being split across both services in an 80/20 split:
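As a rough way to exercise this (10.10.20.150 being the istio-ingressgateway external IP noted earlier):

for i in $(seq 1 10); do curl -s http://10.10.20.150/; done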

Out of 10 curl commands, 8 were routed to v1 of my app and 2 were routed to v2.

Conclusion

Admittedly, this is an extremely simple example of one of Istio’s more basic use cases, but as I’m still learning, I think it’s a decent start, and I hope others find it useful.

Introducing Velero – Backup and DR for Kubernetes Applications


 

What is Velero?

Velero (previously known as Heptio ARK) provides a suite of tools to backup Kubernetes resources and applications for two main purposes:

  • Disaster Recovery – Recover Kubernetes cluster components and applications.
  • Migration – Migrate your Kubernetes applications to another Kubernetes cluster.

Migrating Kubernetes applications is a compelling use case. One of the significant benefits of using Kubernetes is the predictability of the platform and, consequently, the portability of the applications that reside on it. With the main exception of nuances with persistent storage, the Kubernetes API will feel almost indistinguishable whether it resides on-premises, GKE, AKS, EKS or elsewhere. If you have a Kubernetes-based application on one provider and want to migrate it to another, or duplicate it to run elsewhere for dev/test, this can easily be achieved, especially with Velero.

 

Install and Configure Velero

Velero consists primarily of the server (which runs in a container), a storage location (e.g. an S3 bucket) and the CLI.

Rather than reiterate what’s already in Velero’s existing and comprehensive documentation, the straightforward instructions are located at https://heptio.github.io/velero/master/.

 

Demo App

To understand and get to grips with Velero I decided to write my own application. It’s a really simple application written in Golang and does the following:

  • Every 2 seconds, output the contents of /mnt/data/data.txt with a timestamp.

The pod mounts /mnt/data as a persistent volume based on the respective claim. The file it reads is “data.txt”, which contains “some data”. The PV type is hostPath, which is suitable for testing in this single worker node cluster.
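The exact manifests aren’t reproduced here, but a minimal sketch of a hostPath PV and matching PVC for this setup (names, sizes and the storage class are illustrative) might look like:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: velero-demo-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: velero-demo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi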

The overall solution is depicted below:

After deploying the application, we can validate it’s working as expected by inspecting the stdout file descriptor with kubectl logs:

Execute a backup. (Note that in production this would likely be scheduled, and this backup includes everything. Ideally, you’d probably want to back up on a per-namespace or per-application basis.)
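The command output was captured as a screenshot; the CLI invocation itself (the backup name is illustrative) is simply:

velero backup create everything-backup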

Validate the backup:
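Again as a sketch, validation from the CLI looks like:

velero backup get
velero backup describe everything-backup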

We can also see the backup residing in the previously defined S3 bucket:

Simulating a Disaster and Restoring

To simulate a disaster, I’ll simply remove my Pod, PV and PVC:

And for good measure, issue the same command previously used to retrieve the logs:

We can use the velero restore command to pull the backup from our S3 bucket and restore our application.
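Reusing the illustrative backup name from above, the restore is a one-liner:

velero restore create --from-backup everything-backup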

Validate the restore:

Note that because the previous backup encompassed all namespaces and all resource types, Velero employs some logic to determine which objects it should restore, based on what already exists. However, my app, its namespace, PV and PVC have been restored, which we can validate with:
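For example (the namespace and pod names below are placeholders for my app's actual objects):

kubectl get ns
kubectl get pv,pvc -n demo-app
kubectl logs <pod-name> -n demo-app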

Closing Thoughts

Being able to mobilise a Kubernetes-based application on virtually any Kubernetes cluster is one of many compelling reasons to leverage this technology. Equally as important as ensuring the availability and reliability of applications is adopting a solid backup and disaster recovery solution. Velero is designed to facilitate this, and its flexibility in where to store and retrieve backups helps prevent lock-in to a particular solution, provider or vendor. Whether it’s on-premises, Azure, Google Cloud, AWS, Digital Ocean or others, disasters do happen and downtime occurs. Being able to lift and shift entire cluster resources in a standardised way is quickly becoming a solid requirement for modern applications.

Bootstrapping Prometheus, Grafana and Alertmanager to PKS deployed K8s Clusters

PKS is a comprehensive platform for the provisioning and management of Kubernetes clusters, which can be further enhanced by leveraging its extensibility options. In this post, we will modify a plan to deploy a yaml manifest file which provisions Prometheus, Grafana, and Alertmanager backed by NSX-T load balancers.

Why Prometheus, Grafana and Alertmanager?

The Cloud Native Computing Foundation accepted Prometheus as its second incubated project, the first being Kubernetes. Originally developed by SoundCloud, it has quickly become a popular platform for monitoring Kubernetes environments. Built upon a powerful analytics engine, extensive and highly flexible data modelling can be accomplished with relative ease.

Grafana is, amongst other things, a visualisation tool that enables users to graph, chart, and generally visually represent data from a wide range of sources, Prometheus being one of them.

AlertManager handles alerts that are sent by applications such as Prometheus and performs a number of operations such as deduplicating, grouping and routing.

What I wanted to do, as someone unfamiliar with these tools, was to devise a way to deploy these components in an automated fashion, so that I could destroy and recreate them with ease. The topology of the solution is depicted below:

 

 

Constructing the manifest file

TLDR; I’ve placed the entire manifest file here, which creates the following:

  • Create the “monitoring” namespace.
  • Create a service account for Prometheus.
  • Create a cluster role required for the Prometheus service account.
  • Create a cluster role binding for the Prometheus service account and the Prometheus cluster role.
  • Create a config map for Prometheus that:
    • Defines the Alertmanager target.
    • Defines K8S master, K8S worker, and cAdvisor scrape targets.
    • Defines where to input alert rules from.
  • Create a config map for Prometheus that:
    • Provides a template for alerting rules.
  • Create a single replica deployment for Prometheus.
    • Expose this deployment via “LoadBalancer” (NSX-T).
  • Create a single replica deployment for Grafana.
    • Expose this deployment via “LoadBalancer” (NSX-T).
  • Create a single replica deployment for Alertmanager.
    • Expose this deployment via “LoadBalancer” (NSX-T).

 

Bootstrapping the manifest file in PKS

Log into Ops manager and select the PKS tile:

Select a plan from the left-hand side, record the plan name for later:

Scroll down and paste the aforementioned YAML file into the Add-ons section

Save and then provision a cluster using the aforementioned plan:


david@mgmt-jumpbox:~$ pks create-cluster k8s --external-hostname k8s.virtualthoughts.co.uk --plan medium-with-prometheus-grafana-alertmanager --num-nodes 2

After which, execute the following to acquire the list of load balancer IP addresses for the respective services:

david@mgmt-jumpbox:~$ kubectl get svc -n monitoring
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)        AGE
alertmanager   LoadBalancer   10.100.200.218   100.64.96.5,172.16.12.129   80:31747/TCP   100m
grafana        LoadBalancer   10.100.200.89    100.64.96.5,172.16.12.128   80:31007/TCP   100m
prometheus     LoadBalancer   10.100.200.75    100.64.96.5,172.16.12.127   80:31558/TCP   100m

Prometheus – Quick tour

Logging into http://prometheus-lb-vip/targets will display the list of scrape targets for Prometheus, which have been configured via the respective configmap and include:

  • API server (Master)
  • Nodes (Workers)
  • cAdvisor (Pods)
  • Prometheus (self)

Which can be graphed / modelled / queried:

 

Grafana – Quick Tour

Out of the box, Grafana has very limited configuration applied. I struggled a little bit with constructing a configmap that would automatically add Prometheus as a data source, so a little bit of manual configuration is required (for now). Accessing http://grafana-lb-vip will prompt for a logon (admin/admin is the default):

 

Add a data source:

Hint : use “prometheus.monitoring.svc.cluster.local” as the source URL

After creating a dashboard (or importing one) we can validate Grafana is extracting information from Prometheus

 

 

Alertmanager – Quick Tour

Alertmanager can be accessed via http://alertmanager-lb-vip. From the YAML manifest it has a vanilla config, but Prometheus is configured to use it as an alert target via the configmap:

  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
      
    alerting:
      alertmanagers:
        - static_configs:
          - targets: 
            - "alertmanager.monitoring.svc.cluster.local:80"

Alerts need to be configured in Prometheus in order for Alertmanager to ingest, deduplicate and forward them.
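For illustration (this exact rule isn't taken from my manifest), a Prometheus alerting rule fed in via the rules configmap might look like:

groups:
- name: example-rules
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "{{ $labels.instance }} has been unreachable for 5 minutes"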

As an example, I tested some integration with Discord:

Practical to send alerts to a Discord server? Probably not.

Fun? Yes!

CKA Exam Experience (Inc study & lab guide)

Introduction

Over the long bank holiday weekend, I sat and passed the Certified Kubernetes Administrator (CKA) exam. This blog post goes over my experience (with respect to the NDA), together with a lab guide I’ve made and uploaded in the hope it might help others.

 

 

Format

The online exams consist of a set of performance-based items (problems) to be solved on the command line. For the CKA there are 24 questions of varying difficulty. At the time of writing, the only option to sit this exam is through remote proctoring.

This link contains the most pertinent information to assimilate.

 

Experience

I’m a huge fan of practical exams. I’m so glad the powers that be decided to go down this route. I absolutely loathe multiple-choice exams for many reasons. The remote proctoring was a new experience for me, and I wasn’t completely comfortable with it. Given the choice, I would have preferred to go to a test center. I hope The Linux Foundation adds this option in the future.

I sat the exam the first time around feeling relatively confident, but knew I had some weaker areas. After a painstaking wait, I received an email informing me that I hadn’t passed.


I brushed myself off, crammed the areas I was weaker on, took the exam again and waited….

…and waited

From my experience, as a techie, I get a much more accomplished feeling when passing practical exams, such as this / VCAP’s etc. Either way, to say I was happy would be a gross understatement.

 

Takeaways

  • A lot of people say this exam is “hard”. I get really discouraged reading about people’s exam experiences describing exams as “hard”. I would say a more accurate adjective for this exam would be “fair”. Know the curriculum, practice your craft, and you’ll get there.
  • Lean on the documentation as much as you need to. You have access to kubernetes.io/docs during the exam.
  • It’s a practical exam, so practice, practice, and practice some more.
  • You get a free retake, so don’t worry if you don’t pass first time.
  • kubectl run somedeployment --image=nginx --replicas=5 --dry-run -o yaml. Output existing or new objects to a yaml file if you need to make finer adjustments or create objects from scratch.

 

Lab Guide

My revision approach for this exam predominantly consisted of:

  1. Reading up on the topics
  2. Apply the knowledge to practical examples
  3. Validate the approach

I ended up with three documents:

  • Revision Notes
  • Practice lab exercises
  • Practice lab exercises answers (writing this helped me commit this information to memory)

 

All of which can be found at https://github.com/David-VTUK/CKA-StudyGuide

 

Exposing the K8s dashboard via an NSX-T load balancer

For the following to work, your k8s infrastructure needs to leverage some kind of CNI that’s able to provision load balancers. For this example I’m leveraging PKS which has native integration with NSX-T.

The default way to access the Kubernetes dashboard is to leverage the kubectl proxy command. However, this is somewhat limiting for a production environment. An alternative way is to expose the dashboard through a load balancer.

 

Modify the dashboard service by executing kubectl -n kube-system edit service kubernetes-dashboard and modifying the “type” field from “ClusterIP” to “LoadBalancer”.
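Alternatively, as a sketch (not how I originally did it), a one-liner patch achieves the same result:

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'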

 

Afterwards, the service will be reconfigured to be presented by a load balancer external VIP.

At which point we can access it directly:

 

PKS, Harbor and the importance of container registries

What are container registries and why do we need them?

A lot of the time, particularly when individuals and organisations are evaluating, testing and experimenting with containers, they will use public container registries such as Docker Hub. These public registries provide an easy-to-use, simple way to access images. As developers, application owners, system admins and others gain familiarity and experience, additional operational considerations need to be explored, such as:

  • Organisation – How can we organise container images in a meaningful way? Such as by environment state (Prod/Dev/Test) and application type?
  • RBAC – How can we implement role-based access control to a container registry?
  • Vulnerability Scanning – How can we scan container images for known vulnerabilities?
  • Efficiency – How can we centrally manage all our container images and deploy an application from them?
  • Security – Some images need to be kept under lock and key, rather than hosted on an external service like Docker Hub.

Introducing VMware Harbor Registry

VMware Harbor Registry has been designed to address these considerations as an enterprise-class container registry solution with integration into PKS. In this post, we’ll have a quick primer on getting up and running with Harbor in PKS and explore some of its features. To begin, we need to download PKS Harbor from the Pivotal site and import it into Ops Manager.

After which the tile will be added (When doing this for the first time it will have an orange bar at the bottom. Press the tile to configure).

The following need to be defined with applicable parameters to suit your environment.

  • Availability Zone and Networks – This is where the Harbor VM will reside, and the respective configuration will be dependent on your setup.
  • General – Hostname and IP address settings
  • Certificate – Generate a self-signed certificate, or BYOC (bring your own certificate)
  • Credentials – Define the local admin password
  • Authentication – Choose between
    • Internal
    • LDAP
    • UAA in PKS
    • UAA in PAS
  • Container Registry store – Choose where to store container images pushed to Harbor
    • Local file system
    • NFS Server
    • S3 Bucket
    • Google Cloud Storage
  • Clair Proxy Settings
  • Notary settings
  • Resource Config

VMware Harbor Registry – Organisation

Harbor employs the concept of “projects”. Projects are a way of collecting images for a specific application or service. When images are pushed to Harbor, they reside within a project:

 

Projects can either be private or public and can be configured during, or after, project creation:

A project comprises a number of components:

 

VMware Harbor Registry – RBAC

In Harbor, we have three role types we can assign to projects:

 


Image source: https://github.com/goharbor/harbor/blob/master/docs/user_guide.md#managing-projects

  • Guest – Read-only access, can pull images
  • Developer – Read/write access, can pull and push images
  • Admin – Read/Write access, as well as project-level activities, such as modifying parameters and permissions.

As a practical example, AD groups can be created to facilitate these roles:

And these AD groups can be mapped to respective permissions within the project

 

Therefore, facilitating RBAC within our Harbor environment. Pretty handy.

VMware Harbor Registry – Vulnerability Scanning

The ability to identify, evaluate and remediate vulnerabilities is a standard operation in modern software development and deployment. Thankfully, Harbor addresses this through integration with Clair – an open source project that addresses the identification, categorisation and analysis of vulnerabilities within containers. As a demonstration, we first need to push an image to Harbor:
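The push itself was captured as a screenshot; as a sketch, with harbor.example.com standing in for the Harbor FQDN and "my-project" for the project name, it looks like:

docker login harbor.example.com
docker tag my-image:v1 harbor.example.com/my-project/my-image:v1
docker push harbor.example.com/my-project/my-image:v1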

After initiating a scan, Harbor can inform us of what vulnerabilities exist within this container image

We can then explore more details about these vulnerabilities, including when they were fixed:

 

Conclusion

Harbor provides us with an enterprise-level container registry solution. This blog post has only scratched the surface, and with constant development being invested into the project, expect more features and improvements.

 

Kubernetes zero to hero – from single VM webserver to a scalable microservices infrastructure

Preamble

Having spent a number of months familiarising myself with container technology I inevitably got “stuck in” with Kubernetes. Containers are brilliant, but I personally don’t see the value of managing individual containers – it’s still the pets vs cattle mentality. Orchestrating containers with the likes of Kubernetes, however, makes a ton of sense and reinforces the microservices approach to building and deploying applications.

To test myself, I decided to document end-to-end the entire journey from taking a web server residing on a standalone virtual machine, containerise it, and deploying it via Kubernetes.

 

Disclaimer – I’m not a developer so the application example I’ll be using is relatively simple – but the fundamentals would be similar for other applications of increasing complexity.

Current and Intended State

I currently have a simple HTTP web server running on an Ubuntu VM on an ESXi host. For many reasons, this is a suboptimal design. The web server is facilitated by Apache2. As far as configurations go, it’s almost as basic as you can get, yet surprisingly (shockingly) this kind of setup is still widespread, even for front-facing, live websites.

 

At the end of this exercise, we will have redeployed this application in the following fashion:

 

 

So, to quote Khan from Star Trek Into Darkness: “Shall we begin?”


 

Install Docker

You can install Docker on a number of operating systems, however, I had a spare Ubuntu server box idling so I used this as a kind of “staging” box where I could tinker with creating the Docker components prior to installing Kubernetes, but you could do this on a Kubernetes worker node if desired.

Curl the get.docker.com script and pipe it to your shell:

 david@ubuntu_1804:~$ curl -sSL https://get.docker.com/ | sh 

Which should result in the following:

 

david@ubuntu_1804:~$ curl -sSL https://get.docker.com/ | sh
# Executing docker install script, commit: 36b78b2
+ sudo -E sh -c apt-get update -qq > /dev/null
+ sudo -E sh -c apt-get install -y -qq apt-transport-https ca-certificates curl > /dev/null
+ sudo -E sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - > /dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sudo -E sh -c echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic edge" > /etc/apt/sources.list.d/docker.list
+ [ ubuntu = debian ]
+ sudo -E sh -c apt-get update -qq > /dev/null
+ sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce > /dev/null
+ sudo -E sh -c docker version
Client:
 Version:      18.05.0-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   f150324
 Built:        Wed May  9 22:16:13 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   f150324
  Built:        Wed May  9 22:14:23 2018
  OS/Arch:      linux/amd64
  Experimental: false
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker david

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

I want the ability to issue Docker commands without having to switch to root, so I added my user to the “docker” group

sudo usermod -aG docker david

Well, that was easy.

Create container image

The first thing we need to do is copy over our application code to our host. For this example, I have a simple .html file acting as a landing page:

To keep things tidy, I suggest creating a directory on your Docker machine.

david@ubuntu_1804:~$ mkdir WebApp
david@ubuntu_1804:~$ cd WebApp/

..Copy over code (ie via SCP)..
david@ubuntu_1804:~/WebApp$ ls
index.html

To create a Docker image (and consequently a container) we must create what’s known as a Dockerfile. In short, a Dockerfile is a human-readable document that acts as a guide on how to create your image. Think of it like the instruction manual you get with flat-pack furniture: it provides the steps required to get to the final, constructed model. Hopefully without any spare screws.

So, with your text editor of choice, create a “Dockerfile” within the application directory containing your code. Below is the one for my application, which we will break down:
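The Dockerfile itself was originally shown as an image; reconstructed from the breakdown below and the build output further on, it looks approximately like this:

FROM ubuntu:latest
LABEL maintainer="david@virtualthoughts.co.uk"
RUN apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y apache2
COPY index.html /var/www/html/
WORKDIR /var/www/html
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
EXPOSE 80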

FROM – All Dockerfiles must begin with a “FROM” statement. This defines the base image for our application, which is pulled from Docker Hub (https://hub.docker.com/explore/). Official releases are available from a number of organisations, including Ubuntu, MySQL, Microsoft, NGINX, etc. These differ from your bog-standard OS install: they are much more lightweight, hardened and specifically engineered to cater for containerised workloads.

LABEL – This is metadata denoting the maintainer for this image.

RUN – When the docker image is instantiated, the following commands will be executed to compile this image. As you can see for this image I specify to update and upgrade the base OS as well as install Apache2.

COPY – This command copies over my application data (in this case, index.html) into /var/www/html, which is the root directory for the Apache2 service.

WORKDIR – Sets the working directory.

CMD – Runs a command within the container after creation. In this example, I’m specifying the Apache2 service to run in the foreground.

EXPOSE – Defines which port you want to open on containers from this image. As this is a webserver, I want TCP 80 open (TCP is the default). You can also add TCP 433, or whichever port your application requires.

The next step is to build our image using the “Dockerfile”. To do this, we can issue the following command. The “.” dictates that we will use the “Dockerfile” from the current directory.

docker build -t webapp:v0.1 .

This command names the constructed image “webapp”. The value after the colon determines the version of this image; in this case, I’m tagging this image as v0.1. Should I make a change, I can rebuild the image and increment the version number.

After issuing this command the terminal window will output the build process, similar to below. This includes Docker dragging down the base image and making modifications as per our Dockerfile:

david@ubuntu_1804:~/WebApp$ docker build -t webapp:v0.1 .
Sending build context to Docker daemon  3.072kB
Step 1/6 : FROM ubuntu:latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest
 ---> 113a43faa138
Step 2/6 : LABEL maintainer="david@virtualthoughts.co.uk"
 ---> Running in bdcf972318b5
Removing intermediate container bdcf972318b5
 ---> 3e6f9671a0af
Step 3/6 : RUN         apt-get update &&         apt-get -y upgrade &&         apt-get install -y apache2
 ---> Running in c26d1729a269

....

To validate the image has been created, we can issue a “docker image ls” command:

david@ubuntu_1804:~/WebApp$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
webapp              v0.1                5f42fa00f1f3        3 minutes ago       232MB
ubuntu              latest              113a43faa138        5 weeks ago         81.2MB

It’s a good idea at this stage to test our image, so let’s create a container from it:

david@ubuntu_1804:~/WebApp$ docker run -d -p 80:80 -t webapp:v0.1

This command runs a container in detached mode (ie we don’t shell into it) and maps port 80 from the host to the container, using the webapp:v0.1 image.

Therefore, curl’ing the localhost address should yield an HTTP response from our container:

david@ubuntu_1804:~/WebApp$ curl localhost
<h1 style="color: #5e9ca0;">Virtualthoughts.co.uk demo application</h1>
<h2 style="color: #2e6c80;">About this app:</h2>
<p>Imagination required. Consider the possibilities!</p>

Perfect. Our application is now containerised.

Install Kubernetes

As shown in the diagram at the beginning of this post, Kubernetes is composed of master and worker nodes in a production deployment. For my own learning and development I wanted to recreate this; however, there are ways you can deploy single-server solutions. For my test environment I created the following:

  • 3x VMs
    • Ubuntu 18.04
    • 2vCPU
    • 2GB RAM
    • 20GB Local Disk
    • Single IP – Attached to the management network

In an ideal world, you would flesh out the networking and storage requirements, but for internal testing, this was sufficient for me.

Once the VMs are installed, create the master node by installing Kubernetes:

Add the GPG key as Root

root@ubuntu_1804:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add
OK

Add repo for K8s

root@ubuntu_1804:~# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

Install Kubelet, Kubeadm, Kubectl and the Kubernetes CNI

apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Next, we can initialise our master, but before we do so, consideration needs to be made with regards to the networking model we’ll be using. For my example, I used flannel, which states that we need to define the CIDR address range for our containers during the initialisation process.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Which results in the following:

Follow the instructions to run the mkdir, cp and chown commands as a non-root user. At the bottom is a command to add worker nodes – keep this safe.
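For reference, the commands kubeadm prints at this point are the standard kubeconfig setup, run as your regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config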

Deploy the Flannel supporting constructs by executing the following:


kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The process for creating worker nodes is similar: install Docker (previously covered) and install the Kubernetes packages, minus running the kubeadm init command; replace it with the kubeadm join command provided earlier. Nodes can then be validated on the master node.

david@k8s-master-01:~$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
k8s-master-01   Ready     master    22h       v1.11.0
k8s-worker01    Ready     <none>    22h       v1.11.0
k8s-worker02    Ready     <none>    19h       v1.11.0

 

Deploy an application to the Kubernetes cluster

At this stage, we have a functioning, albeit empty K8s Cluster, but it’s ready to start hosting applications. For my application, I took a two-step process:

  • Configure the replication controller (how many containers should run for this app)
  • Configure the service object (how to access this cluster from the outside)

A ReplicationController in Kubernetes ensures that a specified number of container replicas are running at any one time. In this example, I have created an account on Docker Hub and uploaded my image to it, so my worker nodes can pull it. We define a replication controller in a YAML file:

david@k8s-master-01:~$ cat webapp-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 5
  selector:
    app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: virtualthoughts/webapp:latest
        ports:
        - containerPort: 80

Kind: The type of object this is

Spec: How this application should be deployed, including the image to be used.

Replicas: How many containers for this app should be running at any given time. Kubernetes constantly monitors the environment and if there’s a deviation between how many replicas should be running, and how many are currently running, it will reconcile automatically.

Label: Labels are very important. We label the containers in this replication controller so we can later tie them into a service object. This means as containers are created and destroyed, they are automatically included in the service object based on tags – After all, we don’t care much about containers as individual entities.

Next is to create the service object:

david@k8s-master-01:~$ cat webapp-svc.yml
kind: Service
apiVersion: v1
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Kind: The type of object this is.

Selector: Which containers should be included in this service?

Type: The type of service object. In a cloud environment, for example, we can change this to “LoadBalancer” to leverage cloud platform-specific load balancers from the likes of GCP and AWS. But for this example, I don’t have an external load balancer, so it’s not applicable.

What we’re accomplishing here are two fundamental operational aspects of our application:

  • We declare a minimum number of containers (pods) to be available at all times to facilitate our workload.
  • We’re establishing a relationship between containers (pods) with a service object via the use of tags. Therefore, any new containers that are created with the same tag will automatically be included in this service object. Think of the service object as a central point to access the application. We do not access the application by directly sending HTTP requests to containers.

To deploy these YAML files we issue a command via Kubectl:

david@k8s-master-01:~$ sudo kubectl create -f webapp-svc.yml
david@k8s-master-01:~$ sudo kubectl create -f webapp-rc.yml

We can also check the service:

We have labels to define the service and which containers to include, and we also have the current list of endpoints. Think of endpoints as load balancer members. Because of how NodePort works, we can hit any of our K8s worker nodes on port 30813 and reach our service, which will load balance across all endpoints.
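As a sketch of that test (30813 being the NodePort shown above, node names taken from the earlier kubectl get nodes output):

curl http://k8s-worker01:30813
curl http://k8s-worker02:30813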

I tested this on my two worker nodes (and I also added a bit of code to my index.html to return the hostname of the container servicing the HTTP request):

Conclusion

I had a lot of fun doing this, and the more I learn about containers and orchestration, the more I believe it’s the next fundamental change in the way we manage applications, as significant as the shift from physical to virtual machines.

Introducing VMware Kubernetes Engine

On the 26th of June 2018, VMware publicly announced VKE – VMware Kubernetes Engine – in beta (with GA planned for later this year). For me, the development of this solution flew under the radar, and its subsequent release came as quite a surprise, albeit quite a good one. So, where exactly does this solution fit with other Kubernetes-based solutions that currently exist?

VKE Overview

VKE sits within VMware’s portfolio of cloud-native solutions and is pitched as a fully managed, Kubernetes-as-a-service offering. Therefore, we have multiple ways we can consume Kubernetes resources from the VMware ecosystem, depicted in the diagram below.

 

Which prompts some customers to ask: why should I pick VKE over PKS, or vice versa? From a high level, some of the differences are listed below:

                            PKS                                   VKE
Management Responsibility   Customer/Enterprise                   Fully Managed
Consumption Model           Install, Configure, Manage, Consume   Consume
Residence                   Public and Private Cloud              Public Cloud only

 

What we can ascertain here is that VKE is designed to abstract away all the infrastructure components that are required for an operational Kubernetes deployment. As a reminder, PKS is composed of:

  • PKS Control Plane
  • Kubernetes core
  • BOSH
  • Harbor
  • GCP Service Broker
  • NSX-T

Which is quite a lot to manage and maintain. VKE, however, takes away the requirement for us, the customers, to manage such entities, and simply provides a Kubernetes endpoint for us to consume. Networking, storage and other aspects are abstracted. Of course, there are use cases for both VKE and PKS; VKE is not looking to be a replacement for PKS.

 

How does it work?

Under the hood, VKE is deployed on top of AWS (which recently announced EKS, Amazon’s own managed Kubernetes-as-a-service platform) but, in keeping with VMware’s ethos of “any app, any cloud”, this is likely to extend to other cloud platforms, notably Azure. In addition to simply leveraging the AWS backend, VKE adds a few new features:

  • VMware Smart Cluster – Essentially, this is a layer of resource management, designed to automate the allocation of compute resources for maximum efficiency and cost saving, as well as automatic remediation of nodes.
  • Full end-to-end encryption – Designed so that all data, be it in transit or at rest is encrypted by default.
  • Role based access control – Map enterprise users to clusters.
  • Integration with Amazon Services – EC2, Lambda, S3, ES, Machine learning, the list is extensive.
  • Integration with VMware cloud services – Log insight, Wavefront, Cost insight, etc.

 

Why shouldn’t I just use EKS if I wanted an AWS-backed Kubernetes instance?

To be honest, this is a good question. If you aren’t using any VMware services currently, then it makes sense to go with EKS. However, existing VMware customers can potentially gain a consistent operational experience across both on-premises and cloud-based resources using familiar tools. Plus, when VKE opens up to other cloud providers, this will add tremendous agility to the placement of Kubernetes workloads, facilitating a true multi-cloud experience.

This service is obviously very new, and no doubt will change a bit up to GA, but it’s definitely worth keeping an eye on, considering the growing adoption of Kubernetes in general.

Hybrid Cloud monitoring with VMware vRealize Operations

Applications and the underlying infrastructure, be it public, private or hybrid cloud are becoming increasingly sophisticated. Because of this, the way in which we monitor and observe these environments requires more sophisticated tools. In this blog post, we look at vRealize Operations and how it can be a facilitator of true hybrid cloud monitoring.

What is vRealize Operations?

vRealize Operations forms part of the overall vRealize suite from VMware – a collection of products targeted to accommodate cloud management and automation. In particular, vRealize Operations, as the name implies, primarily caters to operations management with full visibility across physical, virtual and cloud-based environments. The anatomy of vRealize Operations is depicted below

 

Integrated Cloud Operations Console – A single, unified frontend to access, modify and view all related vRealize Operations components.

Integrated Management Disciplines – vRealize Operations has built-in intelligence to assimilate, dissect and report back on a number of key operational metrics pertaining to performance, capacity, planning and more. Essentially, vRealize Operations “learns” about your environment and is able to make recommendations, predictions and much more based on your specific workloads.

Platform Services – vRealize Operations is able to perform a number of platform management disciplines based on your specific environment. As an example, vRealize Operations can automate the addition of virtual machine memory based on monitored load, therefore proactively addressing potential issues before they surface.

Extensibility – Available from the VMware Marketplace, Management Packs extend the functionality of vRealize Operations. Examples include:

  • Microsoft Azure Management Pack from Blue Medora
  • AWS Management Pack from VMware
  • Docker Management Pack from Blue Medora
  • Dell | EMC Management Pack from Blue Medora
  • vRealize Operations Compliance Pack for PCI from VMware

The examples above demonstrate vRealize Operation’s capability to monitor AWS and Azure environments in addition to on-premises workloads, making vRealize Operations a true platform for Hybrid Cloud monitoring and operations management

Practical Example – Cluster Monitoring / Troubleshooting

In this example, we leverage one of the vRealize Operation’s built-in dashboards to check the performance of a specific cluster. A dashboard in vRealize operations terminology is a collection of objects and their state, represented in a visual fashion.

 

One of the ways vRealize Operations understands the underlying environment is by establishing and mapping dependencies in a logical manner. In this example, we have a top-level datacentre object (ISH), of which the cluster and hosts are descendants. This dashboard identifies key aspects of this cluster on a single page:

  • Cluster activity / utilisation
  • Health state of associated objects
  • CPU contention information
  • Memory contention information
  • Disk latency information

Without vRealize Operations, it would be common for an administrator to try to collate these metrics manually, looking at individual performance charts, DRS scheduling information and vCenter health alarms. With vRealize Operations, this data is collected and centralised for easy, effortless consumption.

 

Practical Example – Workload Planning

In this example, we have an upcoming project that we want to forecast into our environment, particularly around disk space demand. We facilitate this by creating a “Project” in vRealize Operations, but before that, let’s look at the project UI in a bit more detail:

 

We can access this section by navigating to Environment > vSphere Object. At which point we can select the resource we’re interested in forecasting into. The chart in the middle projects the disk space demand for this specific vSphere object (a cluster, in this example). Note how we have an incline in disk space demand, which is typical of a production environment, however, we are within capacity for the time period specified (90 days).

To add a project, we click the green “plus” icon below the chart:

 

Next, we fill in details pertaining to the demand. In this case, I’m adding demand in the form of 5 virtual machines and I’m populating the specification of these VM’s based on an existing VM in my environment with an implementation date of June 19th.

 

 

If we add this project to the forecast chart, the chart changes to accommodate this change in our environment:

 

 

By adding this project we have obviously created more demand; consequently, the date at which our disk space resources will be exhausted has been brought forward.

By having this knowledge we can plan our capacity requirements ahead of time. In this example, I decide to add another project to add resources prior to the commissioning of the aforementioned VM’s:

Because we can combine projects into a single chart, we can see based on observed metrics what effect adding demand and capacity to our environment has.

This is one of a vast number of features in vRealize Operations. vRealize Operations Manager can be an incredibly useful tool for a number of reasons: its intelligent analytics, breadth of extensibility options and unified experience make it a compelling platform for modern cloud-based operations.

 
