Virtual Thoughts

Virtualisation, Storage and various other ramblings.


K3s, Rancher and Pulumi

TL;DR: the repo can be found here (be warned: I’m, at best, a hobbyist programmer and certainly not a software engineer in my day job).

I’ve recently been getting acquainted with Pulumi as an alternative to Terraform for managing my infrastructure. I decided to create a repo that stands up Rancher on a new K3s cluster in my vSphere homelab, all managed by Pulumi, by performing the following steps:

  • Provision three nodes from a VM template.
  • Use cloud-init as a bootstrapping utility:
    • Install K3s on the first node, which is elected to initialise the cluster.
    • Leverage K3s’s Auto-Deploying Manifests feature to install Cert-Manager, Rancher and MetalLB.
  • Join two additional nodes to the cluster to form an HA, embedded etcd cluster.

The Ingress Controller is exposed via a LoadBalancer service type, leveraging MetalLB.
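For illustration, here is a heavily abridged sketch of what cloning one of these nodes with the Pulumi vSphere provider’s Go SDK looks like. The resource names, vSphere IDs, sizing and SDK major version are placeholders of mine, not values lifted from the repo:

package main

import (
    "encoding/base64"

    "github.com/pulumi/pulumi-vsphere/sdk/v4/go/vsphere" // SDK major version may differ
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        // Placeholder cloud-init documents; the real ones are rendered from
        // metadata.yaml / userdata.yaml (see the cloud-init section below).
        metadata := base64.StdEncoding.EncodeToString([]byte("local-hostname: k3s-node-01\n"))
        userdata := base64.StdEncoding.EncodeToString([]byte("#cloud-config\n"))

        // Clone one node from an existing VM template; the cloud-init payloads are
        // handed to the guest via guestinfo keys in extra_config.
        _, err := vsphere.NewVirtualMachine(ctx, "k3s-node-01", &vsphere.VirtualMachineArgs{
            ResourcePoolId: pulumi.String("resource-pool-id"), // placeholder vSphere IDs
            DatastoreId:    pulumi.String("datastore-id"),
            NumCpus:        pulumi.Int(2),
            Memory:         pulumi.Int(4096),
            GuestId:        pulumi.String("ubuntu64Guest"),
            NetworkInterfaces: vsphere.VirtualMachineNetworkInterfaceArray{
                &vsphere.VirtualMachineNetworkInterfaceArgs{NetworkId: pulumi.String("network-id")},
            },
            Disks: vsphere.VirtualMachineDiskArray{
                &vsphere.VirtualMachineDiskArgs{Label: pulumi.String("disk0"), Size: pulumi.Int(40)},
            },
            Clone: &vsphere.VirtualMachineCloneArgs{
                TemplateUuid: pulumi.String("template-uuid"),
            },
            ExtraConfig: pulumi.StringMap{
                "guestinfo.metadata":          pulumi.String(metadata),
                "guestinfo.metadata.encoding": pulumi.String("base64"),
                "guestinfo.userdata":          pulumi.String(userdata),
                "guestinfo.userdata.encoding": pulumi.String("base64"),
            },
        })
        return err
    })
}

The guestinfo.* keys are what the cloud-init GuestInfo datasource reads from inside the guest, which is why the documents are passed base64 encoded alongside an explicit encoding key.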

After completion, Pulumi will output the IP address (needed to create a DNS record) and the specified URL:

Outputs:
    Rancher IP (Set DNS): "172.16.10.167"
    Rancher url:          "rancher.virtualthoughts.co.uk"

Why cloud-init?

For this example, I wanted a zero-touch deployment model for the VMs themselves – i.e. no SSHing directly to the nodes to remotely execute commands. cloud-init addresses this by providing a way to seed an instance with configuration data. This Pulumi script leverages it in three ways (a rough sketch follows the list):

  1. To set the instance (and therefore host) name as part of metadata.yaml (which is subject to string replacement).
  2. To execute a command on boot that initialises the K3s cluster (or joins an existing cluster for subsequent nodes) as part of userdata.yaml.
  3. To install Cert-Manager, Rancher and MetalLB, also as part of userdata.yaml.
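The hostname templating and the encoding of both documents boils down to something like the following (the file names and the $HOSTNAME placeholder token are my assumptions, not necessarily what the repo uses):

package main

import (
    "encoding/base64"
    "fmt"
    "os"
    "strings"
)

func main() {
    // Read the cloud-init templates that accompany the Pulumi program.
    metadata, err := os.ReadFile("metadata.yaml")
    if err != nil {
        panic(err)
    }
    userdata, err := os.ReadFile("userdata.yaml")
    if err != nil {
        panic(err)
    }

    // 1. Set the instance/host name by replacing a placeholder token in metadata.yaml.
    hostname := "k3s-node-01"
    renderedMetadata := strings.ReplaceAll(string(metadata), "$HOSTNAME", hostname)

    // 2./3. userdata.yaml carries the boot-time commands that install K3s and drop the
    // Cert-Manager/Rancher/MetalLB manifests into the auto-deploy directory.
    // Both documents are base64 encoded before being passed to vSphere as guestinfo keys.
    encodedMetadata := base64.StdEncoding.EncodeToString([]byte(renderedMetadata))
    encodedUserdata := base64.StdEncoding.EncodeToString(userdata)

    fmt.Println(encodedMetadata, encodedUserdata)
}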

Reflecting on Using Pulumi

Some of my observations thus far:

  • I really, really like having “proper” condition handling and looping. I never liked repurposing count in Terraform as a workaround for conditional logic (a small example follows this list).
  • Being able to leverage the standard library of an everyday programming language makes it hugely flexible. An example of this was taking the cloud-init userdata and metadata and encoding them in base64 using the encoding/base64 package.
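For example, deciding whether a node initialises the cluster or joins an existing one is just an ordinary loop and an if statement. This is a contrived sketch rather than the repo’s actual code – the node names, token and address are placeholders:

package main

import "fmt"

func main() {
    nodes := []string{"k3s-node-01", "k3s-node-02", "k3s-node-03"}
    serverIP := "172.16.10.10"  // placeholder address of the first node
    token := "my-cluster-token" // placeholder join token

    for i, node := range nodes {
        var installCmd string
        if i == 0 {
            // First node initialises the embedded etcd cluster.
            installCmd = "curl -sfL https://get.k3s.io | sh -s - server --cluster-init"
        } else {
            // Remaining nodes join the existing cluster.
            installCmd = fmt.Sprintf("curl -sfL https://get.k3s.io | K3S_TOKEN=%s sh -s - server --server https://%s:6443", token, serverIP)
        }
        fmt.Printf("%s: %s\n", node, installCmd)
    }
}

In Terraform, expressing “the first node is special” tends to mean count gymnastics or separate resources; here it is a single readable branch.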

Automated deployment of K3s and Rancher on vSphere with Terraform

Previously, my local Rancher installs were based on RKE. However, since K3S is now a supported distribution, I decided to rebuild my environment leveraging it. Additionally, it was a good opportunity to automate the process with Terraform.

TL;DR

https://github.com/David-VTUK/Rancher-K3s-vSphere contains the Terraform code required to do this.

Quick note on K3S with Embedded DB

This installation method is currently experimental. Do not leverage it in production (yet). Towards the end of August 2020, we (Rancher) plan to replace the current embedded DB with embedded etcd, as per the roadmap. I’m a fan of simplicity, therefore when v1.19 does come out, I plan to simply tear down and rebuild my cluster using this Terraform code. However, one could equally modify it to leverage an external DB for a more production-ready setup.

Resources Created

The aforementioned Terraform code will create:

  • A single VM with NGINX installed, acting as a load balancer forwarding TCP 80/443/6443 to the K3s nodes.
  • Three VMs which form the K3s cluster with an embedded DB, the first of which is used to initialise the cluster.
  • Once the cluster is created, Cert-Manager and Rancher are installed and probed for readiness.
(Architecture diagram)
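On the readiness point above: conceptually this amounts to polling the endpoints until they respond. The repo does this from within Terraform; the Go sketch below is purely an illustration of the idea (the URL is a placeholder, and Rancher’s /ping health endpoint is assumed):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Rancher serves a self-signed certificate until Cert-Manager has issued one,
    // so certificate verification is skipped for the readiness check.
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }

    url := "https://rancher.example.com/ping" // placeholder hostname
    for attempt := 1; attempt <= 30; attempt++ {
        resp, err := client.Get(url)
        if err == nil && resp.StatusCode == http.StatusOK {
            resp.Body.Close()
            fmt.Println("Rancher is ready")
            return
        }
        if resp != nil {
            resp.Body.Close()
        }
        time.Sleep(10 * time.Second)
    }
    fmt.Println("Rancher did not become ready in time")
}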

Prerequisites

  • Terraform version 0.13
  • Prior to running this script, a DNS record needs to be created to point at the Loadbalancer IP address, defined in the variable lb_address.
  • The VM template used must have the Cloud-Init Datasource for VMware GuestInfo project installed, which facilitates pulling meta, user, and vendor data from VMware vSphere’s GuestInfo interface. This can be achieved with:
curl -sSL https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo/master/install.sh | sh -

Or use the following Packer Template:

https://github.com/David-VTUK/Rancher-Packer/tree/master/vSphere/ubuntu_2004_cloud_init_guestinfo

Acquire Kubeconfig

  • SSH to one of the K3s nodes
  • Grab /etc/rancher/k3s/k3s.yaml
  • Replace server: https://127.0.0.1:6443 with the IP address defined in lb_address
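That last substitution can of course be scripted. A trivial, purely illustrative Go helper (the file name and address are placeholders) might look like:

package main

import (
    "os"
    "strings"
)

func main() {
    // Read the kubeconfig copied from /etc/rancher/k3s/k3s.yaml on one of the nodes.
    kubeconfig, err := os.ReadFile("k3s.yaml")
    if err != nil {
        panic(err)
    }

    // Point the client at the NGINX load balancer instead of the node-local address.
    lbAddress := "172.16.10.100" // placeholder: the value of lb_address
    updated := strings.ReplaceAll(string(kubeconfig), "https://127.0.0.1:6443", "https://"+lbAddress+":6443")

    if err := os.WriteFile("k3s.yaml", []byte(updated), 0600); err != nil {
        panic(err)
    }
}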

End to end automation with CircleCI and ArgoCD Part 3 – ArgoCD

ArgoCD is a continuous delivery tool designed for Kubernetes. It will be used to take the YAML manifest generated by the CI process and apply it to two clusters.

This section implements the following part of the overall CI/CD pipeline:

  • ArgoCD will monitor for changes in the Webapp-CD GitHub repo.
  • All changes are automatically applied to the Test cluster.
  • All changes are staged for manual approval before being applied to the Prod cluster.

Install ArgoCD

ArgoCD has extensive installation documentation here. For ease, a community Helm chart has also been created.

Add Clusters to ArgoCD

I’m using Rancher-deployed clusters, which require a bit of tweaking on the ArgoCD side. The following Gist outlines this well: https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80. Other clusters can be added with argocd cluster add, which leverages the current kubeconfig file. For this environment, both the Prod and Test clusters were added.

If done correctly, the clusters should be visible in Settings > Clusters in the ArgoCD web UI and via argocd cluster list in the CLI.

Add Repo to ArgoCD

In ArgoCD, navigate to Settings > Repositories.

You can connect to a repo in one of two ways – SSH or HTTPS. Fill in the name, URL and respective authentication parameters.

All being well, the repository will be displayed.

Create ArgoCD Application

From the ArgoCD UI, select either New App or Create Application

Application Settings – General

Application Name: web-test
Project: Default (For production environments a separate project would be created with specific resource requirements, but this will suffice to get started with.)
Sync Policy: Automatic

Application Settings – Source

Repository URL: Select from dropdown
Revision: HEAD
Path: .

Application Settings – Destination

Cluster: Test
Namespace: Default (This is used when the application does not explicitly define a namespace to reside in).

After which, our application has been added.

Selecting the app will display its constituent parts.

Repeat the create application process but substitute the following values:

Application Name: web-prod
Sync Policy: Manual
Cluster: Prod

After which, both the prod and test applications will be shown.

web-prod is noted as being OutOfSync – this is expected. In this environment, I don’t want changes to automatically propagate to Prod, only to Test. Clicking “Sync” will rectify this.

Testing

Now everything is in place, the following should occur when a new commit is made to the source code:

  • Automatically invoke the CircleCI pipeline to Test, Build and Publish the application as a container to Docker Hub
  • Construct a YAML file using the aforementioned image
  • ArgoCD detects a change in the manifest and:
    • Applies it to Test immediately
    • Reports Prod is now out of sync.

As expected, changes to the source code have propagated into the Kubernetes clusters the application resides on.

Part 1 – Overview

Part 2 – CircleCI Configuration
