Virtualisation, Storage and various other ramblings.

Rancher, vSphere Network Protocol Profiles and static IP addresses for k8s nodes

Note: The following currently works only with Rancher v2.3.6 and newer.

Note: As mentioned by Jonathan in the comments, disabling cloud-init’s initial network configuration is recommended. To do this, create the following file in your VM template:

/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

containing:

network: {config: disabled}
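If you are building the template by hand, a quick one-liner inside the template VM is enough to drop this file in place (a minimal sketch):

# Run inside the VM before it is converted into a template
echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg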

Network configuration for k8s nodes (or VMs in general) in an on-premises environment is usually applied in one of two ways – DHCP or static addressing. For some, DHCP is not a popular option, and static addresses can be time-consuming to manage, particularly when there’s no IPAM feature in Rancher. In this blog post I go through how to leverage vSphere Network Protocol Profiles in conjunction with Rancher and Cloud-Init to reliably and predictably apply static IP addresses to deployed nodes using a single node template.

Create the vSphere Network Protocol Profile

Navigate to Datacenter > Configure > Network Protocol Profiles and click “Add”.

Provide a name for the profile and assign it to one or more port groups.

Next, define the network parameters for this port group. The IP Pool and IP Pool Range are of particular importance here – we will use this pool of addresses to assign to our Rancher-deployed K8s nodes.

After adding any other network configuration items the profile will be created and associated with the previously specified port group.

Create the Rancher Node Template

In Rancher, navigate to User > Node Templates > vSphere and configure the parameters to match your environment.

In the cloud-init config, we add a script that extracts the OVF environment that vSphere will provide via the Network Protocol Profile and configures the underlying OS – in this case, Ubuntu 18.04 using Netplan:

Code snippet:

#cloud-config
write_files:
  - path: /root/test.sh
    content: |
        #!/bin/bash
        # Dump the OVF environment injected by vSphere (via the Network Protocol Profile)
        vmtoolsd --cmd 'info-get guestinfo.ovfEnv' > /tmp/ovfenv
        # Extract the relevant key/value pairs from the OVF environment
        IPAddress=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
        SubnetMask=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
        Gateway=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.route.0.gateway" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
        DNS=$(sed -n 's/.*Property oe:key="guestinfo.dns.servers" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)

        # Render a Netplan config (note: the prefix length is hardcoded to /24 here
        # rather than derived from $SubnetMask)
        cat > /etc/netplan/01-netcfg.yaml <<EOF
        network:
          version: 2
          renderer: networkd
          ethernets:
            ens160:
              addresses:
                - $IPAddress/24
              gateway4: $Gateway
              nameservers:
                addresses: [$DNS]
        EOF

        sudo netplan apply
runcmd:
  - bash /root/test.sh

What took me a little while to figure out is that this feature is essentially a glorified transport mechanism for a bunch of key/value pairs – how they are leveraged is down to external scripting/tooling. VMware Tools will not do this magic for us.
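If you want to see what vSphere actually passes to the guest, the OVF environment can be dumped on a deployed node via VMware Tools (the same call the script above uses):

# Dump the full OVF environment document supplied to the guest
vmtoolsd --cmd 'info-get guestinfo.ovfEnv'

# Or filter for just the key/value properties
vmtoolsd --cmd 'info-get guestinfo.ovfEnv' | grep 'oe:key'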

Next, we configure the vApp portion of the Node Template (this is how we consume the Network Protocol Profile):

The format is param:portgroup – ip:VDS-MGMT-DEFAULT will resolve to an IP address from the pool we defined earlier; vSphere will take an IP out of the pool and assign it to each VM associated with this template. This can be validated from the UI:

What we essentially do with the cloud-init script is extract this information and apply it as a network configuration to the VM.
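On a freshly deployed node, the end result can be sanity-checked from inside the guest – a quick sketch, assuming the same ens160 interface name used in the Netplan config above:

# Inspect the Netplan file generated by the cloud-init script
cat /etc/netplan/01-netcfg.yaml

# Confirm the assigned address and default route
ip -4 addr show ens160
ip route show default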

This could be seen as the best of both worlds – Leveraging vSphere Network Profiles for predictable IP assignment whilst avoiding DHCP and the need to implement many Node Templates in Rancher.

On-prem K8s clusters with Rancher, Terraform and Ubuntu

One of the attractive characteristics of Kubernetes is how it can run pretty much anywhere – in the cloud, in the data center, on the edge, on your local machine and much more. Leveraging existing investments in datacenter resources can be logical when deciding where to place new Kubernetes clusters, and this post goes into automating this with Rancher and Terraform.

Primer

For this exercise the following is leveraged:

  • Rancher 2.3
  • vSphere 6.7
  • Ubuntu 18.04 LTS

An Ubuntu VM will be created and configured into a template to spin up Kubernetes nodes.

Step 1 – Preparing a Ubuntu Server VM

In Rancher 2.3, Node Templates for vSphere can provision nodes either from the legacy boot2docker ISO or by cloning from an existing VM or VM template.

For the purposes of this demo, "Deploy from template" will be used, given its simplicity.

To create a new VM template, we must first create a VM. Right-click an appropriate object in vCenter and select "New Virtual Machine".

Select a source:

Give it a name:

Give it a home (compute):

Give it a home (storage):

Specify the VM hardware version:

Specify the guest OS:

Configure the VM properties and ensure the Ubuntu install CD is mounted:

After this, power up the VM and walk through the install steps, after which it can be turned into a template:

Rancher doesn’t have much in the way of requirements for the VM. For this install method a VM needs to have:

  • Cloud-Init (Installed by default on Ubuntu 18.04).
  • SSH connectivity (Rancher will provide its own SSH keys as part of the cloud-init bootstrap) – ensure an SSH server has been installed. A quick check for both is shown below.
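Both can be verified inside the template VM before converting it – a quick sketch using the package and service names Ubuntu 18.04 uses:

# Confirm cloud-init is present
cloud-init --version

# Confirm the OpenSSH server is installed and enabled
dpkg -l openssh-server
systemctl is-enabled ssh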

A Note on Cloud-Init

Vanilla Ubuntu Server installs use cloud-init as part of the general installation process itself. As a result, cloud-init will not re-run on first boot by default, because instance data from the install is still present. To get around this for templating purposes, the VM must be void of this existing cloud-init state prior to being turned into a template. To accomplish this, run the following before shutting down the VM and converting it into a template:

sudo rm -rf /var/lib/cloud/instances
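Newer cloud-init releases also provide a built-in way to reset this state; a minimal sketch of that alternative, run just before shutting the VM down:

# Alternative to removing /var/lib/cloud/instances by hand:
# reset cloud-init's state so it re-runs on the next boot
sudo cloud-init clean

# Shut down ready for conversion to a template
sudo shutdown -h now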

Constructing the Terraform Script

Now that the VM template has been created, it can be leveraged by a Terraform script:

Specify the provider (note: insecure = "true" is required when the Rancher server uses an untrusted certificate, such as a self-signed one):

provider "rancher2" {
  api_url    = "https://rancher.virtualthoughts.co.uk"
  access_key = # omitted - reference a Terraform variable/environment variable/secret/etc
  secret_key = # omitted - reference a Terraform variable/environment variable/secret/etc
  insecure = "true"
}

Specify the Cloud Credentials:

# Create a new rancher2 Cloud Credential
resource "rancher2_cloud_credential" "vsphere-terraform" {
  name = "vsphere-terraform"
  description = "Terraform Credentials"
  vsphere_credential_config {
    username = "Terraform@vsphere.local"
    password = # omitted - reference a Terraform variable/environment variable/secret/etc
    vcenter = "svr-vcs-01.virtualthoughts.co.uk"
  }
}

Specify the Node Template settings:

Note we can supply extra cloud-config options to further customise the VM, including adding additional SSH keys for users.

resource "rancher2_node_template" "vSphereTestTemplate" {
  name = "vSphereTestTemplate"
  description = "Created by Terraform"
  cloud_credential_id = rancher2_cloud_credential.vsphere-terraform.id
   vsphere_config {
   cfgparam = ["disk.enableUUID=TRUE"]
   clone_from = "/Homelab/vm/Ubuntu1804WithCloudInit"
   cloud_config = "#cloud-config\nusers:\n  - name: demo\n    ssh-authorized-keys:\n      - ssh-rsa [SomeKey]
   cpu_count = "4"
   creation_type = "template"
   disk_size = "20000"
   memory_size = "4096"
   datastore = "/Homelab/datastore/NFS-500"
   datacenter = "/Homelab"
   pool = "/Homelab/host/MGMT/Resources"
   network = ["/Homelab/network/VDS-MGMT-DEFAULT"]
   }
}

Specify the cluster settings:

resource "rancher2_cluster" "vsphere-test" {
  name = "vsphere-test"
  description = "Terraform created vSphere Cluster"
  rke_config {
    network {
      plugin = "canal"
    }
  }
}

Specify the Node Pool:

resource "rancher2_node_pool" "nodepool" {

  cluster_id =  rancher2_cluster.vsphere-test.id
  name = "all-in-one"
  hostname_prefix =  "vsphere-cluster-0"
  node_template_id = rancher2_node_template.vSphereTestTemplate.id
  quantity = 1
  control_plane = true
  etcd = true
  worker = true
}

After this, the script can be executed.
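The standard Terraform workflow applies, run from the directory containing the above configuration:

# Download the rancher2 provider and initialise the working directory
terraform init

# Review the planned changes, then create the node template, cluster and node pool
terraform plan
terraform apply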

What’s going on?

From a high level the following activities are being executed:

  1. Rancher requests VMs from vSphere using the supplied Cloud Credentials.
  2. vSphere clones the VM template with the specified configuration parameters.
  3. An ISO image is mounted to the VM, which contains certificates and configuration generated by Rancher in the cloud-init format.
  4. Cloud-Init on startup reads this ISO image and applies the configuration.
  5. Rancher builds the Kubernetes cluster by installing Docker and pulling down the images.

After which, a shiny new cluster will be created!
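Once provisioning completes, the nodes can be checked from the Rancher UI, or with kubectl after downloading the cluster’s kubeconfig from Rancher (the kubeconfig path below is just an example):

# Verify the nodes have registered and are Ready
kubectl --kubeconfig ./vsphere-test.yaml get nodes -o wide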

Pi-Hole and K8s v2 – Now with DNS over HTTPS

In a previous post, I went through the process of configuring Pi-Hole within a Kubernetes cluster for the purpose of facilitating network-wide ad blocking. Although helpful, I wanted to augment this with DNS over HTTPS.

Complete manifests can be found here. Shout out to visibilityspots for the cloudflared image on Docker Hub.

Why?

DNS, as a protocol, is insecure and prone to manipulation and man-in-the-middle attacks. DNS over HTTPS helps address this by encrypting the data between the DNS over HTTPS client and the DNS over HTTPS-based DNS resolver, one of which is provided by Cloudflare.

Thankfully, Pi-Hole has some documentation on how to implement this for traditional Pi-Hole setups, but Kubernetes-based deployments require a different approach.

How?

The DNS over HTTPS client is facilitated by Cloudflare’s cloudflared daemon. In a traditional Pi-Hole setup this simply runs alongside Pi-Hole itself, but in a containerised environment there are predominantly two ways to address this:

As a separate microservice

This approach leverages two different deployments: one for the Pi-Hole service, and one for cloudflared. While workable, I felt that this was a less optimal approach.

Service-To-Service communication between Pi-Hole and Cloudflared

As another container within the Pi-Hole pod

Given the tight relationship between these containers, and the fact their respective services run on different ports, this seems like a more efficient approach.

Intra-pod communication between Pi-Hole and cloudflared

As the containers share the same network namespace, one container can access the other either over the pod’s network interface or simply via the localhost address. For Pi-Hole, we can facilitate this via a ConfigMap change:

apiVersion: v1
kind: ConfigMap
metadata:
  name: pihole-env
  namespace: pihole-system
data:
  TZ: UTC
  DNS1: 127.0.0.1#5054
  DNS2: 127.0.0.1#5054
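Once deployed, it is straightforward to confirm that cloudflared is answering on port 5054 over localhost – a sketch, where the deployment/container names and the availability of dig inside the image are assumptions:

# Query the cloudflared listener directly from the Pi-Hole container
kubectl exec -n pihole-system deploy/pihole -c pihole -- \
  dig @127.0.0.1 -p 5054 virtualthoughts.co.uk +short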

Testing

Once the respective manifest files have been deployed and clients are pointing at Pi-Hole as their DNS resolver, it can be tested by accessing https://1.1.1.1/help. As per the example below, DNS over HTTPS has been identified.
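A quick client-side check is to resolve a name directly against the Pi-Hole service address (10.0.0.10 below is a placeholder for your Pi-Hole IP):

# Resolve via Pi-Hole, which forwards upstream over DNS over HTTPS
# (replace 10.0.0.10 with your Pi-Hole service IP)
nslookup virtualthoughts.co.uk 10.0.0.10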
