Virtualisation, Storage and various other ramblings.


Rancher, vSphere Network Protocol Profiles and static IP addresses for k8s nodes [Updated 2023]

Edit: This post has been updated to reflect changes in newer versions of Rancher.

Note: As mentioned by Jonathan in the comments, disabling cloud-init’s initial network configuration is recommended. To do this, create the following file in your VM template:

/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

containing:

network: {config: disabled}
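One way to drop this file into place when preparing the template (a sketch, assuming you are building the template interactively as a user with sudo rights):

sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network: {config: disabled}
EOF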

Network configuration for k8s nodes (or VMs in general) in on-premises environments is usually applied in one of two ways – DHCP or static addressing. For some, DHCP is not a popular option, and static addresses can be time-consuming to manage, particularly when there’s no IPAM feature in Rancher. In this blog post I go through how to leverage vSphere Network Protocol Profiles in conjunction with Rancher and cloud-init to reliably and predictably apply static IP addresses to deployed nodes.

Create the vSphere Network Protocol Profile

Navigate to Datacenter > Configure > Network Protocol Profiles and click “Add”.

Provide a name for the profile and assign it to one or more port groups.

Next, define the network parameters for this port group. The IP Pool and IP Pool Range are of particular importance here – we will use this pool of addresses to assign to our Rancher-deployed K8s nodes.

After adding any other network configuration items the profile will be created and associated with the previously specified port group.

Create a cluster

In Rancher, navigate to Cluster Management > Create > vSphere

In the cloud-init config, we add a script to extract the OVF environment that vSphere provides via the Network Protocol Profile and configure the underlying OS accordingly. In this case, Ubuntu 22.04 using Netplan:

Code snippet:

#cloud-config
write_files:
  - path: /root/test.sh
    content: |
      #!/bin/bash
      # Dump the OVF environment (populated by vSphere from the Network Protocol Profile) to a file
      vmtoolsd --cmd 'info-get guestinfo.ovfEnv' > /tmp/ovfenv
      # Extract the individual key/value pairs from the OVF XML
      IPAddress=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      SubnetMask=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      Gateway=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.route.0.gateway" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      DNS=$(sed -n 's/.*Property oe:key="guestinfo.dns.servers" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)

      # Render a Netplan configuration from the extracted values (note: the /24 prefix below is hard-coded; $SubnetMask is not converted)
      cat > /etc/netplan/01-netcfg.yaml <<EOF
      network:
        version: 2
        renderer: networkd
        ethernets:
          ens192:
            addresses:
              - $IPAddress/24
            gateway4: $Gateway
            nameservers:
              addresses: [$DNS]
      EOF

      sudo netplan apply
runcmd:
  # Apply the network configuration derived from the OVF environment
  - bash /root/test.sh
bootcmd:
  # Grow the root partition and LVM volume to consume the full disk size defined for the node
  - growpart /dev/sda 3
  - pvresize /dev/sda3
  - lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
  - resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

What took me a little while to figure out is that this feature is essentially a glorified transport mechanism for a bunch of key/value pairs – how they are leveraged is down to external scripting/tooling. VMware Tools will not do this magic for us.
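For illustration, the OVF environment dumped to /tmp/ovfenv is an XML document containing these key/value pairs. The snippet below is a truncated sketch with placeholder values, showing the properties the sed expressions above parse:

<?xml version="1.0" encoding="UTF-8"?>
<Environment xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
  <PropertySection>
    <Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="172.16.10.50"/>
    <Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="255.255.255.0"/>
    <Property oe:key="guestinfo.interface.0.route.0.gateway" oe:value="172.16.10.1"/>
    <Property oe:key="guestinfo.dns.servers" oe:value="172.16.10.10"/>
  </PropertySection>
</Environment>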

Next, we configure the vApp portion of the cluster (this is how we consume the Network Protocol Profile):

The format is param:portgroup. For example, ip:VDS-MGMT-DEFAULT will be populated with an IP address from the pool we defined earlier – vSphere takes an IP out of the pool and assigns it to each VM associated with this template. This can be validated from the UI:

What the cloud-init script essentially does is extract these values and apply them as network configuration inside the VM.

This could be seen as the best of both worlds – leveraging vSphere Network Protocol Profiles for predictable IP assignment whilst avoiding DHCP and the need to maintain many node templates in Rancher.

Container Packet Inspection with NSX-T

For troubleshooting (or just being a bit nosey) we have a number of tools that allow us to inspect the traffic between two endpoints. When it comes to containers, however, our approach has to be adjusted slightly. When using NSX-T as a CNI, we have some of these tools available to us out of the box.

App Overview

 

For this example, I’ve deployed a standard WordPress deployment consisting of a frontend (web) pod and a backend (DB) pod. The objective is to identify and capture the traffic between these pods.
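Before capturing anything, it helps to note the pod IP addresses and the worker node they are scheduled on, as we’ll need both later. A quick way to do this (the namespace and deployment names are assumptions for this example):

# Show the WordPress pods together with their IPs and the worker node hosting them
kubectl get pods -n wordpress -o wide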

 

Configure NSX-T Port Mirroring

Navigate to the “Port Mirroring Session” section in NSX-T via “Advanced Networking & Security” and click “Add”.

Define a Local SPAN session:

Define the transport node (the source ESXi host).

Select the physical NIC(s) participating in the N-VDS configuration that we want to capture packets from.

Important: Enable “Encapsulated Packet” (disabled by default) so we can inspect the underlying Geneve overlay packets. However, the NIC we mirror to must support the increased overall packet size, so adjust the MTU of the NIC on the Wireshark host accordingly.
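For example, on a Linux capture host this could be done with iproute2 (the interface name ens224 is just an assumption for this sketch):

# Raise the MTU on the capture NIC so mirrored, Geneve-encapsulated frames arrive intact
sudo ip link set dev ens224 mtu 1600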

 

Select the source vNIC to capture from – this is the NIC of the Kubernetes worker node VM.

 

 

Finally, select the destination for traffic mirroring. For this example, it’s a NIC on a VM with Wireshark installed.

Traffic Inspection

With all the hard work out of the way, we can simply provide a filter in Wireshark to show only packets originating from the WordPress web pod and terminating at the WordPress DB pod:
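As a sketch, a display filter along these lines does the job – the pod IPs are placeholders for whatever your environment assigns, and 3306 is the standard MySQL port:

ip.src == 172.16.1.10 && ip.dst == 172.16.1.11 && tcp.port == 3306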

Notes:

  • The overall frame size of 1600 bytes further validates the requirement for the recipient of the mirrored traffic to have its interface MTU configured accordingly.
  • Because we previously enabled mirroring of encapsulated packets, we can see the overlay information.
  • We can see the entire packet structure consisting of:
    • Outer Ethernet (NSX-T Transport Node MAC)
    • Outer IP (NSX-T VTEP IP)
    • Geneve UDP
    • Geneve Data
    • Inner Ethernet (Pod Ethernet addresses)
    • Inner IP (Pod IP addresses)
    • Inner TCP (MySQL connection)
    • Inner Payload (MySQL query) – highlighted

 

PKS, Harbor and the importance of container registries

What are container registries and why do we need them?

A lot of the time, particularly when individuals and organisations are evaluating, testing and experimenting with containers, they will use public container registries such as Docker Hub. These public registries provide an easy-to-use, simple way to access images. As developers, application owners, system admins etc. gain familiarity and experience, additional operational considerations need to be explored, such as:

  • Organisation – How can we organise container images in a meaningful way, such as by environment state (Prod/Dev/Test) and application type?
  • RBAC – How can we implement role-based access control to a container registry?
  • Vulnerability Scanning – How can we scan container images for known vulnerabilities?
  • Efficiency – How can we centrally manage all our container images and deploy an application from them?
  • Security – Some images need to be kept under lock and key, rather than using an external service like Docker Hub.

Introducing VMware Harbor Registry

VMware Harbor Registry has been designed to address these considerations as an enterprise-class container registry solution with integration into PKS. In this post, we’ll have a quick primer on getting up and running with Harbor in PKS and explore some of its features. To begin, we need to download PKS Harbor from the Pivotal site and import it into Ops Manager.

After which, the tile will be added (when doing this for the first time it will have an orange bar at the bottom – click the tile to configure it).

The following need to be defined with parameters applicable to your environment:

  • Availability Zone and Networks – This is where the Harbor VM will reside, and the respective configuration will be dependent on your setup.
  • General – Hostname and IP address settings
  • Certificate – Generate a self-signed certificate, or BYOC (bring your own certificate)
  • Credentials – Define the local admin password
  • Authentication – Choose between
    • Internal
    • LDAP
    • UAA in PKS
    • UAA in PAS
  • Container Registry store – Choose where to store container images pushed to Harbor
    • Local file system
    • NFS Server
    • S3 Bucket
    • Google Cloud Storage
  • Clair Proxy Settings
  • Notary settings
  • Resource Config

VMware Harbor Registry – Organisation

Harbor employs the concept of “projects”. Projects are a way of collecting images for a specific application or service. When images are pushed to Harbor, they reside within a project:
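As an illustration, image references in Harbor follow the registry/project/repository:tag convention, so an image pushed to a hypothetical “wordpress-prod” project on a registry at harbor.example.com would be referenced as:

harbor.example.com/wordpress-prod/wordpress:5.2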

 

Projects can either be private or public and can be configured during, or after, project creation:

A project comprises a number of components:

 

VMware Harbor Registry – RBAC

In Harbor, we have three role types we can assign to projects:

 


Image source: https://github.com/goharbor/harbor/blob/master/docs/user_guide.md#managing-projects

  • Guest – Read-only access, can pull images
  • Developer – Read/write access, can pull and push images
  • Admin – Read/write access, as well as project-level activities, such as modifying parameters and permissions.

As a practical example, AD groups can be created to facilitate these roles:

These AD groups can then be mapped to the respective permissions within the project:

 

This facilitates RBAC within our Harbor environment. Pretty handy.

VMware Harbor Registry – Vulnerability Scanning

The ability to identify, evaluate and remediate vulnerabilities is a standard operation in modern software development and deployment. Thankfully, Harbor addresses this through integration with Clair – an open-source project for the identification, categorisation and analysis of vulnerabilities within containers. As a demonstration, we first need to push an image to Harbor:
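A minimal sketch of doing so with the Docker CLI, assuming a Harbor instance at harbor.example.com and a project named “library”:

# Authenticate against the Harbor registry
docker login harbor.example.com

# Tag a local image into the target project and push it
docker tag wordpress:latest harbor.example.com/library/wordpress:latest
docker push harbor.example.com/library/wordpress:latest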

After initiating a scan, Harbor can inform us of the vulnerabilities that exist within this container image:

We can then explore more details about these vulnerabilities, including when they were fixed:

 

Conclusion

Harbor provides us with an enterprise-level container registry solution. This blog post has only scratched the surface, and with constant development being invested in the project, expect more features and improvements.

 
