Virtualisation, Storage and various other ramblings.

Category: Kubernetes

ArgoCD – Ordering with ApplicationSets

In a previous post, I alluded to the use of ApplicationSets for my homelab deployments. I continue to leverage them, to the point I now have quite a number of applications managed by one:

  • ArgoCD (Itself)
  • Cert-Manager
  • Cilium
  • External-snapshotter
  • Gateway API CRDs
  • Gateway API gateways
  • Homepage
  • Kanboard
  • Kubevirt
  • Longhorn
  • OpenTelemetry Operator
  • Sealed Secrets
  • System Upgrade Controller

The problem was that there was no ordering, dependency management, or concurrency limiting: applications would simply update as and when changes were pushed.

This caused a number of issues, namely:

  1. What if ArgoCD updates while another application is updating, or vice versa?
  2. What if Cilium updates while another application is updating, or vice versa?

This could cause some negative outcomes. To mitigate this, I thought about how I could automatically group certain applications together.

Re-arranging the Git Repo

I decided to group applications by directory, reflecting levels of importance:

├── argocd-apps
│   ├── 00-core-infrastructure
│   ├── 01-core-services
│   ├── 02-platform-services
│   └── 03-user-workloads

The directory prefix represents the order, which I reflect in the ApplicationSet:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addon-applications
  namespace: argocd
spec:
  strategy:
    type: RollingSync
    rollingSync:
      steps:
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["00"]
          maxUpdate: 1
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["01"]
          maxUpdate: 1
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["02"]
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["03"]
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: 'https://github.com/David-VTUK/turing-pi-automation.git'
      revision: HEAD
      directories:
        - path: 'argocd-apps/00-core-infrastructure/*'
        - path: 'argocd-apps/01-core-services/*'
        - path: 'argocd-apps/02-platform-services/*'
        - path: 'argocd-apps/03-user-workloads/*'
  template:
    metadata:
      name: '{{ .path.basename }}'
      labels:
        orderLabel: '{{ index (splitList "-" (index (splitList "/" .path.path) 1)) 0 }}'
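The orderLabel expression can be traced by hand: it splits the generated path on "/", takes the second segment (the tier directory), splits that on "-", and keeps the leading prefix. The same logic, sketched in shell (the app path is hypothetical):

```shell
# Mirror the Go-template expression:
#   index (splitList "-" (index (splitList "/" .path.path) 1)) 0
path="argocd-apps/00-core-infrastructure/cilium"

tier=$(echo "$path" | cut -d/ -f2)   # second path segment: 00-core-infrastructure
order=$(echo "$tier" | cut -d- -f1)  # leading prefix: 00
echo "$order"
```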

The end result:

  • Applications with the orderLabel 00 (folder prefix) apply first
    • Only one application can update at a time, preventing conflicting apps (for example, Cilium and ArgoCD) from trying to update simultaneously
  • Applications with the orderLabel 01 then apply
    • Only one application can update at a time
  • Applications with the orderLabel 02 then apply
    • No concurrency limit
  • Applications with the orderLabel 03 then apply
    • No concurrency limit
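For illustration, an app directory at argocd-apps/00-core-infrastructure/cilium would render an Application whose metadata carries the computed label. A sketch of just the metadata (the template's spec section is not shown in the ApplicationSet above):

```yaml
# Sketch of the rendered Application for a hypothetical app at
# argocd-apps/00-core-infrastructure/cilium
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cilium        # {{ .path.basename }}
  namespace: argocd
  labels:
    orderLabel: "00"  # derived from the directory prefix
```

The RollingSync strategy's first step matches this label, so this Application syncs in the first wave.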

Whenever I add a new application, I have to decide where it sits in my hierarchy:

  1. Core Infrastructure (00)
    • Examples: Cilium, Longhorn
  2. Core Services (01)
    • Examples: ArgoCD, Cert-Manager
  3. Platform Services (02)
    • Examples: OpenTelemetry Operator, Sealed Secrets
  4. User Workloads (03)
    • Examples: Homepage, Kanboard
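In practice, slotting a new application in is just a matter of creating a directory under the right tier (the app name here is hypothetical); the git generator picks it up on the next refresh:

```shell
# Hypothetical new platform-level app: create its directory in the repo,
# add its manifests/chart there, then commit and push.
mkdir -p argocd-apps/02-platform-services/grafana
ls -d argocd-apps/02-platform-services/grafana
```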

This also helps on the occasion I need to tear down and rebuild my homelab – dependencies are installed in the correct order.

KubeVirt on ARM64 – CDI Workaround

According to the KubeVirt documentation, CDI is not currently supported on ARM64, which is the architecture my Turing RK1 nodes use.

As a workaround, I experimented with writing an image directly to a PVC, which can then be cloned/mounted to a KubeVirt VM. This example downloads a raw disk image and dd's it to the PVC's block device:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-workstation-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  volumeMode: Block
---
apiVersion: batch/v1
kind: Job
metadata:
  name: upload-fedora-workstation-job
spec:
  template:
    spec:
      containers:
      - name: writer
        image: fedora:latest
        command: ["/bin/bash", "-c"]
        args:
          - |
            set -e
            echo "[1/3] Installing tools..."
            dnf install -y curl xz
            echo "[2/3] Downloading and decompressing Fedora Workstation image..."
            curl -L https://download.fedoraproject.org/pub/fedora/linux/releases/41/Workstation/aarch64/images/Fedora-Workstation-41-1.4.aarch64.raw.xz | xz -d > /tmp/disk.raw
            echo "[3/3] Writing image to PVC block device..."
            dd if=/tmp/disk.raw of=/dev/vda bs=4M status=progress conv=fsync
            echo "Done writing Fedora Workstation image to PVC!"
        volumeDevices:
        - name: disk
          devicePath: /dev/vda
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 0
      restartPolicy: Never
      volumes:
      - name: disk
        persistentVolumeClaim:
          claimName: fedora-workstation-pvc
      - name: tmp
        emptyDir: {}
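The Job's core pipeline (download, decompress, dd onto the block device) can be sanity-checked locally before pointing it at a real PVC. A minimal sketch, assuming xz is installed, using a scratch file in place of /dev/vda:

```shell
# Simulate the Job's write step: compress some data, decompress it,
# then dd it onto a scratch "device" file standing in for /dev/vda.
truncate -s 64M /tmp/fake-disk
printf 'fake image data' | xz -z > /tmp/disk.raw.xz
xz -d < /tmp/disk.raw.xz > /tmp/disk.raw
dd if=/tmp/disk.raw of=/tmp/fake-disk bs=4M conv=fsync,notrunc status=none
head -c 15 /tmp/fake-disk
```

conv=notrunc matters only for the regular file here; against a real block device, as in the Job, dd cannot truncate anyway.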

The PVC can then be mounted to a VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-arm-vm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: my-arm-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: disk0
              disk:
                bus: virtio
      volumes:
        - name: disk0
          persistentVolumeClaim:
            claimName: fedora-workstation-pvc

Customising ArgoCD ApplicationSets with Template Patches

In a recent effort to automate my homelab cluster (ref), I now manage all of my cluster applications using ArgoCD, including Cilium. I also leverage ApplicationSet objects in ArgoCD as an app-of-apps pattern.

After a Cilium update, however, it would fail to sync.

One way to address this is to add `ServerSideApply=true` to the resulting Application manifest:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-applications
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: 'https://github.com/David-VTUK/turing-pi-automation.git'
      revision: HEAD
      directories:
      - path: 'argocd-apps/helm-charts/import-from-cluster-standup/*'
  template:
    metadata:
      name: '{{ .path.basename }}'
    spec:
      project: default
      source:
        repoURL: 'https://github.com/David-VTUK/turing-pi-automation.git'
        targetRevision: HEAD
        path: '{{ .path.path }}'
        helm:
          valueFiles:
            - values.yaml
      destination:
        server: 'https://kubernetes.default.svc'
        namespace: '{{ .path.basename }}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
          - ServerSideApply=true

The downside, however, is that all applications from this ApplicationSet will inherit this value, which is less than ideal.

Template Patches

templatePatch can be used in conjunction with an ApplicationSet to selectively modify the resulting Applications based on specific criteria:

# Several lines omitted for brevity
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-applications
  namespace: argocd
spec:
  template:
  templatePatch: |
    {{ if eq .path.basename "cilium" }}
      spec:
        syncPolicy:
          syncOptions:
            - ServerSideApply=true # required to avoid the "annotations too long error"
            - CreateNamespace=true
    {{- end }}

After a resync, the previous error is resolved, but Cilium would still not be in sync, with differences reported against its ServiceMonitor resources.

This behaviour is noted in a GitHub issue.

To address this, templatePatch can be extended to ignore these by leveraging ignoreDifferences.

# Several lines omitted for brevity
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-applications
  namespace: argocd
spec:
  template:
  templatePatch: |
    {{ if eq .path.basename "cilium" }}
      spec:
        ignoreDifferences:
        - group: monitoring.coreos.com
          kind: ServiceMonitor
          name: ""
          jsonPointers:
            - /spec
        syncPolicy:
          syncOptions:
            - ServerSideApply=true # required to avoid the "annotations too long error"
            - CreateNamespace=true
    {{- end }}


© 2025 Virtual Thoughts
