
Secure Homelab, Part 3: GitOps Workflow with ArgoCD


Introduction

As I wanted to fully implement GitOps for my homelab, ArgoCD became the cornerstone of my deployment strategy. I quickly realized that managing dozens of individual ArgoCD Application manifests (one for the ingress controller, one for monitoring, one for logging, and so on) would become a scaling nightmare.

To solve this, I adopted the “app of apps” pattern.

The core idea is simple: instead of manually managing every single application, I create a few high-level “parent” applications. In my case, I settled on two main categories: platform and workloads.

These parent apps act as controllers that are responsible for managing all the other applications within their category.

This is how I achieved it.

The “Parent App” Definition

(Figure: ArgoCD platform app)
I start by defining my platform app. This single Application manifest is the entry point for bootstrapping my entire cluster’s tooling. It’s defined in argocd/apps/0-platform.yaml:

# argocd/apps/0-platform.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform
  namespace: argocd
  # Add a finalizer to ensure that the apps it manages are deleted before this one is.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'git@github.com:Aditya-Homelab/kind-k8s-configs.git' # Repo URL
    targetRevision: main
    path: platform # <-- This points to the directory containing all platform apps
    # Use the directory 'recurse' option to find all 'app.yaml' files
    directory:
      recurse: true

  destination:
    server: 'https://kubernetes.default.svc'
    # This namespace is a placeholder; the child apps will specify their own.
    namespace: default

  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
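
Its sibling, argocd/apps/1-workloads.yaml, follows the same shape. A minimal sketch, assuming it simply mirrors the platform app with its path pointed at workloads/:

# argocd/apps/1-workloads.yaml (sketch; assumes it mirrors 0-platform.yaml)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workloads
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'git@github.com:Aditya-Homelab/kind-k8s-configs.git'
    targetRevision: main
    path: workloads # <-- Same recursive pattern, different directory
    directory:
      recurse: true
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true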

How It Works: Recursive Directory Syncing

The real power of this approach comes from two key fields:

  1. spec.source.path: platform: This tells ArgoCD to look at the platform/ directory in my Git repository.
  2. spec.source.directory.recurse: true: This is the most important setting. It instructs ArgoCD to dive into all subdirectories within platform/ and apply every valid Kubernetes manifest it finds. This single platform app is now responsible for syncing all manifests in that directory.

So, how does this create the “app of apps” pattern? Let’s look at my Git repository’s structure:

├── argocd
│   └── apps
│       ├── 0-platform.yaml  # <-- This is my parent app
│       └── 1-workloads.yaml
├── platform
│   ├── cilium
│   │   ├── app.yaml         # <-- This is a child app
│   │   └── values.yaml
│   ├── ingress-controller
│   │   ├── app.yaml         # <-- This is a child app
│   │   └── cluster-issuer-prod.yaml
│   ├── metallb
│   │   ├── app.yaml         # <-- This is a child app
│   │   └── metallb-config.yaml
│   ├── observability
│   │   ├── kube-prometheus-stack
│   │   │   ├── app.yaml     # <-- This is a child app
│   │   │   └── values.yaml
│   │   └── ...
│   └── ...
└── workloads
    └── ...

When my parent platform app syncs, the recurse: true setting causes it to scan the platform/ directory. It finds and applies:

  1. Child ArgoCD Applications: It discovers platform/cilium/app.yaml, platform/ingress-controller/app.yaml, platform/observability/kube-prometheus-stack/app.yaml, and so on. Each of these files is an Application manifest itself, which ArgoCD then creates. This is the “app of apps” pattern in action. I now have a “Cilium” app, an “Ingress-nginx” app, and a “Kube-prometheus-stack” app, all managed by the “Platform” app.
  2. Direct Kubernetes Manifests: It also finds and applies standalone manifests like platform/ingress-controller/cluster-issuer-prod.yaml or platform/metallb/metallb-config.yaml.

This recursive approach is incredibly flexible. It gives me the clean separation of the “app of apps” pattern (letting cilium/app.yaml manage Cilium) while also giving me a simple way to deploy one-off configuration files (Secrets, ConfigMaps, and so on) just by dropping them into the appropriate folder.
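
For concreteness, here is a sketch of what a child app like platform/cilium/app.yaml can look like. The chart version is illustrative, and it uses ArgoCD’s multiple-sources feature (v2.6+) to pair the upstream Helm chart with the values.yaml sitting next to the app file; my actual file may differ in the details:

# platform/cilium/app.yaml (sketch; chart version is illustrative)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cilium
  namespace: argocd
spec:
  project: default
  sources:
    # The upstream Cilium Helm chart
    - repoURL: 'https://helm.cilium.io/'
      chart: cilium
      targetRevision: 1.16.1
      helm:
        valueFiles:
          - $values/platform/cilium/values.yaml # the adjacent values.yaml
    # Reference to my Git repo so $values resolves to it
    - repoURL: 'git@github.com:Aditya-Homelab/kind-k8s-configs.git'
      targetRevision: main
      ref: values
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true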

This structure allows me to manage my entire cluster setup, from low-level networking and storage to high-level observability, all governed by a single parent app.
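
Once ArgoCD itself is running (installed in the next section), bootstrapping the whole tree is a one-time kubectl apply of the parent apps:

# One-time bootstrap: create the parent apps; ArgoCD syncs everything else
kubectl apply -n argocd -f argocd/apps/0-platform.yaml
kubectl apply -n argocd -f argocd/apps/1-workloads.yaml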

Setting Up ArgoCD

Installing ArgoCD

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This installs ArgoCD into the argocd namespace.
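
To log in you’ll need the initial admin password, and one simple way to reach the UI is to switch the argocd-server service to a NodePort. A quick sketch (the port Kubernetes assigns will differ per cluster):

# Fetch the auto-generated password for the initial 'admin' user
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

# Expose the ArgoCD API server/UI on a NodePort
kubectl -n argocd patch svc argocd-server -p '{"spec": {"type": "NodePort"}}'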

You can then use the NodePort to access the service and publish it through Teleport like so:

  - name: "argocd"
    public_addr: "argocd.adityalabs.xyz"
    uri: "https://172.18.0.4:30724"
    insecure_skip_verify: true
    rewrite:
      headers:
        - "Host: argocd.adityalabs.xyz"

Connecting a GitHub Repository

  1. Generate an SSH Key Pair
ssh-keygen -t ed25519 -f ./argocd-deploy-key -N ""
ls
argocd-deploy-key  argocd-deploy-key.pub  argocd-linux-amd64
  2. Add the Public Key to Your Git Repository: GitHub repo settings → Deploy Keys → name the key (e.g., ArgoCD-access-key) and paste the public key there.

  3. Add the Private Key to ArgoCD: ArgoCD needs the private key to authenticate. You must store this as a Kubernetes secret in the argocd namespace.

kubectl create secret generic my-private-repo-secret \
    --namespace argocd \
    --from-file=sshPrivateKey=./argocd-deploy-key \
    --type='Opaque'
  4. Finally, connect the repository itself: You create a standard Kubernetes Secret in the argocd namespace. The magic is a special label (argocd.argoproj.io/secret-type: repository) that tells ArgoCD’s controller that the data in this secret isn’t for a pod; it holds the connection details for a Git repository it needs to manage. ArgoCD will automatically discover any secret with this label and add it to its list of configured repositories.
# my-repo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-private-repo-declarative
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:YOUR-ORG/YOUR-REPO.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAA...
    ... a bunch of lines of your private key ...
    ...
    cAAAAAGh1bnRzckBvYnNjdXJpdHkBAgMEBQY=
    -----END OPENSSH PRIVATE KEY-----

Apply the above secret and that’s it! As soon as the secret is created, the ArgoCD server will detect it and add the repository to its configuration. You can go to the ArgoCD UI, navigate to Settings → Repositories, and you will see your repo in the list with a “Successful” connection status.
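
If you prefer the terminal, the argocd CLI (the argocd-linux-amd64 binary from earlier) can confirm the same thing. A quick check, assuming the Teleport address from above:

# Log in to the ArgoCD API server, then list repos and their connection state
argocd login argocd.adityalabs.xyz
argocd repo list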

Adding SSO

In my case, I use Google SSO for ArgoCD.

To create OAuth credentials:

  1. Go to the Google Cloud Console: https://console.cloud.google.com
  2. Create/select a project.
  3. Left menu → APIs & Services → Credentials.
  4. Click Create Credentials → OAuth client ID.
  5. Set Application type = Web application.
  6. Set the redirect URI to https://argocd.xyz/api/dex/callback

Edit the argocd-cm configmap to add a Dex connector. The snippet below shows the GitHub connector, which pairs with the org/team-based RBAC further down; the Google connector follows the same shape and is sketched after it:

kubectl edit cm argocd-cm -n argocd
data:
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          baseURL: https://github.com
          clientID: Your-Client-ID
          clientSecret: Your-Secret-Key
          redirectURI: https://argocd.xyz/api/dex/callback
          orgs:
          - name: Your-Org
            teams:
            - admin
  url: https://argocd.xyz
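
For Google specifically, the Dex connector type is google. A minimal sketch using the OAuth client created earlier (credentials redacted, same redirect URI):

# In argocd-cm, under data:
  dex.config: |
    connectors:
      - type: google
        id: google
        name: Google
        config:
          clientID: Your-Client-ID
          clientSecret: Your-Secret-Key
          redirectURI: https://argocd.xyz/api/dex/callback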

Now for RBAC, we can set up policies according to our GitHub orgs and teams:

kubectl edit configmap argocd-rbac-cm -n argocd
data:
  policy.csv: |
    g, github:Homelab:team:admin, role:admin
  policy.default: role:readonly

For example, this grants the admin role to anyone in the admin team of the Homelab org.
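
Policies can be layered further. A hypothetical extension granting a developers team (a team name I made up for illustration) view and sync access, while everyone else stays read-only:

kubectl edit configmap argocd-rbac-cm -n argocd
data:
  policy.csv: |
    # Custom role: can view and sync apps in the default project, nothing else
    p, role:dev, applications, get, default/*, allow
    p, role:dev, applications, sync, default/*, allow
    # Map GitHub teams to roles (the developers team is hypothetical)
    g, github:Homelab:team:developers, role:dev
    g, github:Homelab:team:admin, role:admin
  policy.default: role:readonly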

Conclusion

Now we have ArgoCD and our GitHub repo set up for deploying our Kubernetes resources. We will use this as the backbone for our entire homelab Kubernetes setup from now on.
