Daniel Bodky
Cloud-Native Clutter



ArgoCD, GitOps, and an Octopus


Opening the series will be ArgoCD, a complete, CNCF-graduated GitOps solution for Kubernetes

Daniel Bodky · Jan 15, 2023 · 12 min read

I've always liked the idea of GitOps - manifesting the declarative state of your workloads as a git repository: versioned, with a transparent history, and with Kubernetes objects described as YAML. You could even define additional bits and pieces of your deployment within the repository and have them deployed automatically, e.g., Ingress definitions, ServiceMonitors, or other third-party definitions that aren't an immediate part of your workloads. However, GitOps can be hard to adopt, and establishing a flourishing culture around it takes effort - ArgoCD helps immensely with both.

It provides handrails where needed, makes auditing and monitoring information available for you, and is extendable with additional tooling like Argo Workflows for cloud-native, containerized jobs or Argo Events, a framework for event-driven workflow automation.

I also like clean, shiny UIs that convey the status and meaning of complex situations, such as Kubernetes deployments. ArgoCD comes with one of those, displaying all the information you'd need on your screen.

And last but not least, its mascot is an adorable orange octopus!

How does ArgoCD work?

First, you tell ArgoCD where to look for a bundle of microservices and additional configuration (think Ingresses, Policies, or other CRDs) and how to deploy these resources to a cluster. Most of the time, this will be an upstream git repository (e.g., on GitHub or a GitLab instance) where ArgoCD can look up the defined resources and a target revision. Alternatively, ArgoCD supports references to Helm charts natively.

ArgoCD will then use this configuration to fetch the defined resources from your upstream source, render them if necessary (it supports and automatically detects Kustomize and Helm in addition to raw Kubernetes-compatible YAML), and compare the result to the version currently deployed to your cluster. If the two versions differ, ArgoCD will issue a Sync and try to deploy the new version to the cluster.

In addition to the upstream repository, you can configure many more things when deploying your GitOps project, like sync options, resource pruning or ArgoCD's self-heal mechanisms.
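To make these options more tangible, here's a hedged sketch of what such settings can look like inside an Application's spec - the field names come from ArgoCD's Application CRD, but the chosen values are illustrative:

```yaml
# Sketch of an Application's syncPolicy (values are illustrative):
syncPolicy:
  automated:
    prune: true     # remove resources that disappeared from the repository
    selfHeal: true  # revert manual changes made directly in the cluster
  syncOptions:
  - CreateNamespace=true  # create the target namespace if it doesn't exist
```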

A diagram displaying ArgoCD's workflow

We can configure all of this in one of three ways provided by ArgoCD:

  • from the web UI

  • from ArgoCD's CLI utility, argocd

  • manually creating the needed CRDs
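As a taste of the third option, a minimal Application manifest might look like this - the repository URL and names below are placeholders, not part of this blog's actual setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # placeholder name
  namespace: argocd           # ArgoCD's default installation namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/manifests  # placeholder repository
    path: deploy              # directory containing the resources
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc  # the local cluster
    namespace: my-app
```

Applied with kubectl apply -f, this would be picked up by ArgoCD just like an Application created via UI or CLI.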

What's a CRD? I'm glad you asked!

ArgoCD's CustomResourceDefinitions

Under the hood, ArgoCD works like so many other Kubernetes-native tools - by introducing additional API object types to the Kubernetes API, which then, in turn, can be utilized by end users, CLI tools, or the Kubernetes API itself. These additional API object types are called CustomResourceDefinitions (CRD) in the Kubernetes world, and ArgoCD introduces three of those:

  • Applications

  • ApplicationSets

  • AppProjects

Applications contain all the settings mentioned above, which are needed to define where to get the resources, how to parse them, where to deploy them to, and what additional automatisms to apply in-cluster.

ApplicationSets enable us to deploy Applications across multiple clusters, allow self-service for developer teams, and make it easier to maintain and deploy from so-called mono repos. For more detailed case studies on use-cases for them, feel free to take a look at ArgoCD's stance on the topic. ApplicationSets utilize a provided set of generators to render different versions of the same base configuration, e.g., targeting different namespaces, clusters, or target revisions for deployment.
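To make this less abstract, here's a sketch of an ApplicationSet using the list generator to stamp out one Application per cluster - the cluster names and URLs are made up:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-appset
  namespace: argocd
spec:
  generators:
  - list:                     # the simplest generator: a static list
      elements:
      - cluster: staging
        url: https://staging.example.com:6443
      - cluster: production
        url: https://production.example.com:6443
  template:                   # rendered once per list element
    metadata:
      name: 'my-app-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/manifests
        path: deploy
        targetRevision: HEAD
      destination:
        server: '{{url}}'
        namespace: my-app
```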

AppProjects are somewhat comparable to Namespaces within Kubernetes - they provide a configurable environment for Applications to be deployed to, making available a preset of upstream repositories, trusted TLS certificates, available SSH keys for fetching data and GPG keys for commit verification.

AppProjects are also responsible for the configuration of restrictions, e.g., a subset of Kubernetes resources that are allowed to be deployed by ArgoCD within a specific project, which permissions a configured role has within a project, or how it maps to user groups by third-party identity providers.
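As a hedged sketch, such restrictions live directly in the AppProject spec - the role, project, and group names below are invented:

```yaml
# Excerpt of an AppProject spec restricting resources and defining a role:
spec:
  namespaceResourceWhitelist:   # only these kinds may be deployed
  - group: apps
    kind: Deployment
  - group: ''
    kind: Service
  roles:
  - name: deployer
    description: May sync applications in this project
    policies:
    - p, proj:my-project:deployer, applications, sync, my-project/*, allow
    groups:
    - my-idp-group              # mapped from a third-party identity provider
```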

Of course, ArgoCD has many more configuration possibilities in its belt, including SSO integration, metrics endpoints, and utilization of webhooks. If you're interested in learning more, head over to the official documentation for cluster operators.

Case Study - Deploying This Blog With ArgoCD

Enough bland theory; let's get our hands dirty! As mentioned before, I'm a big fan of ArgoCD and the idea of GitOps in general - in fact, when setting up my blog (again...) I decided I wanted it deployed to Kubernetes by ArgoCD.

For those unaware, this blog is mirrored to https://dbodky.me - so when talking about deploying this blog to Kubernetes with ArgoCD, I mean the mirror, not Hashnode 😉

The blog is being built with Hugo, generating static HTML/CSS resources which I serve with an (unprivileged) NGINX container image. I build my own image, generate and collect the assets needed for the blog, add some configuration for NGINX, and push it to a private repository on DockerHub.

In order to bring this blog to life on Kubernetes, we need a few different resources. Let's compile a list of things I would want ArgoCD to deploy upon changes:

  • a Deployment, configuring a template of my blog image to manage and run N times in parallel.

  • a Service, providing a means of connecting to the N instances of my blog without having to know their random (and ephemeral) IPs.

  • an Ingress, allowing me to connect to the Service from the internet without having to remember any IP; instead, I can use the configured DNS entry.

  • a NetworkPolicy, similar to firewall rules in traditional IT landscapes. For example, I don't need my blog to connect to anything at all, so I can block Egress traffic altogether.

  • a Secret, containing the credentials for my private DockerHub repository, so Kubernetes can actually pull and deploy the image.
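To illustrate the NetworkPolicy item, a minimal manifest blocking all egress for the blog's pods could look like this - the pod label is an assumption about my chart, not taken from it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress
  namespace: blog
spec:
  podSelector:
    matchLabels:
      app: hugo        # assumed pod label
  policyTypes:
  - Egress             # listing Egress with no egress rules denies all egress
```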

This list could easily be extended - for example, I could decide that I wanted to monitor my workload one day and add a ServiceMonitor (a Prometheus CRD). Or I might want to put additional policies in place. But for now, this suffices.

Configuring the Project

As mentioned already, all ArgoCD Applications reside inside of ArgoCD AppProjects - so the first thing I did was to go ahead and configure the default project to my needs:

kubectl -o yaml -n argocd get appproject default

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  destinations:
  - name: in-cluster
    namespace: '*'
    server: https://kubernetes.default.svc
  orphanedResources:
    ignore:
    - kind: Secret
      name: hugo-tls-cert
    - kind: Secret
      name: docker-credentials
    warn: true
  signatureKeys:
  - keyID: 9E12D1B1F1A84FA8
  sourceRepos:
  - https://github.com/mocdaniel/dbodky-me

In this definition, we already see some of a project's configurable settings:

  • I added my GPG signature key project-wide to check commits for validity before syncing and deploying them.

  • I added a source repository. Behind the scenes, ArgoCD also created a Secret holding sensitive information related to this source repository, like the username or a personal access token.

  • I added orphaned resources to ignore (I'll get back to that later).

  • the settings regarding clusterResourceWhitelist and destinations are the default values ArgoCD provides - ArgoCD is allowed to deploy any resource to the cluster in which this ArgoCD instance is running for applications within the default project.

Creating this AppProject resource from scratch is a bit cumbersome. You'd need additional resources to go along with it, e.g., the Secret, which holds credentials for the referenced source repository. So normally, you would configure these things either via web UI or from the CLI, using argocd:

argocd proj add-signature-key default 9E12D1B1F1A84FA8
argocd proj add-source default https://github.com/mocdaniel/dbodky-me

Creating the Application

Once I had my project configured, I had to go on and configure an Application which would hold the configuration specific to my blog deployment. I configured everything from the UI, but the result looks something like this as a Kubernetes resource:

kubectl -o yaml -n argocd get application hugo

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: argocd
  name: hugo
  namespace: argocd
spec:
  destination:
    namespace: blog
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      parameters:
      - name: replicas
        value: "3"
      - name: dockerSecret
        value: docker-credentials
      valueFiles:
      - values.yaml
    path: helm
    repoURL: https://github.com/mocdaniel/dbodky-me
    targetRevision: HEAD
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    - ServerSideApply=true

As you can see, there's a large block of YAML defining the source of the manifests to deploy - in this case:

  • I'm targeting a Helm repository, not Kustomize or plain Kubernetes resources

  • the Helm chart is located in the repository defined in repoURL, at the path helm

  • the defined values.yaml can be found relative to that path, i.e., at helm/values.yaml

For those who haven't worked with Helm before, it might be helpful to know that users (and ArgoCD!) can override deployment parameters defined in values.yaml - I defined overrides for two such parameters right in my Application definition:

  • the number of blog instances to spin up (replicas)

  • the name of a secret containing my Docker credentials needed for DockerHub (dockerSecret)
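For context, the chart's values.yaml might declare defaults like these - a hypothetical excerpt; only the two parameter names above are confirmed by the Application:

```yaml
# Hypothetical defaults in helm/values.yaml:
replicas: 2                # overridden to "3" by the Application
dockerSecret: regcred      # overridden to "docker-credentials"
```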

But that's not everything yet. I also defined a destination (namespace blog on the local server), several settings related to the syncPolicy of the project (when to sync? what to sync? how to sync?), and - of course - the project this application is going to be part of.

There's also this ominous annotation notifications.argoproj.io..., but once again, we'll get to this later.

This has been a lot of YAML to digest, so let's take a break and look at where we're at from the web UI!

ArgoCD's application overview, showing a single application called 'hugo' in a healthy state

This looks great! The application is in a synced and healthy state, which means in the time it took me to open the web UI and log in after submitting the Application definition, ArgoCD went ahead and did the following:

  1. Read the application definition

  2. Look at the defined source, and parse the Helm chart

  3. Deploy the Helm chart into my cluster, all at once

  4. Observe the pending changes until eventually all defined resources are actually deployed and in a healthy state, according to ArgoCD's observations.

Let's take a closer look - what exactly does ArgoCD see when observing the Application? This!

From left to right, we look at a tree, with the hugo application we defined above at its root. It spins up all the moving bits and pieces of this blog, as defined at the beginning of this section:

  • a Deployment, which in turn manages its ReplicaSets and their Pods.

  • a Service, which in turn manages its Endpoints.

  • an Ingress, which triggers the creation of a Certificate by lets-encrypt.

  • a Secret containing credentials for the DockerHub repository in which the blog's image is stored.

But wait, what's that SealedSecret, which in fact is the parent of our Secret, if we look closely? Keep reading!

Secrets in ArgoCD

GitOps ideology and Secret Management don't go together well at first glance - how can we persist and version the entire definition and configuration of our workloads in a VCS while not compromising our secrets by uploading them to said VCS? A dilemma, but only for a moment. The solution - at least from ArgoCD's perspective - is simple: Don't concern yourself with secrets at all!

ArgoCD describes itself as unopinionated about how secrets are managed, which is a nice way of saying "we don't provide a solution for secrets management". However, their arguments for this decision are valid. It also allows us to bring our own solution, depending on 3rd-party systems we might or might not use, conditions and policies we have for secret management within our code base or organization, and many more circumstances that might conflict with a single, internalized way of managing secrets.

Hence, the SealedSecret you saw in the screenshot above. It's a CRD provided by Bitnami's sealed-secrets-controller, a Kubernetes controller that encrypts your secrets in a way that only the controller itself can ever decrypt them again.

This is achieved by sharing a public key with the accompanying CLI tool kubeseal, which uses this key to asymmetrically encrypt a secret locally. The encrypted secret can then be committed to version control, as no one without the controller's private key can read its contents. Neat!
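The result of running kubeseal over a plain Secret is a SealedSecret manifest roughly like the following - the ciphertext is truncated and made up, and only the controller can decrypt it:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: docker-credentials
  namespace: blog
spec:
  encryptedData:
    .dockerconfigjson: AgBy3i4OJSWK...   # asymmetrically encrypted, truncated
  template:
    metadata:
      name: docker-credentials           # the Secret the controller will create
    type: kubernetes.io/dockerconfigjson
```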

Of course, there exist lots of other solutions for secrets management in Kubernetes with popular integrations for secrets used in Helm, stored in Hashicorp Vault, etc.; ArgoCD maintains a list of popular ways people do secrets management with ArgoCD - go, have a look!

Observability in ArgoCD

The last unanswered question arising from this article is the ominous annotation notifications.argoproj.io/subscribe.on-sync-succeeded.slack: argocd, which is part of the Application manifest shown a few sections earlier.

Dissecting the different parts of the annotation, its meaning gets clearer: It tells ArgoCD that upon successful sync of the Application it is annotating, a notification is to be sent to a so-called Service called Slack, to a channel called argocd.

ArgoCD allows us to define Subscriptions, Templates, and Triggers, which, when combined, enable us to send notifications to Services when certain events are registered.

ArgoCD supports many different Triggers out of the box for various events and use cases; it also comes with Templates for sending out notifications, aggregating and summarizing information about the observed incidents, and providing helpful additional information.

Unfortunately, these objects aren't available as CRDs but must be configured within a ConfigMap accessed by ArgoCD at runtime. For more information on the topic, please look at the official documentation on notifications.
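A hedged sketch of what that ConfigMap (argocd-notifications-cm) can look like, wiring up the Slack service, a trigger, and a template - the token reference and message wording are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token             # resolved from argocd-notifications-secret
  trigger.on-sync-succeeded: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [app-sync-succeeded]
  template.app-sync-succeeded: |
    message: Application {{.app.metadata.name}} synced successfully.
```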

In addition, ArgoCD makes metrics available for other monitoring and observability tools - these cover a range of different observations for all of ArgoCD's services. An overview can be found in the documentation's chapter on metrics.


You made it to the end! The blog successfully deployed to my Kubernetes cluster, and I can sleep untroubled, knowing that my secrets are committed to version control without a chance of being compromised (unless the sealed-secrets-controller itself gets compromised), and I even get handy notifications in case anything should go wrong!

Now, publishing the following blog posts will be as easy as bumping the version of my Helm chart, and - et voilà - ArgoCD will detect the changes, sync them, and we'll be live!

We covered the fundamentals of GitOps and ArgoCD in this blog and followed them up with one of many viable approaches to secrets management, as well as a short glimpse into notification configuration within ArgoCD, but there's so much more!

If you want to explore things like SSO, the several addons ArgoCD offers for areas like workflows or event handling, or try and recreate a setup similar to mine, go ahead and do so in your favorite local Kubernetes cluster! Alternatively, try Civo, where you can install ArgoCD and some addons from the Civo Marketplace to get started right away. That's what I did in the beginning, anyways. ;)


I don't claim that this is the single best and most complete article on ArgoCD, not even close! So to really round things off, here's a list of resources around ArgoCD I found particularly interesting or helpful in the past:
