Multi-Cluster Deployment using Argo CD
Argo CD is a GitOps-based tool widely used for continuous delivery to Kubernetes.
What is GitOps?¶
Let’s say we have a special notebook where we jot down all the steps to build a Lego castle. Every time we want to build that castle, we simply follow the instructions in our notebook step by step. Now, imagine GitOps is like having a magic pencil that automatically updates the notebook whenever we change something in our castle plans.
So, instead of having to remember to update the notebook every time we change our mind about how many towers the castle should have or where the drawbridge should go, the magic pencil does it for us. That way, whenever we want to build our castle, we just look at our notebook, and it always has the latest, most accurate instructions. GitOps does something similar for building and managing computer programs and systems.
What is ArgoCD?¶
Imagine having a toy box with lots of toys inside. Sometimes we want to organize the toys neatly, putting all the cars in one corner and all the dolls in another. Argo CD acts like a magic toy organizer, but for computer programs instead of toys.
Imagine having a big collection of computer programs, like games or apps, that we want to keep organized and updated. Argo CD helps do exactly that. It makes sure that all the programs are in the right place and always up-to-date, just like the way we want our toys to be neatly arranged and ready to play with. So, it’s like having a helper that keeps everything in order!
Architecture of ArgoCD¶
Argo CD is a tool used for continuous delivery (CD) of applications to Kubernetes clusters. Its architecture is designed to help automate the deployment and management of applications in Kubernetes environments. Here’s a simplified explanation of its architecture:
1. Control Plane: Argo CD has a control plane component, which is responsible for managing the overall deployment process. This includes handling user authentication, managing application configurations, monitoring the cluster’s state, and coordinating deployments.
2. Git Repository: Argo CD relies heavily on Git repositories to store application manifests and configuration files. These repositories contain the desired state of the applications you want to deploy. Whenever changes are made to the repository (e.g., updating application configurations), Argo CD detects these changes and automatically syncs them with the Kubernetes cluster.
3. Application CRDs: Argo CD introduces custom resource definitions (CRDs) called `Application` resources. These resources represent the applications you want to deploy and manage with Argo CD. Each `Application` CRD specifies details such as the source repository, target cluster, synchronization settings, and deployment parameters.
4. Controller: The Argo CD controller is responsible for reconciling the state of `Application` resources with the actual state of the Kubernetes cluster. It continuously monitors changes in the Git repository and orchestrates the deployment process based on the desired state specified in the `Application` CRDs.
5. Sync Engine: The sync engine is a core component of Argo CD responsible for comparing the desired state of applications (defined in Git) with the current state of the Kubernetes cluster. It performs synchronization tasks to ensure that the cluster matches the desired state specified in the `Application` resources.
6. User Interface (UI): Argo CD provides a web-based user interface that allows users to visualize and manage their applications, application configurations, and deployment status. The UI interacts with the control plane and presents users with an intuitive dashboard for managing their applications.
In summary, Argo CD’s architecture revolves around managing application configurations stored in Git repositories, reconciling the desired state with the actual state of Kubernetes clusters, and providing users with a user-friendly interface for continuous delivery and application management.
+-------------------------------+
|      User Interface (UI)      |
+-------------------------------+
              |   ^
              v   |
+-------------------------------+
|         Control Plane         |
| - User Authentication         |
| - Application Configuration   |
| - Cluster Monitoring          |
| - Deployment Coordination     |
+-------------------------------+
              |   ^
              v   |
+-------------------------------+
|        Git Repository         |
| - Application Manifests       |
| - Configuration Files         |
| - Desired State               |
+-------------------------------+
              |   ^
              v   |
+-------------------------------+
|      Argo CD Controller       |
| - Reconciles Application      |
|   State with Cluster State    |
+-------------------------------+
              |   ^
              v   |
+-------------------------------+
|          Sync Engine          |
| - Compares Desired State      |
|   with Current Cluster State  |
| - Synchronization Tasks       |
+-------------------------------+
              |   ^
              v   |
+-------------------------------+
|          Kubernetes           |
| - Application Deployment      |
| - Cluster Management          |
+-------------------------------+
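To make the Application CRD concrete, here is a minimal sketch of an Application manifest. The repo URL, path, and destination below are illustrative placeholders (the standard Argo CD example repo), not values from this project:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd            # Application objects live in the Argo CD namespace
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git   # Git repo holding the desired state
    targetRevision: HEAD
    path: guestbook            # folder inside the repo containing the manifests
  destination:
    server: https://kubernetes.default.svc    # target cluster API server (in-cluster here)
    namespace: default
  syncPolicy:
    automated:
      prune: true              # delete cluster resources that were removed from Git
      selfHeal: true           # revert manual changes made directly in the cluster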
Note: If you wish to recreate this project, you will incur charges for creating EKS clusters on AWS.
PROJECT¶
Prerequisites:
Install kubectl from https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
Install eksctl from https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html
Install AWS CLI from https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
Install ArgoCD CLI from https://argo-cd.readthedocs.io/en/stable/cli_installation/#installation
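Once everything is installed, a quick sanity check of the tools (exact output will vary by version):
kubectl version --client
eksctl version
aws --version
argocd version --client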
EKS Setup:¶
(I opened three separate terminal tabs: a black terminal for the hub cluster, a yellow terminal for spoke-cluster-1, and a blue terminal for spoke-cluster-2.)
eksctl create cluster --name hub-cluster --region us-west-1
Explanation: This command provisions a Kubernetes cluster on AWS in the specified region, us-west-1, with the given name, hub-cluster.
eksctl create cluster --name spoke-cluster-1 --region us-west-1
eksctl create cluster --name spoke-cluster-2 --region us-west-1
Explanation: The above two commands create two more Kubernetes clusters, named “spoke-cluster-1” and “spoke-cluster-2”, in the “us-west-1” region using eksctl.
Note: Each cluster creation will take approximately 20-30 minutes.
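Once creation finishes, the clusters can be listed to confirm they exist:
eksctl get cluster --region us-west-1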
The reason behind the naming convention:¶
A GitOps controller like Argo CD comes in two modes of deployment, namely:
1. Hub-Spoke Model
In GitOps, the hub-spoke model refers to a deployment architecture where there’s a central Git repository (the “hub”) that contains declarative descriptions of the desired state of the entire system or application. Each target environment, such as development, staging, or production (the “spokes”), pulls these configurations from the central repository and applies them to ensure that the actual state of the system matches the desired state.
Here’s how the hub-spoke model typically works in GitOps:
1. Hub: The central Git repository serves as the single source of truth for the desired state of the system. It contains configuration files, such as Kubernetes manifests, Helm charts, or any other infrastructure-as-code declarations that describe the application’s infrastructure and configuration.
2. Spokes: Each target environment, or deployment cluster (e.g., development, staging, production), acts as a “spoke.” These environments continuously reconcile their state with the desired state specified in the Git repository. They pull the configuration from the repository and apply it to ensure that the actual infrastructure and application state match what’s defined in the repository.
Changes made to the repository trigger a reconciliation process in each spoke environment, automatically updating the infrastructure and application configurations to reflect the changes.
The spokes typically run agents or controllers (e.g., Flux, Argo CD) responsible for syncing the state between the Git repository and the target environment.
The hub-spoke model offers several benefits in GitOps:
- Centralized control: The central Git repository provides a single source of truth, enabling centralized control and visibility over the entire system’s configuration.
- Consistency: All environments are configured and managed consistently, reducing configuration drift and ensuring that changes are applied uniformly across environments.
- Traceability and versioning: Changes to the system’s configuration are tracked and versioned in the Git repository, providing traceability and enabling rollback to previous configurations if needed.
- Scalability: The model scales well for managing multiple environments and clusters, making it suitable for complex deployment scenarios.
Overall, the hub-spoke model in GitOps simplifies the management and deployment of cloud-native applications by leveraging Git as the primary mechanism for configuration management and continuous delivery.
2. Standalone Model
In GitOps, the standalone model refers to a deployment architecture where each environment or cluster operates independently and manages its configuration without relying on a central Git repository for synchronization. Instead of pulling configuration from a central repository, each environment maintains its own configuration locally or through other means.
In the standalone model:
1. Independent Environments: Each environment, such as development, staging, or production, operates independently and manages its configuration separately from other environments. There is no central repository that dictates the desired state for all environments.
2. Local Configuration Management: Configuration changes are typically managed locally within each environment. This could involve manual configuration updates, scripts, or other tools specific to the environment.
3. Limited GitOps Practices: While Git might still be used for version control, it plays a less central role in configuration management and deployment. Configuration changes might not be triggered by Git commits or follow the GitOps principles of declarative infrastructure management.
4. Potential for Configuration Drift: Without a central source of truth, there is a higher risk of configuration drift between environments. Changes made in one environment might not be consistently applied to others, leading to inconsistencies and potential operational issues.
The standalone model contrasts with the hub-spoke model, where a central Git repository serves as the source of truth for configuration management and synchronization across environments. While the standalone model may offer simplicity and flexibility for smaller deployments or less complex environments, it can also lead to challenges in managing consistency, traceability, and scalability, especially in larger or more dynamic infrastructures.
Since my hub-cluster, spoke-cluster-1, and spoke-cluster-2 are ready, let’s proceed.
Let us get all the contexts with the following commands:
kubectl config get-contexts
kubectl config get-contexts | grep us-west-1 (to filter only the contexts in us-west-1)
A context in Kubernetes is a combination of a cluster, user, and namespace, defining the current working environment for managing Kubernetes resources.
Now let’s switch the context to the hub cluster using the following command:
kubectl config use-context iam-root-account@hub-cluster.us-west-1.eksctl.io
Now it’s time to Install Argo CD on this hub-cluster. The command to install Argo CD can be found in the following link:
https://argo-cd.readthedocs.io/en/stable/getting_started/
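At the time of writing, the getting-started guide boils down to the following two commands (double-check against the link above in case the manifest location has changed):
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml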
Let’s see if our pods are up and running with the following command:
kubectl get pods -n argocd
Now that all the pods are up and running, let’s check the ConfigMaps as well:
kubectl get cm -n argocd
Run ArgoCD in HTTP mode (insecure):
To achieve this, select “argocd-cmd-params-cm” from the list and edit it using the following command:
kubectl edit configmap argocd-cmd-params-cm -n argocd
Now we need to add the data entry server.insecure: "true", which is documented at https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cmd-params-cm.yaml
In the repository search, look for “cmd-” to find “docs/operator-manual/argocd-cmd-params-cm.yaml”.
Now in this YAML file, search for “insecure”.
Copy the server.insecure: "true" entry and add it under the data section of the ConfigMap.
Now save the file.
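As an alternative to editing interactively, a minimal sketch of the same change as a one-liner, assuming the default argocd namespace:
kubectl patch configmap argocd-cmd-params-cm -n argocd --type merge -p '{"data":{"server.insecure":"true"}}'
# Restart the server so it picks up the new flag:
kubectl rollout restart deployment argocd-server -n argocd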
Let’s double-check to see if the pods and service are up and running using the following commands:
kubectl get pods -n argocd
kubectl get svc -n argocd
Now we need to expose the argocd-server service by changing its type from “ClusterIP” to “NodePort”.
To achieve this, use the following command and change the type field from “ClusterIP” to “NodePort”:
kubectl edit svc argocd-server -n argocd
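Equivalently, a one-liner sketch that flips the service type without opening the editor:
kubectl patch svc argocd-server -n argocd -p '{"spec":{"type":"NodePort"}}'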
Now that we have changed the type from “ClusterIP” to “NodePort”, we can access Argo CD using an EC2 instance public IP and the assigned node port. For this, navigate to AWS Console >> EC2 >> Instances (running). (Make sure you are in the “us-west-1” region.)
Select the “hub-cluster” instance and copy the public IP.
We need to open this port, so head to the security group of the EC2 instance >> select Edit inbound rules >> Add rule >> allow “All Traffic”. (In this case, the assigned node port is 30590.)
Now let’s try to access our Argo CD server, so navigate back to the browser and refresh.
Click “Advanced”.
Click “Proceed to 13.56.151.188 (unsafe)”. We now should be able to access the Argo CD UI.
Woah, now that we have Argo CD up and running, let us log in. The default username is “admin”. For the password, we need to use the following command:
kubectl get secrets -n argocd
In the list obtained, we need to open the “argocd-initial-admin-secret”. This can be done with the following command:
kubectl edit secret argocd-initial-admin-secret -n argocd
Then copy the password as shown.
Now we need to decode it using the following:
echo <copied-base64-password> | base64 --decode
Now we have the password to log into the Argo CD UI. (Do not copy the trailing “%”; that is just the shell marking the end of output.) Copy and paste the decoded value into the password field. (Make sure to save the password, as we will be using it again.)
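For reference, the fetch-and-decode can also be done in one step (assuming the default secret name and namespace):
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode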
Adding Clusters:¶
Now that we are able to log in to the Argo CD UI, let us add the two clusters, “spoke-cluster-1” and “spoke-cluster-2”. But the catch here is that we cannot add clusters directly in the Argo CD UI, so we need to add them using the Argo CD CLI.
So, we need to install the Argo CD CLI. The command to install it can be found at https://argo-cd.readthedocs.io/en/stable/cli_installation/
Since mine is a Mac, I installed it using “brew install argocd”.
Now we need to log in with the Argo CD CLI. The command is:
argocd login <EC2-instance-public-IP>:<NodePort>
The username is “admin” and the password is the same one we obtained earlier using the decode command.
As we are now logged in, we need to add the clusters. The following command can be used:
argocd cluster add iam-root-account@spoke-cluster-1.us-west-1.eksctl.io --server 13.56.151.188:30590
Breakdown:
- argocd: the command-line tool for Argo CD.
- cluster add: the subcommand used to register a new cluster with Argo CD.
- iam-root-account@spoke-cluster-1.us-west-1.eksctl.io: the kubeconfig context of the cluster being added. It typically includes the IAM user or role, followed by the cluster name and region (it can be found in the output of kubectl config get-contexts).
- --server 13.56.151.188:30590: the address and port of the Argo CD server that the CLI should communicate with.
Now we need to add the second cluster in the same way, as shown below.
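A sketch of the command for the second cluster, assuming it follows the same context naming pattern:
argocd cluster add iam-root-account@spoke-cluster-2.us-west-1.eksctl.io --server 13.56.151.188:30590
# Verify that both clusters are now registered:
argocd cluster list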
Now that we have added clusters let’s go to the UI and check.
Now that we have added the clusters, let us deploy our app. In this case, the guestbook app is located on GitHub.
Now click on “NEW APP”. Fill in the details as shown below.
Once all the details are filled in, click “Create”.
Now let us deploy our app on the second cluster as well.
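For reference, the same app can also be created from the CLI. A hedged sketch, assuming the standard argocd-example-apps guestbook repo; <spoke-cluster-API-server-URL> is a placeholder for the cluster URL shown by argocd cluster list:
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-namespace default \
  --dest-server <spoke-cluster-API-server-URL>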
Now let us see the same using the CLI. So use the following command:
kubectl config use-context iam-root-account@spoke-cluster-1.us-west-1.eksctl.io
We have successfully deployed our app, as we can see its service, deployment, and replica set.
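To list these resources explicitly (assuming the app was deployed into the default namespace):
kubectl get deployment,service,replicaset -n default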
Now let us look at the ConfigMap. By default, the ui_properties_file_name value is “Abhishek”.
Now let us edit the same “configmap.yml” in the source of truth, that is, in GitHub. So, I changed the ui_properties_file_name value from “abhishek” to “sarat”.
Generally, it takes about 3 minutes (Argo CD’s default Git polling interval) for changes to be reflected automatically, but let us sync manually. So, click “SYNC APPS” >> “SYNC”.
Select both apps.
Now that we have synced, let’s verify the same from the CLI using:
kubectl edit configmap <configmap-name> -n <namespace>
(The exact ConfigMap name can be found with kubectl get cm.)
So, when we update the single source of truth in GitHub, Argo CD successfully updates the cluster.
Now let us manually modify the ConfigMap in the cluster instead of modifying the source of truth. I edited “ui_properties_file_name: user-interface.properties” (changing “sarat” to “user”).
Argo CD clearly did not keep this manual change as part of the desired state.
This can also be seen on UI.
Now click on “SYNC” >> “SYNCHRONIZE”.
Because we synced, the manual change in the ConfigMap was rolled back (from “user” back to “sarat”), restoring the state defined in Git.
So, Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It ensures that the actual state of the Kubernetes clusters matches the desired state defined in declarative configuration files stored in a Git repository. It continuously monitors the cluster’s state and reconciles any differences to maintain the desired state, thereby promoting consistency and reliability in the Kubernetes infrastructure.
Now it is time to delete all the clusters.
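To stop AWS charges, each cluster can be deleted with eksctl (each deletion also takes several minutes):
eksctl delete cluster --name spoke-cluster-1 --region us-west-1
eksctl delete cluster --name spoke-cluster-2 --region us-west-1
eksctl delete cluster --name hub-cluster --region us-west-1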
You can reach me at
LinkedIn:https://www.linkedin.com/in/sachmo/
GitHub: https://github.com/csarat424