Adding EKS Capabilities - ACK, Argo CD, KRO #1781
Description
Outline
This proposal describes a new EKS Capabilities Learning Path section for the EKS Workshop, targeting a platform engineer / DevOps persona. The fast path showcases three complementary EKS capabilities — AWS Controllers for Kubernetes (ACK), Argo CD, and the Kubernetes Resource Orchestrator (KRO) — in a sequential, app-centric narrative using the retail sample application.
The three labs build on each other to tell a coherent story: provision real AWS infrastructure from Kubernetes (ACK), deliver application changes automatically via GitOps (Argo CD), then orchestrate the complete multi-resource stack declaratively (KRO).
Estimated time: 60 minutes
Key Learning Outcomes:
- Use ACK to provision a real AWS DynamoDB table and migrate the `carts` microservice from its local DynamoDB pod to a cloud-managed table
- Deploy the `catalog` microservice via GitOps using Argo CD, syncing from a pre-provisioned AWS CodeCommit repository
- Use KRO to define the complete `carts` stack (DynamoDB table → ConfigMap → Deployment → Service) as a single `ResourceGroup`, demonstrating topological ordering
- Understand how ACK, Argo CD, and KRO complement each other within a production EKS environment
Provide the flow of the lab exercise, including what sample application components will be used
The fast path uses the retail sample application and focuses on two microservices from the existing base application:
- `carts`: the shopping cart service, which already uses DynamoDB as its persistence provider (confirmed in `manifests/base-application/carts/configMap.yaml`). By default it points to a local DynamoDB pod; the ACK and KRO labs will migrate it to a real AWS DynamoDB table.
- `catalog`: the product catalog service. A clean, self-contained microservice used to demonstrate a full Argo CD GitOps delivery cycle.
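For orientation, the relevant portion of the `carts` ConfigMap might look roughly like the sketch below. The two `RETAIL_CART_PERSISTENCE_*` keys are named in this proposal; the table-name key and the emulator endpoint value are illustrative assumptions, not copied from the actual file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: carts
  namespace: carts
data:
  # Named in this proposal: selects DynamoDB as the persistence backend
  RETAIL_CART_PERSISTENCE_PROVIDER: dynamodb
  # Points at the in-cluster DynamoDB emulator by default;
  # Lab 1 switches this to the real AWS DynamoDB endpoint
  RETAIL_CART_PERSISTENCE_DYNAMODB_ENDPOINT: http://carts-dynamodb:8000
  # Illustrative key/value, not confirmed from the repository
  RETAIL_CART_PERSISTENCE_DYNAMODB_TABLE_NAME: Items
```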
Lab 1: Provision AWS infrastructure with ACK (20 minutes)
What's been set up for you: The `carts` microservice is running with a local DynamoDB pod (`carts-dynamodb`). An IRSA IAM role for the ACK DynamoDB controller is pre-provisioned in `prepare-environment`.
Goal: Use ACK to provision a real AWS DynamoDB table and switch carts from the local pod to the cloud-managed table.
- Install the ACK DynamoDB controller via Helm into the `ack-system` namespace
- Apply an ACK `Table` manifest to create a DynamoDB table in the cluster's AWS account
- Verify table creation via `kubectl get tables` and the AWS Console
- Update the `carts` ConfigMap to point `RETAIL_CART_PERSISTENCE_DYNAMODB_ENDPOINT` at the real AWS DynamoDB endpoint and remove the local pod reference
- Restart the `carts` Deployment and verify it is reading/writing to the ACK-provisioned table
Why DynamoDB? The `carts` service already declares `RETAIL_CART_PERSISTENCE_PROVIDER: dynamodb` in its ConfigMap, so ACK simply upgrades it from a local emulator to a real AWS table with no code changes required. DynamoDB also provisions near-instantly, keeping the lab timeline realistic.
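A minimal sketch of the ACK `Table` manifest the lab could apply. The resource name, table name, and key schema are illustrative placeholders rather than the final lab values; the field layout follows the ACK DynamoDB controller's `Table` CRD:

```yaml
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: carts-items        # illustrative resource name
  namespace: carts
spec:
  tableName: carts-items   # illustrative table name
  billingMode: PAY_PER_REQUEST   # no capacity planning needed for a lab
  attributeDefinitions:
    - attributeName: id
      attributeType: S
  keySchema:
    - attributeName: id
      keyType: HASH
```

Once applied, `kubectl get tables -n carts` shows the resource's sync status while the controller creates the table in the AWS account.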
Lab 2: Continuous delivery with Argo CD (20 minutes)
What's been set up for you: The Argo CD EKS Capability has been enabled on the cluster as part of prepare-environment. A CodeCommit repository is pre-provisioned and seeded with the catalog Kubernetes manifests. An IAM Capability Role granting Argo CD access to the CodeCommit repository is pre-configured.
Goal: Replace the manual kubectl apply workflow for catalog with a fully automated GitOps delivery pipeline using the EKS-managed Argo CD capability.
- Verify the Argo CD capability is active on the cluster
- Access the hosted Argo CD UI via the capability endpoint; authenticate using AWS Identity Center (no port-forwarding or admin password retrieval required)
- Register the CodeCommit repository as a source in Argo CD (the IAM Capability Role handles authentication — no SSH keys or Git credentials to manage)
- Create an Argo CD Application resource pointing to the CodeCommit repository and the catalog manifest path, with automated sync policy enabled
- Trigger an initial sync; verify the `catalog` Deployment and Service are live in the cluster
- Simulate a GitOps update: push a change to the `catalog` image tag in CodeCommit, observe Argo CD auto-sync and rolling update
- Verify the updated `catalog` microservice is running via the application UI
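The `Application` resource created in the steps above might be sketched as follows. The repo URL, region, manifest path, and namespaces are illustrative assumptions (the real repository comes from `prepare-environment`); the spec layout follows Argo CD's `Application` CRD:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: catalog
  namespace: argocd
spec:
  project: default
  source:
    # Illustrative CodeCommit URL; the actual repo is pre-provisioned
    repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-workshop-catalog
    targetRevision: main
    path: manifests/catalog
  destination:
    server: https://kubernetes.default.svc
    namespace: catalog
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band cluster drift
```

With `automated` sync enabled, pushing the image-tag change to CodeCommit is enough to trigger the rolling update the lab observes.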
Lab 3: Orchestrate dependencies with KRO (20 minutes)
What's been set up for you: ACK and Argo CD are installed from the previous labs. The `carts` service is running against the real DynamoDB table.
Goal: Use KRO to define the complete carts stack as a single declarative ResourceGroup, demonstrating how KRO enforces topological ordering across resource dependencies.
- Confirm KRO is installed as an EKS add-on (EKS capability)
- Define a `ResourceGroup` that declares:
  - An ACK DynamoDB `Table` (dependency layer 1)
  - A Kubernetes `ConfigMap` with the table endpoint injected (dependency layer 2)
  - The `carts` `Deployment` consuming the ConfigMap (dependency layer 3)
  - The `carts` `Service` (dependency layer 4)
- Apply the `ResourceGroup` and observe KRO provisioning resources in the correct topological order
- Use `kubectl get resourcegroups` and `kubectl describe` to inspect KRO's sequencing behavior
- Verify the full `carts` stack is running end-to-end
Key concept: KRO's topological ordering guarantees the DynamoDB table exists and its endpoint is available before the `carts` Deployment starts, a critical pattern for stateful workloads.
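An abbreviated sketch of the first two layers of such a `ResourceGroup`. KRO's API is still evolving (newer releases rename this kind to `ResourceGraphDefinition`), so treat the apiVersion and field names as illustrative and check them against the KRO documentation; the key idea is that the `${table...}` reference is what lets KRO infer the dependency ordering:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGroup
metadata:
  name: carts-stack
spec:
  resources:
    - id: table          # layer 1: created first
      template:
        apiVersion: dynamodb.services.k8s.aws/v1alpha1
        kind: Table
        metadata:
          name: carts-items
        spec:
          tableName: carts-items
          billingMode: PAY_PER_REQUEST
          attributeDefinitions:
            - attributeName: id
              attributeType: S
          keySchema:
            - attributeName: id
              keyType: HASH
    - id: config         # layer 2: references the table, so KRO
      template:          # sequences it after the table exists
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: carts
        data:
          RETAIL_CART_PERSISTENCE_DYNAMODB_TABLE_NAME: ${table.spec.tableName}
```

The `carts` Deployment and Service would follow as further entries, each referencing the layer before it.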
What additional AWS infrastructure or EKS addons/components will need to be created or installed to support this lab?
The fast path will reuse the existing shared EKS cluster. The following will be pre-provisioned in `prepare-environment fastpaths/gitops`:
- OIDC provider for the cluster (required for IRSA)
- IRSA IAM role for the ACK DynamoDB controller (with `dynamodb:CreateTable`, `dynamodb:DescribeTable`, `dynamodb:DeleteTable` permissions)
- AWS CodeCommit repository pre-seeded with the `catalog` Kubernetes manifests
- IRSA IAM role for Argo CD to authenticate to CodeCommit (using the Git credential helper pattern)
- Helm chart repos for `ack-dynamodb-controller`, `argo-cd`, and `kro` pre-configured in the environment
Participants will not need to provision any EC2 instances, RDS databases, or additional node groups.
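The ACK controller's IRSA policy listed above could be sketched as the following IAM policy document. The resource ARN pattern is an assumption for illustration; the workshop may scope it differently:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:DescribeTable",
        "dynamodb:DeleteTable"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/carts-*"
    }
  ]
}
```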
What additional software or configuration will be required in the Cloud9 IDE?
The following will be pre-configured or installed during the lab:
- Helm 3 (pre-installed in workshop environment)
- `argocd` CLI (installed during Lab 2)
- AWS CLI configured with cluster region (pre-configured)
- `kubectl` with kubeconfig for the shared cluster (pre-configured)
No additional Cloud9 plugins or IDE configuration required.
Are enhancements to the retail sample application required to support this lab exercise?
No code changes are required. The `carts` microservice already supports DynamoDB as a persistence provider via its ConfigMap. The only change needed is a ConfigMap update (pointing at the real AWS endpoint), which is part of the lab steps, not a prerequisite code change.
The `catalog` manifests need to be exported to the CodeCommit repository; no functional code changes are required.
Console-driven vs. CLI-driven paths
Following the workshop's dual-path pattern, we will provide:
- CLI path (primary): All steps using `kubectl`, `helm`, and the `argocd` CLI
- Console path (secondary callout): Screenshots showing the equivalent Argo CD UI sync, ACK resource visibility in the AWS Console, and the DynamoDB table in the AWS Console