
Adding EKS Capabilities - ACK, Argo CD, KRO #1781

@dshamanthreddy

Description


Outline

This proposal describes a new EKS Capabilities Learning Path section for the EKS Workshop, targeting a platform engineer / DevOps persona. The fast path showcases three complementary EKS capabilities in a sequential, app-centric narrative built on the retail sample application: AWS Controllers for Kubernetes (ACK), Argo CD, and the Kube Resource Orchestrator (KRO).

The three labs build on each other to tell a coherent story: provision real AWS infrastructure from Kubernetes (ACK), deliver application changes automatically via GitOps (Argo CD), then orchestrate the complete multi-resource stack declaratively (KRO).

Estimated time: 60 minutes

Key Learning Outcomes:

  • Use ACK to provision a real AWS DynamoDB table and migrate the carts microservice from its local DynamoDB pod to a cloud-managed table
  • Deploy the catalog microservice via GitOps using Argo CD, syncing from a pre-provisioned AWS CodeCommit repository
  • Use KRO to define the complete carts stack (DynamoDB table → ConfigMap → Deployment → Service) as a single ResourceGroup, demonstrating topological ordering
  • Understand how ACK, Argo CD, and KRO complement each other within a production EKS environment

Provide the flow of the lab exercise, including what sample application components will be used

The fast path uses the retail sample application and focuses on two microservices from the existing base application:

  • carts — the shopping cart service, which already uses DynamoDB as its persistence provider (confirmed in manifests/base-application/carts/configMap.yaml). By default it points to a local DynamoDB pod; the ACK and KRO labs migrate it to a real AWS DynamoDB table.
  • catalog — the product catalog service. A clean, self-contained microservice used to demonstrate a full Argo CD GitOps delivery cycle.

Lab 1: Provision AWS infrastructure with ACK (20 minutes)

What's been set up for you: The carts microservice is running with a local DynamoDB pod (carts-dynamodb). An IRSA IAM role for the ACK DynamoDB controller is pre-provisioned in prepare-environment.

Goal: Use ACK to provision a real AWS DynamoDB table and switch carts from the local pod to the cloud-managed table.

  • Install the ACK DynamoDB controller via Helm into the ack-system namespace
  • Apply an ACK Table manifest to create a DynamoDB table in the cluster's AWS account (see the sketch after this list)
  • Verify table creation via kubectl get tables and the AWS Console
  • Update the carts ConfigMap to point RETAIL_CART_PERSISTENCE_DYNAMODB_ENDPOINT at the real AWS DynamoDB endpoint and remove the local pod reference
  • Restart the carts Deployment and verify it is reading/writing to the ACK-provisioned table
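
For illustration, a minimal sketch of the Table manifest for the apply step above. It assumes the ACK DynamoDB controller's dynamodb.services.k8s.aws/v1alpha1 API; the resource name, table name, and key schema are placeholders, not the workshop's final values:

```yaml
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: carts-items               # hypothetical resource name
  namespace: carts
spec:
  tableName: eks-workshop-carts   # placeholder table name
  billingMode: PAY_PER_REQUEST    # on-demand capacity; nothing to size for a lab
  attributeDefinitions:
    - attributeName: id
      attributeType: S
  keySchema:
    - attributeName: id
      keyType: HASH
```

Once the controller reconciles the resource, `kubectl get tables -n carts` should list it, and the table appears in the AWS Console as in the verification step.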

Why DynamoDB? The carts service already declares RETAIL_CART_PERSISTENCE_PROVIDER: dynamodb in its ConfigMap — ACK simply upgrades it from a local emulator to a real AWS table with no code changes required. DynamoDB also provisions near-instantly, keeping the lab timeline realistic.


Lab 2: Continuous delivery with Argo CD (20 minutes)

What's been set up for you: The Argo CD EKS Capability has been enabled on the cluster as part of prepare-environment. A CodeCommit repository is pre-provisioned and seeded with the catalog Kubernetes manifests. An IAM Capability Role granting Argo CD access to the CodeCommit repository is pre-configured.

Goal: Replace the manual kubectl apply workflow for catalog with a fully automated GitOps delivery pipeline using the EKS-managed Argo CD capability.

  • Verify the Argo CD capability is active on the cluster
  • Access the hosted Argo CD UI via the capability endpoint; authenticate using AWS Identity Center (no port-forwarding or admin password retrieval required)
  • Register the CodeCommit repository as a source in Argo CD (the IAM Capability Role handles authentication — no SSH keys or Git credentials to manage)
  • Create an Argo CD Application resource pointing to the CodeCommit repository and the catalog manifest path, with automated sync policy enabled (see the sketch after this list)
  • Trigger an initial sync; verify the catalog Deployment and Service are live in the cluster
  • Simulate a GitOps update: push a change to the catalog image tag in CodeCommit, observe Argo CD auto-sync and rolling update
  • Verify the updated catalog microservice is running via the application UI
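
As a sketch of the Application resource from the step above, assuming Argo CD's standard argoproj.io/v1alpha1 API; the repoURL, path, and namespaces are illustrative placeholders for the pre-provisioned environment (the managed capability may run Argo CD in its own namespace):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: catalog
  namespace: argocd   # assumption; the managed capability may use a different namespace
spec:
  project: default
  source:
    # Placeholder; the lab environment exports the real CodeCommit clone URL
    repoURL: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/eks-workshop-catalog
    path: catalog
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: catalog
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert out-of-band cluster changes
```

With automated sync enabled, the image-tag push in the later step rolls out without any manual sync action.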

Lab 3: Orchestrate dependencies with KRO (20 minutes)

What's been set up for you: ACK and Argo CD are installed from the previous labs. The carts service is running against the real DynamoDB table.

Goal: Use KRO to define the complete carts stack as a single declarative ResourceGroup, demonstrating how KRO enforces topological ordering across resource dependencies.

  • Verify the KRO EKS capability (delivered as an EKS add-on) is enabled on the cluster
  • Define a ResourceGroup (see the sketch below) that declares:
    • An ACK DynamoDB Table (dependency layer 1)
    • A Kubernetes ConfigMap with the table endpoint injected (dependency layer 2)
    • The carts Deployment consuming the ConfigMap (dependency layer 3)
    • The carts Service (dependency layer 4)
  • Apply the ResourceGroup and observe KRO provisioning resources in the correct topological order
  • Use kubectl get resourcegroups and kubectl describe to inspect KRO's sequencing behavior
  • Verify the full carts stack is running end-to-end

Key concept: KRO's topological ordering guarantees the DynamoDB table exists and its endpoint is available before the carts Deployment starts — a critical pattern for stateful workloads.
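
To make that concrete, a trimmed sketch of what the ResourceGroup could look like. It assumes KRO's kro.run/v1alpha1 API (newer KRO releases rename this kind to ResourceGraphDefinition) and KRO's ${...} reference syntax; all names and schema fields are placeholders:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGroup
metadata:
  name: carts-stack
spec:
  schema:
    apiVersion: v1alpha1
    kind: CartsStack
    spec:
      name: string                 # instance name, referenced as ${schema.spec.name}
  resources:
    - id: table                    # layer 1: ACK-managed DynamoDB table
      template:
        apiVersion: dynamodb.services.k8s.aws/v1alpha1
        kind: Table
        metadata:
          name: ${schema.spec.name}-table
        spec:
          tableName: ${schema.spec.name}
          billingMode: PAY_PER_REQUEST
          attributeDefinitions:
            - attributeName: id
              attributeType: S
          keySchema:
            - attributeName: id
              keyType: HASH
    - id: config                   # layer 2: the ${table...} reference is the edge KRO orders on
      template:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: ${schema.spec.name}-config
        data:
          RETAIL_CART_PERSISTENCE_PROVIDER: dynamodb
          RETAIL_CART_PERSISTENCE_DYNAMODB_TABLE_NAME: ${table.spec.tableName}
    # layers 3 and 4 (the carts Deployment and Service) follow the same pattern,
    # with the Deployment consuming ${config.metadata.name}
```

KRO derives the dependency graph from these references rather than from explicit depends-on fields, which is where the topological ordering described above comes from.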


What additional AWS infrastructure or EKS addons/components will need to be created or installed to support this lab?

The fast path will reuse the existing shared EKS cluster. The following will be pre-provisioned in prepare-environment fastpaths/gitops:

  • OIDC provider for the cluster (required for IRSA)
  • IRSA IAM role for the ACK DynamoDB controller (with dynamodb:CreateTable, dynamodb:DescribeTable, dynamodb:DeleteTable permissions); the Helm-side wiring is sketched below
  • AWS CodeCommit repository pre-seeded with the catalog Kubernetes manifests
  • IAM role for Argo CD to authenticate to CodeCommit using the Git credential helper pattern (the Capability Role referenced in Lab 2)
  • Helm chart repos for ack-dynamodb-controller, argo-cd, and kro pre-configured in the environment

Participants will not need to provision any EC2 instances, RDS databases, or additional node groups.
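
To show how the pre-provisioned IRSA role attaches to the ACK controller, a hedged sketch of Helm values the Lab 1 install could pass. The aws.region and serviceAccount.annotations value names follow the ACK charts' usual layout but should be treated as assumptions, and the role ARN and region are placeholders:

```yaml
# values.yaml for the ack-dynamodb-controller chart (value names assumed)
aws:
  region: us-east-1   # placeholder; the environment exports the real region
serviceAccount:
  annotations:
    # Binds the controller's service account to the pre-provisioned IRSA role
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/ack-dynamodb-controller
```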


What additional software or configuration will be required in the Cloud9 IDE?

The following will be pre-configured or installed during the lab:

  • Helm 3 (pre-installed in workshop environment)
  • argocd CLI (installed during Lab 2)
  • AWS CLI configured with cluster region (pre-configured)
  • kubectl with kubeconfig for the shared cluster (pre-configured)

No additional Cloud9 plugins or IDE configuration are required.


Are enhancements to the retail sample application required to support this lab exercise?

No code changes are required. The carts microservice already supports DynamoDB as a persistence provider via its ConfigMap. The only change needed is a ConfigMap update pointing at the real AWS endpoint (sketched below); this is part of the lab steps, not a prerequisite code change.
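
For reference, the shape of that ConfigMap delta, with placeholder values; keys other than RETAIL_CART_PERSISTENCE_PROVIDER and RETAIL_CART_PERSISTENCE_DYNAMODB_ENDPOINT (both named in this proposal) are assumptions based on the carts service's naming pattern:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: carts
  namespace: carts
data:
  RETAIL_CART_PERSISTENCE_PROVIDER: dynamodb
  # Before: the local carts-dynamodb emulator pod (port assumed)
  # RETAIL_CART_PERSISTENCE_DYNAMODB_ENDPOINT: http://carts-dynamodb:8000
  # After: the regional AWS endpoint (region is a placeholder)
  RETAIL_CART_PERSISTENCE_DYNAMODB_ENDPOINT: https://dynamodb.us-east-1.amazonaws.com
  RETAIL_CART_PERSISTENCE_DYNAMODB_TABLE_NAME: eks-workshop-carts   # assumed key; the ACK-provisioned table
```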

The catalog manifests need to be exported to the CodeCommit repository — no functional code changes required.


Console-driven vs. CLI-driven paths

Following the workshop's dual-path pattern, we will provide:

  • CLI path (primary): All steps using kubectl, helm, and argocd CLI
  • Console path (secondary callout): Screenshots showing the equivalent Argo CD UI sync, ACK resource visibility in the AWS Console, and the DynamoDB table in the AWS Console
