Deploy Dify Enterprise on AWS using CDK.
Resource requirements for a testing deployment:

| Component | Helm Chart Value | Count | vCPU | Memory (GB) | Storage (GB) | Notes |
|---|---|---|---|---|---|---|
| S3 | persistence | 1 | | | | |
| Redis DB | externalRedis | 1 | 2 | 6.38 | | |
| RDS Postgres DB | externalPostgres | 2 | 2 | 8 | | |
| K8S Worker Node | | 1 | 4 | 16 | 100 | |
| OpenSearch | vectorDB | 1 | 2 | 16 | 100 | |
Resource requirements for a production deployment:

| Component | Helm Chart Value | Count | vCPU | Memory (GB) | Storage (GB) | Notes |
|---|---|---|---|---|---|---|
| S3 | persistence | 1 | | | | |
| Redis DB | externalRedis | 1 | 2 | 12.93 | | |
| RDS Postgres DB | externalPostgres | 1 | 4 | 32 | | |
| K8S Worker Node | | 6 | 8 | 32 | 100 | |
| OpenSearch | vectorDB | 2 | 8 | 64 | 100 | |
The whole process is expected to take about 60 minutes.

1. Install and configure the AWS CLI:

    ```shell
    aws configure
    ```

2. Clone the deployment repository:

    ```shell
    git clone https://github.com/langgenius/aws-cdk-for-dify.git
    ```

3. Install the dependencies:

    ```shell
    npm install
    ```

4. Create your environment file:

    ```shell
    cp .env.example .env
    ```

    Modify the environment variable values in the `.env` file:

    - `ENVIRONMENT`: Specifies the deployment environment; must be either `test` or `prod`.
    - `CDK_DEFAULT_REGION`: The AWS region where Dify Enterprise will be deployed.
    - `CDK_DEFAULT_ACCOUNT`: Your AWS account ID.
    - `DEPLOY_VPC_ID`: The ID of an existing VPC for deployment. If not set, CDK will create one for you.
    Note: If using an existing VPC:

    - Make sure you have 2 or more public subnets for the Application Load Balancer, and 2 or more private subnets with internet access (associated with a NAT gateway) so that nodes can pull Docker images from the internet.
    - Add the following tags to the subnets (otherwise Step 6 will log a WARN about auto-tagging having failed, which will cause the Application Load Balancer creation to fail):

      | Subnet Type | Tag key | Tag value |
      |---|---|---|
      | public | kubernetes.io/role/elb | 1 |
      | private | kubernetes.io/role/internal-elb | 1 |
    - Subnets configuration (required when `DEPLOY_VPC_ID` is set; comma-separated without spaces; private subnets are recommended by AWS security best practices):
      - `EKS_CLUSTER_SUBNETS`: Subnet IDs for the EKS control plane. Requires at least 2 subnets in different Availability Zones (AZs).
      - `EKS_NODES_SUBNETS`: Subnet IDs for the EKS worker nodes. Requires at least 2 subnets in different AZs.
      - `REDIS_SUBNETS`: Subnet IDs for Redis deployment.
      - `RDS_SUBNETS`: Subnet IDs for the RDS database. Requires at least 2 subnets in different AZs.
      - `OPENSEARCH_SUBNETS`: Subnet IDs for OpenSearch deployment.
      - `OPENSEARCH_ADMINNAME`: OpenSearch domain master name.
      - `OPENSEARCH_PASSWORD`: OpenSearch domain master password.
    - `AWS_EKS_CHART_REPO_URL`: (For AWS China regions ONLY) The AWS EKS Helm chart repository URL.
    - `RDS_PUBLIC_ACCESSIBLE`: Set to `true` to make RDS publicly accessible (NOT RECOMMENDED).

    Note:

    - If you are using AWS China regions, you must configure `AWS_EKS_CHART_REPO_URL` for proper functionality. Please contact the Dify team for the URL.
    - It is recommended to use an existing VPC for easier resource access.
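As a minimal sketch, a filled-in `.env` for a test deployment with an existing VPC might look like the fragment below. Every value here is a hypothetical placeholder; substitute your own account ID, region, VPC, and subnet IDs:

```shell
# Hypothetical example values -- replace all of them with your own.
ENVIRONMENT=test
CDK_DEFAULT_REGION=us-east-1
CDK_DEFAULT_ACCOUNT=123456789012
DEPLOY_VPC_ID=vpc-0abc1234def567890
EKS_CLUSTER_SUBNETS=subnet-0aaa1111,subnet-0bbb2222
EKS_NODES_SUBNETS=subnet-0aaa1111,subnet-0bbb2222
REDIS_SUBNETS=subnet-0aaa1111,subnet-0bbb2222
RDS_SUBNETS=subnet-0aaa1111,subnet-0bbb2222
OPENSEARCH_SUBNETS=subnet-0aaa1111,subnet-0bbb2222
OPENSEARCH_ADMINNAME=admin
OPENSEARCH_PASSWORD=ChangeMe-123
```

Note the subnet lists are comma-separated with no spaces, as required above.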
5. Initialize the CDK environment:

    ```shell
    npm run init
    ```

6. Deploy Dify Enterprise:

    ```shell
    npm run deploy
    ```
7. Grant your IAM user access to the EKS cluster:

    - Navigate to the EKS Cluster panel, select the "Access" menu, and click "Manage access":

      ![Alt text](images/image-1.png)

    - In the "Manage access" dialog, select "EKS API and ConfigMap," then click "Save Changes."
    - In the IAM Access Entries panel, click "Create access entry":

      ![Alt text](images/image-2.png)

    - Add your IAM user and assign the following permissions:
      - `AmazonEKSAdminPolicy`
      - `AmazonEKSAdminViewPolicy`
      - `AmazonEKSClusterAdminPolicy`
8. Update your kubeconfig to point at the new cluster:

    ```shell
    aws eks update-kubeconfig --region <cn-northwest-1> --name <Dify-Testing-DifyStackTest-EKS>
    ```

    Adjust the `region` and `name` parameters according to your deployment:

    - `region`: The AWS region where your cluster is deployed.
    - `name`: The EKS cluster name.
9. Configure S3 persistence. In the Helm `values.yaml` file, modify the `persistence` section as follows. Replace `{your-region-name}` and `{your-s3-bucket-name}` with the names of the resources created by CDK, and make sure `useAwsManagedIam` is turned on:

    ```yaml
    persistence:
      type: "s3"
      s3:
        endpoint: "https://s3.{your-region-name}.amazonaws.com"
        region: "{your-region-name}"
        bucketName: "{your-s3-bucket-name}"
        useAwsManagedIam: true
    ```
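If you are unsure which bucket CDK created, you can list your buckets and preview the values to paste in. The region and bucket name below are hypothetical stand-ins:

```shell
# List buckets to find the one created by CDK (requires AWS credentials):
#   aws s3 ls
REGION="us-east-1"                # hypothetical region
BUCKET="dify-enterprise-xxxx"     # hypothetical bucket name from the listing
echo "endpoint: https://s3.${REGION}.amazonaws.com"
echo "bucketName: ${BUCKET}"
```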
10. Configure the external Postgres database. In the Helm `values.yaml` file, disable the `postgre` section and modify the `externalPostgres` section as follows. Replace `{your-postgres-endpoint}` and `{your-postgres-password}` with the values stored in AWS Secrets Manager:

    ```yaml
    postgre:
      enabled: false

    externalPostgres:
      enabled: true
      address: "{your-postgres-endpoint}"
      port: 5432
      credentials:
        dify:
          database: "postgres"
          username: "clusteradmin"
          password: "{your-postgres-password}"
          sslmode: "disable"
        enterprise:
          database: "postgres"
          username: "clusteradmin"
          password: "{your-postgres-password}"
          sslmode: "disable"
    ```
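The endpoint and password can also be read from Secrets Manager on the command line. A hedged sketch: the `aws secretsmanager get-secret-value` call requires credentials and the real secret name, so below its JSON output shape is demonstrated on a sample string with hypothetical values:

```shell
# With credentials configured, you would fetch the secret like this:
#   aws secretsmanager get-secret-value --secret-id <your-db-secret-name> \
#     --query SecretString --output text
# The returned SecretString is JSON; a sample with hypothetical values:
SECRET_JSON='{"username":"clusteradmin","password":"example-password","host":"dify-db.cluster-abc.us-east-1.rds.amazonaws.com","port":5432}'
# Extract the fields needed for values.yaml:
echo "$SECRET_JSON" | python3 -c 'import json,sys; s=json.load(sys.stdin); print("address:", s["host"]); print("password:", s["password"])'
```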
11. Configure the external Redis. In the Helm `values.yaml` file, disable the `redis` section and modify the `externalRedis` section as follows. Replace `{your-redis-host}` with the `Primary endpoint` shown in the console for ElastiCache Redis OSS caches.

    Note: remove the port number from the `Primary endpoint`.

    ```yaml
    redis:
      enabled: false

    externalRedis:
      enabled: true
      host: "{your-redis-host}"
      port: 6379
      username: ""
      password: ""
      useSSL: false
    ```
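Since the `Primary endpoint` copied from the ElastiCache console includes a `:6379` port suffix that `externalRedis.host` must not contain, a small sketch of trimming it (the endpoint below is hypothetical):

```shell
# Hypothetical Primary endpoint copied from the ElastiCache console:
PRIMARY_ENDPOINT="master.dify-redis.abc123.use1.cache.amazonaws.com:6379"
# Strip the trailing :port to get the value for externalRedis.host:
REDIS_HOST="${PRIMARY_ENDPOINT%:*}"
echo "$REDIS_HOST"
```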
12. Configure the external vector database. In the Helm `values.yaml` file, modify the `vectorDB` section as follows:

    - Replace `{openSearch_endpoint}` with your AWS OpenSearch instance's Domain endpoint, with the leading `https://` removed.
    - Replace `<OPENSEARCH_ADMINNAME>` and `<OPENSEARCH_PASSWORD>` with the values you set in `.env`.

    ```yaml
    vectorDB:
      useExternal: true
      externalType: "opensearch"
      externalOpenSearch:
        host: "{openSearch_endpoint}"
        port: 443
        user: "<OPENSEARCH_ADMINNAME>"
        password: "<OPENSEARCH_PASSWORD>"
        useTLS: true
    ```
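The OpenSearch console shows the Domain endpoint with an `https://` prefix, which `externalOpenSearch.host` must not include. A sketch of trimming it, using a hypothetical endpoint:

```shell
# Hypothetical Domain endpoint copied from the OpenSearch console:
DOMAIN_ENDPOINT="https://search-dify-abc123.us-east-1.es.amazonaws.com"
# Remove the https:// scheme to get the value for externalOpenSearch.host:
OPENSEARCH_HOST="${DOMAIN_ENDPOINT#https://}"
echo "$OPENSEARCH_HOST"
```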
13. Set the Docker image pull secret before installing Dify Enterprise.

    Note: If you do not have the username and password yet, please contact us by email: [email protected]
14. Request TLS certificates. (If you are just testing the deployment process, you can skip this step.)

    Go to AWS `ACM` to request certificates for the domains you declared in the Helm chart configuration file `values.yaml`:

    ```yaml
    consoleApiDomain: "console.xxxx.com"
    consoleWebDomain: "console.xxxx.com"
    serviceApiDomain: "api.xxxx.com"
    appApiDomain: "app.xxxx.com"
    appWebDomain: "app.xxxx.com"
    filesDomain: "upload.xxxx.com"
    enterpriseDomain: "enterprise.xxxx.com"
    ```

    That is, the following domains:

    ```
    console.xxxx.com
    api.xxxx.com
    app.xxxx.com
    upload.xxxx.com
    enterprise.xxxx.com
    ```

    Go to your domain service provider (e.g. Cloudflare, AWS Route 53, etc.) and set up the `CNAME` records to prove your ownership of these domains. Then change the `global.useTLS` parameter in `values.yaml`:

    ```yaml
    useTLS: true
    ```
15. Configure ingress. It is recommended to use an AWS Application Load Balancer (ALB) for your ingress configuration in the Helm `values.yaml` file. To enable it, modify the `ingress` section as follows.

    For testing:

    ```yaml
    ingress:
      enabled: true
      className: "alb"
      annotations: {
        # Existing annotations ...
        # Add the following annotations
        alb.ingress.kubernetes.io/target-type: "ip",
        alb.ingress.kubernetes.io/scheme: "internet-facing",
      }
    ```

    For production (refer to: AWS Load Balancer Controller setup):

    ```yaml
    ingress:
      enabled: true
      className: "alb"
      annotations: {
        # set file upload size limit
        alb.ingress.kubernetes.io/target-type: "ip",
        alb.ingress.kubernetes.io/scheme: "internet-facing",
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      }
    ```
16. Install Dify Enterprise using Helm:

    ```shell
    helm repo add dify https://langgenius.github.io/dify-helm
    helm repo update
    helm upgrade -i dify -f values.yaml dify/dify
    ```
17. Set up DNS.

    For testing: after setting up, get the temporary external IP of the ALB:

    ```shell
    ping {aws_alb_dns}
    ```

    Then add the following lines to your `/etc/hosts` file:

    ```
    4.152.1.216 console.dify.local
    4.152.1.216 app.dify.local
    4.152.1.216 api.dify.local
    4.152.1.216 upload.dify.local
    4.152.1.216 enterprise.dify.local
    ```

    For production: go to your domain service provider and point DNS at the ALB address (go to AWS EC2 > Load Balancers to get the DNS name of the ALB):

    | Domain | CNAME |
    |---|---|
    | api.xxxx.com | <alb_dns> |
    | app.xxxx.com | <alb_dns> |
    | upload.xxxx.com | <alb_dns> |
    | enterprise.xxxx.com | <alb_dns> |
    | console.xxxx.com | <alb_dns> |
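The `/etc/hosts` lines can be generated rather than typed by hand. A sketch assuming you already resolved the ALB's current IP with `ping {aws_alb_dns}` (the IP below is the example from this guide and will differ for you; ALB IPs also rotate, so these entries are temporary):

```shell
# IP obtained from pinging the ALB DNS name; replace with your own.
ALB_IP="4.152.1.216"
# Emit one hosts entry per Dify domain:
for host in console app api upload enterprise; do
  echo "$ALB_IP $host.dify.local"
done
```

Append the output to `/etc/hosts` (e.g. with `sudo tee -a /etc/hosts`) and remove it once real DNS is in place.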
18. Log in.

    Warning: you must set up the Dify Console at `http://console.dify.local` before logging in to the Dify Enterprise Dashboard at `http://enterprise.dify.local`.

    - Dify Console: visit `http://console.dify.local` and finish the installation.
    - Enterprise Dashboard: visit `http://enterprise.dify.local` and log in with the default email and password (you can change the password after logging in):

      ```
      email: [email protected]
      password: difyai123456
      ```
Destroying the deployment removes the environment specified under `ENVIRONMENT` in the `.env` file. The process may take about 20 minutes:

```shell
npm run destroy
```

Note: log in to the AWS console and review the VPCs to make sure the VPC created by CDK has been destroyed. If it has not, check CloudFormation and perform the deletion operation there.
To customize deployment configurations, modify the `test.ts` file for the testing environment or the `prod.ts` file for the production environment.
