First, read the original `README.md`.
This guide explains how to integrate BigConfig into a Terraform project. The repository commits follow the format `step #: [description of the change]`. To understand the integration process, follow the steps sequentially, starting from Step 1.
- Babashka 1.12.210 or above (available via asdf or brew; note that the latest Nix version is currently 1.12.209).
Initial preparation.
Create a BigConfig project within the `.big-config` directory using the Terraform template:

```shell
clojure -Tbig-config terraform :target-dir .big-config
```

The current Terraform template contains a minor bug: the `resources/alpha/` directory is not created automatically. Even when using functions to generate configuration files, this directory is required.
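As a workaround for the missing-directory bug, the folder can be created by hand (the path assumes the `:target-dir` used above):

```shell
# Workaround for the template bug: create resources/alpha manually.
mkdir -p .big-config/resources/alpha
```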
- Add `rama-cluster/single/main.tf` to the list of configuration files inside `resources/alpha`. For now, this file is copied verbatim; it will be functionalized in a later step.
- Remove the lock properties for now.
- Remove `data-fn` and `kw->content` temporarily, as `main.tf` is currently being copied verbatim.
- Add a proxy `bb.edn` to allow invoking `.big-config/bb.edn` from the project root.
- Configure the transform step to act as a verbatim copy of the `root` folder.
- Update the `target-dir` to be the parent directory of `.big-config`.
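A minimal sketch of what the proxy `bb.edn` might look like — the task name and forwarding mechanism are assumptions, not the template's actual content:

```clojure
;; Hypothetical proxy bb.edn at the project root. It forwards a `render`
;; invocation (with all its arguments) to the bb.edn inside .big-config.
{:tasks
 {render {:doc "Proxy to the render task in .big-config"
          :task (apply shell {:dir ".big-config"}
                       "bb" "render" *command-line-args*)}}}
```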
```shell
# From:
bin/rama-cluster.sh plan --singleNode cesar-ford

# To:
bb render exec -- alpha prod bin/rama-cluster.sh plan --singleNode cesar-ford

# Using an alias to achieve a zero-cost build step
# (https://bigconfig.it/start-here/getting-started/#zero-cost-build-step)
alias rama-cluster="bb render exec -- alpha prod bin/rama-cluster.sh"
```

Convert the `main.tf` file into `single.clj`:

```shell
cat rama-cluster/single/main.tf | hcl2json | jet --from json --to edn --pretty --keywordize > .big-config/src/single.clj
```

Once converted, delete the original `main.tf` and generate `main.tf.json` programmatically.
Note: The conversion from HCL to EDN is not infallible. Terraform may report syntax errors, which will be addressed in Step 9. Once `main.tf.json` is fully functional, you can perform refactoring without needing to run Terraform repeatedly.
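For illustration (a hypothetical resource, not one from this repository), the pipeline turns an HCL block into EDN roughly like this — `hcl2json` wraps blocks in arrays and keeps expressions as `${...}` strings:

```clojure
;; HCL input:
;;   resource "aws_instance" "rama" {
;;     ami           = var.ami_id
;;     instance_type = var.instance_type
;;   }
;;
;; EDN output after hcl2json + jet:
{:resource
 {:aws_instance
  {:rama [{:ami           "${var.ami_id}"
           :instance_type "${var.instance_type}"}]}}}
```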
Resolve the syntax errors introduced during the HCL-to-EDN conversion. Once fixed, the build will be "green" (successful) again.
Add Malli to validate `RamaOpts`, eventually replacing `rama.tfvars` and `auth.tfvars`.
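A sketch of what such a Malli schema could look like — the keys shown are illustrative guesses based on the tfvars used later in this guide, not the repository's actual `RamaOpts`:

```clojure
(require '[malli.core :as m])

;; Hypothetical schema: keys mirror a few of the tfvars seen below.
(def RamaOpts
  [:map
   [:region :string]
   [:instance_type :string]
   [:volume_size_gb pos-int?]
   [:use_private_ip :boolean]])

(m/validate RamaOpts {:region         "eu-west-1"
                      :instance_type  "m6g.medium"
                      :volume_size_gb 100
                      :use_private_ip true})
;; => true
```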
`rama.tfvars` and `auth.tfvars` have been merged and replaced by the JSON version. `rama-cluster.sh` has been updated to work with `rama.tfvars.json` instead of `rama.tfvars`.
Note: The conversion from HCL to EDN is not infallible.
A bug was identified in the `provisioner` block where `file` and `remote-exec` actions were being grouped together, losing their intended order. Changing the `provisioner` block from a map to a vector enables us to preserve the correct order of operations.
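The fix can be pictured like this (the provisioner contents are illustrative, not the repository's actual values):

```clojure
;; Before — a map loses the relative order of the provisioner actions:
{:provisioner {:file        {:source "start.sh" :destination "/tmp/start.sh"}
               :remote-exec {:inline ["bash /tmp/start.sh"]}}}

;; After — a vector of single-entry maps preserves execution order:
{:provisioner [{:file        {:source "start.sh" :destination "/tmp/start.sh"}}
               {:remote-exec {:inline ["bash /tmp/start.sh"]}}]}
```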
First Terraform `templatefile` replaced with Selmer templates (`setup-disk.sh`).
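A sketch of what a Selmer version of `setup-disk.sh` might contain — the variable names (`device`, `mount_point`) are hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical Selmer template: {{device}} and {{mount_point}} are
# substituted by BigConfig at render time, before Terraform runs.
mkfs -t xfs {{device}}
mkdir -p {{mount_point}}
mount {{device}} {{mount_point}}
```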
Create a BigConfig project within the `.multi` directory using the Multi template:

```shell
clojure -Tbig-config multi :target-dir .multi
```

We want to replace the Terraform provisioner and cloud-init with an Ansible step. The Ansible template is also a good starting point.

```shell
bb render-ansible exec -- gamma prod ansible-playbook main.yml
```

Using a cheaper instance for development. `start.sh` has been moved to Ansible. The Ansible inventory is populated from `.rama/[cluster name]/outputs.json`. The cluster name is hard-coded for now.
```clojure
(require '[cheshire.core :as json]) ;; JSON parsing; cheshire ships with Babashka

(defn data-fn
  [{:keys [profile] :as data} _]
  (let [file-path "/Users/amiorin/.rama/cesar-ford/outputs.json"
        rama-ip (-> (json/parse-string (slurp file-path) true)
                    :rama_ip
                    :value)]
    (merge data
           {:rama-ip rama-ip
            :region "eu-west-1"
            :aws-account-id (case profile
                              "dev" "111111111111"
                              "prod" "222222222222")})))
```

Moving the provisioning from Terraform to Ansible requires sharing the cluster name and the EC2 instance's IP address with the Ansible workflow. The cluster name is provided via `bin/rama-cluster.sh`. We parse the command-line invocation to extract the cluster name, integrating the Terraform workflow with the Ansible one.
```shell
# From:
bin/rama-cluster.sh deploy --singleNode cesar-ford

# To:
bb cluster deploy --singleNode cesar-ford
```

Refactor the code into a workflow that combines Terraform and Ansible.
Ansible is now fully integrated with Terraform. The system parses `bb cluster deploy --singleNode cesar-ford` to retrieve the cluster name and `~/.rama/[cluster-name]/outputs.json` to retrieve the IP address. The Ansible inventory is created dynamically, and Ansible is invoked only if the action is `deploy` and Terraform succeeds.
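The dynamically created inventory might look like this — the group name is an assumption; the IP and user match the values used elsewhere in this guide:

```ini
# Hypothetical generated Ansible inventory for cluster cesar-ford
[rama]
172.31.43.17 ansible_user=ec2-user
```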
The system now provides a custom BigConfig DSL to define linear workflows in the shell.
```shell
# Syntax:
bb cluster <action>+ --singleNode cesar-ford [terraform-args] -- [ansible-args]
```

Available actions: `plan`, `deploy`, `destroy`, and `ansible`. This separation allows you to split hardware provisioning (Terraform) from software configuration (Ansible). During development, you can run the `ansible` action in isolation; during integration, you can rebuild the entire cluster by combining `destroy`, `deploy`, and `ansible`.
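For example (assuming the syntax above):

```shell
# Development: run only the software-configuration step
bb cluster ansible --singleNode cesar-ford

# Integration: rebuild the cluster end to end
bb cluster destroy deploy ansible --singleNode cesar-ford
```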
Add the `ssh` subcommand:

```shell
bb ssh cesar-ford
```

Follow these steps to configure AWS, Tailscale, SSH Agent, and Caddy for use with Rama.
- Create an AWS user with AdministratorAccess.
- Enable MFA.
- Generate an Access Key and save it to `~/.aws/credentials`.
- Provision a `t4g.nano` instance.
- Open the following ports in the Security Group (0.0.0.0/0): UDP:41641, ICMP, and TCP:22.
- Install Tailscale on the instance.
- Enable IP forwarding.
- Advertise routes:

  ```shell
  sudo tailscale set --advertise-routes=172.31.0.0/20,172.31.32.0/20,172.31.48.0/20,172.31.16.0/20
  ```

- Accept the subnets in the Tailscale admin console.
- Disable "Source/Destination Check" for the t4g.nano instance in AWS.
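On Amazon Linux, "enable IP forwarding" typically means persisting these sysctl settings (the commands follow Tailscale's subnet-router documentation):

```shell
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```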
```hcl
# Required Variables
region                 = "us-west-2"
username               = "ec2-user"
vpc_security_group_ids = ["sg-0e93b1629988a79fd"]

# Manual Setup Required:
# Ensure this directory exists and contains the zip file.
rama_source_path = "/Users/amiorin/.rama/cache/rama-1.4.0.zip"
zookeeper_url    = "https://dlcdn.apache.org/zookeeper/zookeeper-3.8.5/apache-zookeeper-3.8.5-bin.tar.gz"

# Amazon Linux 2023 (ARM)
ami_id        = "ami-0e723566181f273cd"
instance_type = "m6g.medium"

# Optional Variables
license_source_path = "" # Must be an empty string if not used
volume_size_gb      = 100
use_private_ip      = true
# private_ssh_key = "" # Set to null if not using a specific key
```

With the IP address 172.31.43.17 and cluster name cesar-ford, you can deploy the monitoring suite and access it via Caddy. Ensure `~/.rama` is in your system `PATH`.
```shell
rama-cesar-ford deploy --action launch --systemModule monitoring --tasks 4 --threads 2 --workers 1
caddy reverse-proxy --to http://172.31.43.17:8888
open -a "Google Chrome" https://localhost
```