# Terragrunt Deployment Project
Use these instructions to deploy the cloud resources to AWS and MongoDB from your local machine. Ensure you follow the
pre-requisites before proceeding.
The section at the end of this README describes how to automate the deployment using GitHub Actions. I recommend
deploying from a local machine first before automating the deployment.
This project is configured to deploy resources in the AWS eu-west-1 region.
See the Terraform modules provided for more detail on the resources deployed.
The resources deployed by this project are:
- Networking resources: VPC, security groups, public and private subnets, internet gateway and NAT gateway
- DynamoDB table: for storing communication history data
- MongoDB cluster: for storing gateways
- Aurora Postgres Serverless V2: the database used by Temporal, plus its security groups and subnets
- EKS cluster: node groups, autoscaling groups and security groups for creating the Kubernetes cluster
Read more about Terragrunt here.
See this repository for another example.
In summary:
- Terragrunt uses the Terraform modules provided to deploy the cloud resources to different AWS accounts, regions and environments.
- The `_envcommon` directory contains the common module configuration for all environments.
- Which AWS account, region and environment resources are deployed to is determined by the folder structure in this directory. It uses the hierarchy: environment -> account -> region -> modules.
  - The first folder is the account configuration; some users might have separate AWS accounts to separate environments. It contains the `account.hcl` file specifying the account id and the credentials profile to use.
  - The next folder is the region within the account to deploy the resources to, e.g. eu-west-1, us-east-1. It contains the `region.hcl` file specifying the region to deploy to.
  - The last folder contains all the modules to deploy and specifies which development environment the modules belong to in the `env.hcl` file.
- Each module directory contains a `terragrunt.hcl` file that specifies the source of the module and any configuration variables to pass to the module that are not included in the common configuration (see the example after this list).
- If you want to deploy to a new region within the dev account, simply create a new folder with the new region name and the `region.hcl` file.
  - Within the new region directory, create an `env.hcl` file and create a "modules" directory. Then copy the modules you want to deploy to that region and configure any input variables.
- Terragrunt automatically works out the dependencies between modules and deploys them in the correct order.
Note: It is not possible to run a plan command before deploying all resources (the modules depend on each other's outputs, which do not exist yet). You could run a plan command and deploy each module individually if you want to see the changes before deploying.
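To make the layout concrete, here is a hedged sketch of what a module-level `terragrunt.hcl` could look like. The file path, include labels, `_envcommon` file name and the `cidr_block` input are illustrative assumptions; the actual files in this repository may differ.

```hcl
# dev/eu-west-1/modules/networking/terragrunt.hcl (illustrative sketch, not the actual file)

# Pull in the root configuration (remote state, provider generation, default tags).
include "root" {
  path = find_in_parent_folders()
}

# Pull in the common networking configuration shared by all environments.
include "envcommon" {
  path = "${dirname(find_in_parent_folders())}/_envcommon/networking.hcl"
}

# Inputs specific to this account and region that are not part of the common configuration.
inputs = {
  cidr_block = "172.32.0.0/16" # matches the default listed in the configuration table below
}
```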
## Pre-requisites

- Terraform CLI installed.
- Terragrunt CLI installed.
- An AWS account with the necessary permissions to create resources.
  - See this guide on how to get your credentials and store them in the correct format.
- AWS CLI installed and configured with the necessary credentials.
  - Ensure your public and private keys are stored in the `~/.aws/credentials` file under a profile called `[saml]` for the Terragrunt project to access them.
- Remote state storage: an S3 bucket and DynamoDB table to store the Terraform state (recommended).
  - Deploy these using the tf-states module provided, or create them manually using the following steps. This stores your Terraform state files in an S3 bucket and uses a DynamoDB table to lock the state files to prevent developers from making concurrent changes.
    - Create a new S3 bucket called `terraform-state-<account_name>-<account_id>-<aws_region>`, replacing the name, id and region with the values configured in the account and region hcl files.
    - Create a new DynamoDB table called `terraform-locks`.
  - To use existing buckets or tables with different names, or to use a local state configuration, update the remote state configuration block in the terragrunt.hcl file (a sketch follows below).
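For reference, the remote state configuration described above typically takes a shape like the sketch below. This is an illustrative example, assuming the block lives in the top-level `terragrunt.hcl` and that `account_name`, `account_id` and `aws_region` locals are read from the `account.hcl` and `region.hcl` files; the actual block in this repository may differ.

```hcl
# Illustrative remote_state block (assumed to live in the top-level terragrunt.hcl).
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    # Bucket name follows the convention described above; the locals are assumed
    # to be loaded from the account.hcl and region.hcl files.
    bucket         = "terraform-state-${local.account_name}-${local.account_id}-${local.aws_region}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = local.aws_region
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```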
- A MongoDB Atlas account with the necessary permissions to create a cluster.
  - Create public and private keys within an organisation using these instructions. You may need to create an organisation if you do not have one.
  - Make sure the keys generated have the `Organization Member` role.
  - Note down the public and private keys and see the configuration section below on how to use them.
- Set the MongoDB Atlas and AWS credentials as described in the configuration section below.
- Set the Temporal database username and password using the configuration section below.
- Set the account number in the `account.hcl` file in the `dev` directory to the AWS account number you are deploying to (see the sketch below).
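For reference, the `account.hcl` mentioned above might look like the following sketch. The field names follow the configuration table further down; the use of a `locals` block and the placeholder value are assumptions.

```hcl
# dev/account.hcl (illustrative sketch; replace the placeholder with your AWS account number)
locals {
  account_name   = "dev"
  aws_account_id = "<your_aws_account_id>"
  aws_profile    = "saml"
}
```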
## Configuration

### MongoDB Atlas credentials

There are two ways to set the MongoDB Atlas credentials: a variable file or environment variables.

- Using a variable file:
  - In the `/dev/eu-west-1/modules/gateway-db/` directory, create a `terraform.tfvars` file.
  - Add the following content to the file, replacing the placeholders with the actual values retrieved from the pre-requisites section:

    ```hcl
    mongo_private_key = "<private_key>"
    mongo_public_key  = "<public_key>"
    ```

- Using environment variables (use these for CI/CD pipelines):
  - Set the following environment variables in your terminal:

    ```sh
    export TF_VAR_mongo_private_key="<private_key>"
    export TF_VAR_mongo_public_key="<public_key>"
    ```
### AWS credentials

An AWS credential file should be used for local deployments and environment variables for CI/CD pipelines.

- Using a credential file:
  - Your AWS credentials should be stored in the `~/.aws/credentials` file under a profile called `[saml]`. Ensure you have the correct credentials stored in this file. See this guide for help.
- Using environment variables (use these for CI/CD pipelines):
  - Set the following environment variables in your terminal, replacing the placeholders with the actual values retrieved from the pre-requisites section:

    ```sh
    export AWS_ACCESS_KEY_ID="<access_key>"
    export AWS_SECRET_ACCESS_KEY="<secret_key>"
    ```
### Temporal database credentials

The Temporal database credentials can be set using a variable file or environment variables for CI/CD pipelines. You can pick any username and password you want.

- Using a variable file:
  - In the `/dev/eu-west-1/modules/temporal-db/` directory, create a `terraform.tfvars` file.
  - Add the following content to the file, replacing the placeholders with the values you have chosen:

    ```hcl
    temporal_db_username = "<username>"
    temporal_db_password = "<password>"
    ```

- Using environment variables (use these for CI/CD pipelines):
  - Set the following environment variables in your terminal:

    ```sh
    export TF_VAR_temporal_db_username="<username>"
    export TF_VAR_temporal_db_password="<password>"
    ```
### Module configuration

This section describes the optional configurations you can set for each module. The `_envcommon` directory contains the common configuration for all cloud accounts and environments. Further configurations can be set in the Terraform modules provided.
| File location | Parameter Name | Description | Default Value |
|---|---|---|---|
| dev/account.hcl | account_name | The name of the account, matching the folder | dev |
| dev/account.hcl | aws_account_id | The account id of the AWS account you want to deploy the resources to | 326610803524 |
| dev/account.hcl | aws_profile | The AWS profile to use (should be set to saml) | saml |
| _envcommon/gateway-db.hcl | mongo_db_project_name | The name of the project to create in MongoDB | CSP |
| default-tags.hcl | default-tags | A JSON object containing tags to apply to all cloud resources as key-value pairs. The RepoURL is overridden automatically by modules to point to the URL of the GitHub repository, for referencing which module created the resource. | "ManagedBy": "Terraform", "RepoURL": "Undefined" |
| dev/eu-west-1/modules/eks/terragrunt.hcl | kms_key_administrators | An array containing the IAM ARNs of roles or users that should have access to the EKS cluster | arn:.../...AdministratorAccess |
| | on_demand_nodes | The configuration for the on-demand nodes used for critical resources, such as Temporal, that must run 24/7. Specify the architecture, instance type, minimum number of nodes, maximum number of nodes and desired size. | t4g.large, min_size 1, max_size 5, desired_size 1 |
| | spot_nodes | The configuration for the spot nodes used for resources other than Temporal, to save costs. Specify the architecture, instance type, minimum number of nodes, maximum number of nodes and desired size. | t4g.large, min_size 2, max_size 5, desired_size 2 |
| dev/eu-west-1/modules/history-db/terragrunt.hcl | billing_mode | How the history table should be billed; see here for more detail | PAY_PER_REQUEST |
| dev/eu-west-1/modules/networking/terragrunt.hcl | cidr_block | The CIDR block to allocate to the VPC | 172.32.0.0/16 |
| | private and public subnets | The IP address ranges to assign to each public and private subnet within the VPC | 172.32.1.0/24, 172.32.2.0/24, 172.32.3.0/24, 172.32.4.0/24 |
| dev/eu-west-1/modules/temporal-db/terragrunt.hcl | sg_eks_db_cidr | Configures the security group to allow the specified CIDR block or IP addresses within the VPC to access the Temporal database | 172.32.0.0/16 |
| | engine_version | The Postgres database version to deploy. It must be compatible with Temporal's advanced visibility and the AWS Postgres Serverless supported versions. | 13.12 |
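As a hedged illustration of how these parameters are set, the EKS values from the table might appear in the module's `terragrunt.hcl` roughly as follows; the exact input names and object structure depend on the underlying Terraform module and may differ.

```hcl
# dev/eu-west-1/modules/eks/terragrunt.hcl (illustrative excerpt only)
inputs = {
  # IAM principals that should be able to administer the EKS cluster (placeholder ARN).
  kms_key_administrators = ["arn:aws:iam::<account_id>:role/<admin_role>"]

  # On-demand nodes for workloads that must run 24/7, such as Temporal.
  on_demand_nodes = {
    instance_type = "t4g.large"
    min_size      = 1
    max_size      = 5
    desired_size  = 1
  }

  # Spot nodes for everything else, to save costs.
  spot_nodes = {
    instance_type = "t4g.large"
    min_size      = 2
    max_size      = 5
    desired_size  = 2
  }
}
```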
## Deployment

This section describes how to plan, apply and destroy the cloud resources using Terragrunt.
Ensure you have completed the pre-requisites and configuration steps above before proceeding.
### Deploying all modules

- Clone this repository to your local machine.
- Complete the pre-requisites and configuration steps above.
- Navigate to the `deployment/terragrunt/dev/eu-west-1/modules` directory.
- Run `terragrunt run-all apply` to deploy all the modules. Terragrunt will automatically deploy the modules in the correct order.
  - Once you have deployed the resources, you can use `terragrunt run-all plan` to see the changes before applying them next time if needed.
- Type `yes` when prompted to confirm the deployment.
- To destroy all the resources, run `terragrunt run-all destroy` and type `yes` when prompted to confirm the destruction.
### Deploying individual modules

I recommend this only when troubleshooting; most of the time you should apply all modules at once to ensure dependencies are also updated.

- Clone this repository to your local machine.
- Complete the pre-requisites and configuration steps above.
- Navigate to the `deployment/terragrunt/dev/eu-west-1/modules` directory.
- Navigate into the module you want to deploy, such as `cd networking`.
- Run `terragrunt apply` to deploy the module.
  - Once you have deployed the resources, you can use `terragrunt plan` to see the changes before applying them next time if needed.
- Type `yes` when prompted to confirm the deployment.
- To destroy the resources, run `terragrunt destroy` and type `yes` when prompted to confirm the destruction.
Follow this order when deploying the modules individually (see the dependency sketch below):

1. `networking`
2. `history-db`
3. `gateway-db`
4. `temporal-db`
5. `eks`
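That order mirrors the dependency blocks Terragrunt resolves between the modules. A minimal sketch, assuming the eks module consumes outputs from the networking module (the output names here are illustrative):

```hcl
# dev/eu-west-1/modules/eks/terragrunt.hcl (illustrative excerpt)
dependency "networking" {
  # Terragrunt applies the networking module first and exposes its outputs here.
  config_path = "../networking"
}

inputs = {
  # Assumed output names, for illustration only.
  vpc_id     = dependency.networking.outputs.vpc_id
  subnet_ids = dependency.networking.outputs.private_subnet_ids
}
```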
## Troubleshooting

- `ExpiredToken: The security token included in the request is expired`
  - Your AWS credentials have expired. Update the `[saml]` profile in the `~/.aws/credentials` file with the new credentials.
- `ParentFileNotFoundError: Could not find a account.hcl in any of the parent folders`
  - You need to run the `terragrunt run-all apply` command from the `modules` directory or `terragrunt apply` from within a specific module folder.
- `fatal: '$GIT_DIR' too big`
  - This can occur if the file path is too long. Set the system environment variable `TERRAGRUNT_DOWNLOAD` to a temporary directory with a shorter path, e.g. `C:\temp` on Windows.
## Automating the deployment

Once you have deployed the infrastructure, you can automate the deployment using GitHub Actions.
See the GitHub Action workflows provided in this repository for an example.
## Next steps

See the deployment readme to connect to the EKS cluster, finish configuring the cluster and deploy the services.