Full example setup of self-hosted on-demand workers on AWS ECS+EC2 using a custom Linux container to host the worker.
Heads up! We have recently announced a terminology change in this area of the product, and some references to the old terminology may remain temporarily until we complete the migration. For practical purposes, the following terms all mean the same thing and are rebranded as "Worker": runtime, (execution) environment, workforce agent, worker.
This project consists of three main components in their respective subfolders:
- `terraform`: a Terraform project that sets up the ECS cluster and associated resources on AWS.
- `container`: a Docker container image that hosts the Robocorp Worker.
- `provisioner`: a serverless application that communicates with Control Room and manages the containers.
To keep up with changes to this repository, it is recommended to create a fork or otherwise set up tracking of the upstream changes.
- Robocorp Control Room organization with On-Demand Workers enabled. This feature is part of the Enterprise tier, or may otherwise be activated by Robocorp staff.
- Computer with Terraform v1.3+ and Docker Desktop installed. The scripts in this repository work as-is on macOS and Linux, but can be easily adapted for Windows. Additional instructions for specific operating systems can be found below: Amazon Linux 2
- AWS Account with administrative privileges. We recommend reserving a dedicated AWS account for this workload within your AWS Organization, according to AWS Best Practices.
- Basic knowledge of the above-mentioned tooling, AWS infrastructure, and the Robocorp technology stack.
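Before starting, you can quickly verify the local tooling and AWS credentials from a terminal. This is only a sanity check, and assumes the AWS CLI is installed and configured for the target account:

```bash
# Check local tooling versions (Terraform v1.3+ is required)
terraform version
docker --version

# Confirm the AWS CLI is authenticated against the intended account
aws sts get-caller-identity
```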
The Terraform project deploys AWS resources on the target account and will start to incur costs once applied. The default configuration costs less than $100/month, with the main cost drivers being two small EC2 instances and two NAT gateways.
- Configure your Terraform state backend in `state-backend.tf`. The default configuration uses an S3 bucket, and the easiest option is to create a bucket in your account and change the bucket name in the configuration. Detailed instructions are out of the scope of this example; please see the Terraform documentation or utilize other readily available examples on the Internet. A sketch of creating a bucket with the AWS CLI is shown after this list.
- Apply the Terraform project. It will set up a VPC, an ECS+EC2 cluster, an ECR repository, and a Secrets Manager secret + SSM parameter for passing configuration to the provisioner component. The default configuration deploys the resources in the `us-east-1` region.

cd terraform
terraform init
terraform apply
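For reference, a minimal sketch of creating a suitable state bucket with the AWS CLI. The bucket name here is only an example (S3 bucket names must be globally unique), and after creating the bucket its name still needs to be set in `state-backend.tf`:

```bash
# Example bucket name; replace with your own, globally unique name
BUCKET=my-company-rc-odw-tfstate

# Create the bucket in us-east-1 and enable versioning to keep state history
aws s3api create-bucket --bucket "$BUCKET" --region us-east-1
aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled
```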
The container image should contain everything needed for running the intended workloads. The default Dockerfile provides an environment suitable for web automation.
- Build the Docker container image:

cd container
docker build --platform linux/amd64 .

- Upload the image to an image repository. To use the default repository set up by Terraform above, run `./upload.sh <sha_from_build_output> v2.1`, where `v2.1` is the image tag name. You can choose any tagging scheme, but the provisioner in this example uses the image tag `v2.1` by default. A manual alternative to the script is sketched below.
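If you want to see what the upload involves, or do it without the helper script, pushing to ECR typically boils down to logging Docker in to the registry, tagging the image, and pushing it. A rough sketch with placeholder values (check `upload.sh` itself for the exact repository name and behavior used by this example):

```bash
# Placeholder values; replace with your AWS account ID, region, the ECR
# repository created by Terraform, and the image SHA from the build output
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=<ecr_repository_name>
IMAGE_SHA=<sha_from_build_output>
TAG=v2.1
REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Authenticate Docker against the ECR registry
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Tag the locally built image and push it
docker tag "$IMAGE_SHA" "$REGISTRY/$REPO:$TAG"
docker push "$REGISTRY/$REPO:$TAG"
```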
The provisioner is responsible for processing requests from Control Room and launching worker containers when requested. It is implemented as a serverless application and deployed as AWS Lambda functions behind AWS API Gateway.
- Change the working directory to the `provisioner` folder:

cd provisioner

- If you used something other than `v2.1` as the image tag name, configure the provisioner to use the tag given above: edit the `ECR_IMAGE_TAG` environment variable in `serverless.yaml`.
- Deploy the provisioner application:

npm ci
npm run deploy
- Record the URL of the `worker-request` endpoint; it will be needed when configuring Control Room.
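If the deployment output is no longer at hand, the endpoint URLs can usually be printed again without redeploying. This assumes the provisioner is deployed with the Serverless Framework (it is configured via `serverless.yaml`):

```bash
cd provisioner
# Print stack information for the deployed service, including the
# API Gateway endpoints such as the worker-request URL
npx serverless info
```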
The final step is to configure the setup in Control Room.
- Log in and navigate to the Workers page of the workspace where the bots are located
- Click "Add" in the "Self-hosted On-demand Workers" list
- `Name`: enter a descriptive name
- `URL`: copy-paste the URL from the provisioner setup step above
- `Secret`: the secret was generated and set up by the Terraform project. Open AWS Secrets Manager in the region where the project is deployed and retrieve the value of the `secret` key in `rc-odw-1-config` (a CLI alternative is sketched after this list).
- The self-hosted worker can now be selected in Step configuration.
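The same value can also be read with the AWS CLI. This assumes the default `us-east-1` region and that the secret value is a JSON document with a `secret` key, as described above; `jq` is used here only to pick out that key:

```bash
# Fetch the secret value (a JSON document) and extract the "secret" key
aws secretsmanager get-secret-value \
  --secret-id rc-odw-1-config \
  --region us-east-1 \
  --query SecretString \
  --output text | jq -r '.secret'
```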
Node.js, needed for setting up the provisioner, must be installed separately on Amazon Linux 2. The AWS guide instructs using Node Version Manager (nvm), which is a good choice.
- Follow the official NVM installation instructions
- Close and reopen the terminal to apply the added configuration
- Make sure `nvm` is found:

nvm version

- If you get the error `nvm not found`, try activating it manually: `. ~/.nvm/nvm.sh`
- Install Node.js v16 (Amazon Linux 2 does not yet support v18):

nvm install 16

- Verify that Node.js is installed:

node --version

- Proceed with the provisioner setup as described above