This example demonstrates how to configure CPU, memory, and instance types for different provider types to match your workload requirements.
Note: This example creates a VPC for self-containment and testing purposes. Creating a VPC is not required—providers can use the default VPC or an existing VPC.
- How to configure CPU and memory for Fargate providers
- How to configure compute types for CodeBuild providers
- How to configure memory (which determines CPU) for Lambda providers
- How to configure instance types for EC2 providers
- How to configure instance types and task CPU/memory for ECS providers
- CPU: 256 (.25 vCPU), 512 (.5 vCPU), 1024 (1 vCPU), 2048 (2 vCPU), 4096 (4 vCPU)
- Memory: Must match valid combinations with CPU (e.g., 2 vCPU supports 4-16 GB)
- CPU and memory must be valid Fargate combinations
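The valid pairings above can be checked programmatically. This is an illustrative helper, not part of the example's CDK code; the table of valid combinations comes from the AWS Fargate task definition documentation.

```python
# Valid Fargate CPU (units) -> memory (MiB) combinations, per AWS docs.
VALID_FARGATE_MEMORY = {
    256: [512, 1024, 2048],
    512: list(range(1024, 4096 + 1, 1024)),
    1024: list(range(2048, 8192 + 1, 1024)),
    2048: list(range(4096, 16384 + 1, 1024)),
    4096: list(range(8192, 30720 + 1, 1024)),
}

def is_valid_fargate_combo(cpu: int, memory_mib: int) -> bool:
    """Return True if the CPU/memory pair is accepted by Fargate."""
    return memory_mib in VALID_FARGATE_MEMORY.get(cpu, [])

print(is_valid_fargate_combo(2048, 4096))  # True: 2 vCPU with 4 GB
print(is_valid_fargate_combo(256, 4096))   # False: .25 vCPU caps at 2 GB
```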
- Compute Types:
  - SMALL: 2 vCPU, 3 GB RAM
  - MEDIUM: 4 vCPU, 7 GB RAM
  - LARGE: 8 vCPU, 15 GB RAM
  - X2_LARGE: 72 vCPU, 145 GB RAM
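One way to choose among these tiers is to pick the smallest compute type that satisfies your build's needs. The sketch below is illustrative only; the names mirror the table above (they also match the `ComputeType` enum values in aws-cdk-lib's CodeBuild module).

```python
# (name, vCPU, RAM in GB) for each CodeBuild compute type, smallest first.
COMPUTE_TYPES = [
    ("SMALL", 2, 3),
    ("MEDIUM", 4, 7),
    ("LARGE", 8, 15),
    ("X2_LARGE", 72, 145),
]

def pick_compute_type(need_vcpu: int, need_gb: int) -> str:
    """Return the smallest compute type meeting the vCPU/RAM requirement."""
    for name, vcpu, gb in COMPUTE_TYPES:
        if vcpu >= need_vcpu and gb >= need_gb:
            return name
    raise ValueError("no CodeBuild compute type satisfies the requirement")

print(pick_compute_type(4, 6))  # MEDIUM
```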
- Memory: 128 MB to 10 GB (determines CPU proportionally)
- Ephemeral Storage: Up to 10 GB for /tmp directory
- More memory = more CPU power automatically
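The proportional scaling can be estimated: per the AWS Lambda documentation, a function gets the equivalent of one full vCPU at 1,769 MB of memory, and CPU scales linearly with memory below and above that point. A quick back-of-the-envelope helper:

```python
# Lambda allocates CPU in proportion to memory: 1 vCPU at 1,769 MB.
FULL_VCPU_MB = 1769

def approx_vcpus(memory_mb: int) -> float:
    """Rough vCPU equivalent for a given Lambda memory setting."""
    return memory_mb / FULL_VCPU_MB

print(round(approx_vcpus(1769), 2))   # 1.0
print(round(approx_vcpus(10240), 2))  # 5.79 at the 10 GB maximum
```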
- Instance Types: Any EC2 instance type (e.g., m6i.large, c6i.xlarge)
- Choose based on CPU, memory, network, and storage needs
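EC2 instance type names follow a `family.size` convention, where the family encodes the generation and capabilities (e.g., `m` for general purpose, `c` for compute optimized) and the size scales CPU and memory together. A minimal illustration:

```python
def parse_instance_type(instance_type: str) -> tuple[str, str]:
    """Split an EC2 instance type into (family, size)."""
    family, size = instance_type.split(".", 1)
    return family, size

print(parse_instance_type("m6i.large"))   # ('m6i', 'large')
print(parse_instance_type("c6i.xlarge"))  # ('c6i', 'xlarge')
```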
- Instance Type: For cluster instances (e.g., m6i.large)
- Task CPU: 1024 units = 1 vCPU (fractions supported)
- Task Memory: In MiB (e.g., 2048 = 2 GB)
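These are straight unit conversions, sketched below for reference: ECS task CPU is expressed in CPU units (1024 per vCPU, so fractional vCPUs are possible) and memory in MiB.

```python
def vcpus_to_cpu_units(vcpus: float) -> int:
    """Convert vCPUs to ECS task CPU units (1024 units = 1 vCPU)."""
    return int(vcpus * 1024)

def gb_to_mib(gb: float) -> int:
    """Convert GB to the MiB value an ECS task definition expects."""
    return int(gb * 1024)

print(vcpus_to_cpu_units(0.5))  # 512 units = half a vCPU
print(gb_to_mib(2))             # 2048 MiB = 2 GB
```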
After deploying, use the appropriate provider label in your workflows based on your compute needs:
```yaml
name: Build
on: [push]
jobs:
  build-small:
    runs-on: [self-hosted, lambda]
    steps:
      - uses: actions/checkout@v5
      - name: Build
        run: npm run build
  build-large:
    runs-on: [self-hosted, codebuild]
    steps:
      - uses: actions/checkout@v5
      - name: Build
        run: npm run build
```

- Deploy the stack:

```sh
cdk deploy
```

- Follow the setup instructions in the main README.md to configure GitHub integration
- Use the appropriate provider label in your workflows based on your compute requirements