
Can't set credit specifications #3419

@lutz500

Description

I have been trying to configure cpu_credits = "standard" for my EKS managed node group to avoid unexpected costs. However, despite multiple attempts using different approaches, the setting does not seem to be applied correctly in the launch template.

Steps taken:
eks_managed_node_group_defaults – tried setting the value here, but nothing changed.

eks_managed_node_groups – applied the setting at the node group level, but again it was not reflected in the launch template.

After applying these configurations, I checked the Terraform state, and the value for cpu_credits = "standard" appears to be passed correctly. However, AWS does not seem to apply this setting when creating or updating the node group.

Expected Behavior:
The cpu_credits = "standard" setting should be properly reflected in the launch template and applied when the node group is created or updated. This is to prevent unexpected costs associated with burstable EC2 instances.

Does anybody have the same issue, or am I doing something wrong?

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "20.37.1"
  cluster_name    = local.cluster_name
  cluster_version = local.cluster_version

  cluster_endpoint_private_access      = var.cluster_endpoint_private_access
  cluster_endpoint_public_access       = var.cluster_endpoint_public_access
  cluster_endpoint_public_access_cidrs = var.cluster_endpoint_public_access_cidrs

  cluster_addons = {
    coredns = {
      name                        = "coredns"
      addon_version               = "v1.11.4-eksbuild.2"
      resolve_conflicts_on_update = "OVERWRITE"
      configuration_values        = "{\"nodeSelector\":{\"Name\":\"base_workload\"}}"
    }
    kube-proxy = {
      name                        = "kube-proxy"
      addon_version               = "v1.32.0-eksbuild.2"
      resolve_conflicts_on_update = "OVERWRITE"
    }
    vpc-cni = {
      name                        = "vpc-cni"
      addon_version               = "v1.19.2-eksbuild.1"
      resolve_conflicts_on_update = "OVERWRITE"
      service_account_role_arn    = module.vpc_cni_irsa.iam_role_arn
      configuration_values        = "{\"env\":{\"WARM_IP_TARGET\":\"5\"}}"
    }
    ebs-csi = {
      name                        = "aws-ebs-csi-driver"
      addon_version               = "v1.41.0-eksbuild.1"
      resolve_conflicts_on_update = "OVERWRITE"
      service_account_role_arn    = module.ebs_csi_irsa.iam_role_arn
      configuration_values        = "{\"controller\":{\"nodeSelector\":{\"Name\":\"base_workload\"}}}"
    }
  }

  create_kms_key = false
  cluster_encryption_config = {
    provider_key_arn = aws_kms_key.eks.arn
    resources        = ["secrets"]
  }

  vpc_id                                = var.vpc_id
  subnet_ids                            = var.private_subnet_ids
  cluster_additional_security_group_ids = var.cluster_additional_security_group_ids

  enable_irsa              = var.enable_irsa
  openid_connect_audiences = var.openid_connect_audiences

  tags = {
    environment = var.aws_account_name
    type        = "infrastructure"
  }

  # ----------------------------------------------------------------------------------
  # EKS managed node groups
  # ----------------------------------------------------------------------------------
  eks_managed_node_group_defaults = {
    ami_type  = "AL2023_x86_64_STANDARD"
    disk_size = 20

    credit_specification = {
      cpu_credits = "standard"
    }

    # We are using the IRSA created below for permissions
    # However, we have to deploy with the policy attached FIRST (when creating a fresh cluster)
    # and then turn this off after the cluster/node group is created. Without this initial policy,
    # the VPC CNI fails to assign IPs and nodes cannot join the cluster
    # See https://github.com/aws/containers-roadmap/issues/1666 for more context
    iam_role_attach_cni_policy = true
  }

  eks_managed_node_groups = {
    base_workload_large = {

      # general config
      name            = "base-workload-large"
      use_name_prefix = true

      # ami config
      # ami_id = ""

      # security
      vpc_security_group_ids = [
        aws_security_group.eks_managed_node_group.id,
        aws_security_group.allow_access_to_rds.id,
        module.sg_allow_ng_to_ng.security_group_id,
      ]

      credit_specification = {
        cpu_credits = "standard"
      }

      # resource sizing
      instance_types = ["t3a.large", "t3.large"]
      capacity_type  = "ON_DEMAND"
      min_size       = 0
      max_size       = 8
      desired_size   = 1
      update_config = {
        max_unavailable_percentage = 70 # or set `max_unavailable`
      }
      # start up
      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on_demand'"
      labels = {
        Environment = var.aws_account_name
        Name        = "base_workload_large"
      }
      enable_monitoring = true
    }
  }
}

Looking at the Terraform state, the setting appears to be passed correctly:

{
      "module": "module.eks.module.eks_managed_node_group[\"base_workload_large\"]",
      "mode": "managed",
      "type": "aws_launch_template",
      "name": "this",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "index_key": 0,
          "schema_version": 0,
          "attributes": {
            "arn": "arn:aws:ec2:eu-central-1:XXXXXXX:launch-template/lt-03549467eea5cd1e4",
            "block_device_mappings": [],
            "capacity_reservation_specification": [],
            "cpu_options": [],
            "credit_specification": [
              {
                "cpu_credits": "standard"
              }
            ],
            "default_version": 1,
            "description": "Custom launch template for base-workload-large EKS managed node group",
            "disable_api_stop": false,
            "disable_api_termination": false,
            "ebs_optimized": "",
            "elastic_gpu_specifications": [],
            "elastic_inference_accelerator": [],
...
}
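For anyone reproducing this, the relevant attribute can be pulled out of the state document programmatically instead of eyeballing it. A minimal sketch (stdlib only; the `state` dict below is a cut-down stand-in for the real state snippet above, and `find_credit_specs` is a hypothetical helper, not part of Terraform or the module):

```python
def find_credit_specs(node):
    """Recursively collect cpu_credits values from aws_launch_template resources."""
    found = []
    if isinstance(node, dict):
        if node.get("type") == "aws_launch_template":
            for instance in node.get("instances", []):
                for spec in instance.get("attributes", {}).get("credit_specification", []):
                    found.append(spec.get("cpu_credits"))
        for value in node.values():
            found.extend(find_credit_specs(value))
    elif isinstance(node, list):
        for item in node:
            found.extend(find_credit_specs(item))
    return found

# Cut-down stand-in for the state document quoted above
state = {
    "resources": [{
        "type": "aws_launch_template",
        "name": "this",
        "instances": [{
            "attributes": {"credit_specification": [{"cpu_credits": "standard"}]}
        }]
    }]
}

print(find_credit_specs(state))  # -> ['standard']
```

On the real state (`terraform show -json` piped to a file and loaded with `json.load`), an empty result here would mean the value never made it into the state at all; a non-empty one, as in my case, means the discrepancy is between the state and AWS.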

But in AWS the launch template doesn't contain the setting:
(screenshot: the launch template in the AWS console, with no credit specification shown)
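The drift can be pinned down by comparing the two views side by side. A sketch (stdlib only; the `aws_view` dict is hypothetical sample data standing in for what `aws ec2 describe-launch-template-versions --launch-template-id lt-03549467eea5cd1e4 --query 'LaunchTemplateVersions[-1].LaunchTemplateData'` returns, illustrating the behaviour I observe):

```python
# What the Terraform state reports for the launch template (from the snippet above)
state_view = {"credit_specification": [{"cpu_credits": "standard"}]}

# Hypothetical AWS-side view of the same template version: note the
# CreditSpecification key is absent, matching what the console shows
aws_view = {"InstanceType": "t3a.large"}

state_credits = state_view["credit_specification"][0]["cpu_credits"]
aws_credits = aws_view.get("CreditSpecification", {}).get("CpuCredits")

print(f"state: {state_credits!r}, aws: {aws_credits!r}")
# A result like ('standard', None) confirms the template version in AWS lacks the setting
```

It may also be worth checking which launch template version the node group actually references (default_version is 1 in the state above), since a setting added in a newer version would not show on the version the node group pins.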
