Description
Referencing: EKS Managed Node Group AL2023 Example
I am trying to inject some items into the NodeConfig, but it is not working. I tried the basic example by simply injecting a shutdown grace period and found that it did not take effect.
The launch template shows the NodeConfig.
I see both NodeConfigs in user-data.txt.i on the node.
- [x] ✋ I have searched the open/closed issues and my issue is not listed.
Versions
- Module version [Required]: 20.37.0
- Terraform version: 1.12.2
- Provider version(s):
- provider registry.terraform.io/hashicorp/aws v5.100.0
- provider registry.terraform.io/hashicorp/cloudinit v2.3.7
- provider registry.terraform.io/hashicorp/kubernetes v2.37.1
- provider registry.terraform.io/hashicorp/local v2.5.3
- provider registry.terraform.io/hashicorp/null v3.2.4
- provider registry.terraform.io/hashicorp/random v3.7.2
- provider registry.terraform.io/hashicorp/template v2.2.0
- provider registry.terraform.io/hashicorp/time v0.13.1
- provider registry.terraform.io/hashicorp/tls v4.1.0
Reproduction Code [Required]
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "20.37.0"
cluster_name = local.eks_cluster_name
cluster_version = "1.31"
subnet_ids = data.terraform_remote_state.vpc.outputs.vpc_private_subnet_ids
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
enable_irsa = true
cluster_endpoint_public_access = false
cluster_endpoint_private_access = true
cluster_service_ipv4_cidr = var.cluster_service_ipv4_cidr
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
prefix_separator = ""
iam_role_name = local.eks_cluster_name
cluster_security_group_name = local.eks_cluster_name
cluster_security_group_description = "EKS cluster security group."
access_entries = local.combined_access_entries
cluster_encryption_config = {
provider_key_arn = aws_kms_key.eks.arn
resources = ["secrets"]
}
kms_key_administrators = var.kms_key_administrators
eks_managed_node_groups = local.eks_managed_node_groups_per_az_config
}
locals {
eks_managed_node_groups_per_az_config = [
for subnet in data.terraform_remote_state.vpc.outputs.vpc_private_subnet_ids :
{
name = "${var.env_name}-core-group-${trimprefix(subnet, "subnet-")}"
subnet_ids = [subnet]
iam_role_name = "${var.env_name}-core-group-${trimprefix(subnet, "subnet-")}"
launch_template_name = "${var.env_name}-core-group-${trimprefix(subnet, "subnet-")}"
capacity_type = "ON_DEMAND"
ami_type = "AL2023_x86_64_STANDARD"
instance_types = ["t3a.large"]
min_size = 0
desired_size = 1
max_size = 10
key_name = "al2023-test"
node_repair_config = {
enabled = true
}
timeouts = {
"create" : "60m",
"update" : "60m",
"delete" : "60m"
}
cloudinit_pre_nodeadm = [
{
content_type = "application/node.eks.aws"
content = <<-EOT
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
kubelet:
config:
shutdownGracePeriod: 30s
EOT
}
]
tags = {
"Environment" = var.env_name
"k8s.io/cluster-autoscaler/enabled" = true
"k8s.io/cluster-autoscaler/${local.eks_cluster_name}" = "owned"
"k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/core-asg" = "true"
}
}
]
}
Steps to reproduce the behavior:
Use cloudinit_pre_nodeadm to apply a NodeConfig.
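For triage, here is a condensed sketch of the node group definition above, reduced to a single statically named group with a placeholder key; it assumes that everything except the AL2023 AMI type and the injected NodeConfig is irrelevant to the problem:

eks_managed_node_groups = {
  # Hypothetical single node group standing in for the per-AZ list above
  repro = {
    ami_type       = "AL2023_x86_64_STANDARD"
    instance_types = ["t3a.large"]
    min_size       = 0
    desired_size   = 1
    max_size       = 10

    # Extra NodeConfig document injected ahead of the nodeadm-generated one
    cloudinit_pre_nodeadm = [
      {
        content_type = "application/node.eks.aws"
        content      = <<-EOT
          ---
          apiVersion: node.eks.aws/v1alpha1
          kind: NodeConfig
          spec:
            kubelet:
              config:
                shutdownGracePeriod: 30s
        EOT
      }
    ]
  }
}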
Expected behavior
The NodeConfig gets merged with the existing configuration, as the example implies.
Actual behavior
The NodeConfig does not get applied, even though it is evidently present in the user data on the node.
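As a possible point of comparison (an assumption on my part, not a confirmed fix): the node group submodule also exposes cloudinit_post_nodeadm, which is intended to place the extra document after the nodeadm-generated NodeConfig in the multi-part user data rather than before it. A minimal sketch of the same spec injected that way, with the surrounding node group arguments unchanged and omitted here:

cloudinit_post_nodeadm = [
  {
    content_type = "application/node.eks.aws"
    # Same NodeConfig as above, placed after the module/EKS-supplied document
    content = <<-EOT
      ---
      apiVersion: node.eks.aws/v1alpha1
      kind: NodeConfig
      spec:
        kubelet:
          config:
            shutdownGracePeriod: 30s
    EOT
  }
]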