Migrate EKS module on aws provider 5.59.0 #7

Open
Mieszko96 opened this issue Jul 31, 2024 · 5 comments

Comments

@Mieszko96

Mieszko96 commented Jul 31, 2024

Describe the bug
Hey, I was testing the upgrade procedure on aws provider 5.57.0 and it worked more or less fine. I only needed to run terraform apply 3 times (a rough sketch of the sequence follows this list):

  1. 19.21 -> migrate
  2. migrate -> 20.00: access_entries were created, but the policy was not applied
  3. 20.00 -> 20.00: applying again with no changes added the missing policy

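In shell terms, the sequence was roughly the following (a sketch only; the module inputs themselves are unchanged between applies):

  # 1) point the module source at the migrate fork, then
  terraform init -upgrade
  terraform apply

  # 2) point the module source at v20.x, then
  terraform init -upgrade
  terraform apply        # access_entries get created, the policy is still missing

  # 3) no further code changes
  terraform apply        # this third apply added the missing policy
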
That was more or less fine, but then I needed to switch priority to a different subject, and in the meantime we upgraded the aws provider to 5.59.0, and now this migration no longer works for me.

When upgrading from 19.21 to the migrate module, I'm getting this error in terraform plan:

│ 
│   with helm_release.cert_manager,
│   on cert_manager.tf line 8, in resource "helm_release" "cert_manager":
│    8: resource "helm_release" "cert_manager" {
│ 
╵
╷
│ Error: Get "http://localhost/api/v1/namespaces/velero": dial tcp [::1]:80: connect: connection refused
│ 
│   with kubernetes_namespace.velero,
│   on velero.tf line 65, in resource "kubernetes_namespace" "velero":
│   65: resource "kubernetes_namespace" "velero" {
module.eks.aws_eks_cluster.this[0] must be replaced
+/- resource "aws_eks_cluster" "this" {
      ~ arn                           = "test" -> (known after apply)
      ~ certificate_authority         = [
          - {
              - data = "hided"
            },
        ] -> (known after apply)
      + cluster_id                    = (known after apply)
      ~ created_at                    = "2024-07-31 08:59:06.64 +0000 UTC" -> (known after apply)
      - enabled_cluster_log_types     = [] -> null
      ~ endpoint                      = "test" -> (known after apply)
      ~ id                            = "test" -> (known after apply)
      ~ identity                      = [
          - {
              - oidc = [
                  - {
                      - issuer = "test"
                    },
                ]
            },
        ] -> (known after apply)
        name                          = "test"
      ~ platform_version              = "eks.16" -> (known after apply)
      ~ status                        = "ACTIVE" -> (known after apply)
      ~ tags                          = {
          + "terraform-aws-modules" = "eks"
        }
      ~ tags_all                      = {
          + "terraform-aws-modules" = "eks"
            # (10 unchanged elements hidden)
        }
        # (3 unchanged attributes hidden)

      ~ access_config {
          ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }

Especially this part:

      ~ access_config {
          ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }

To Reproduce

  1. install module 19.21 using aws provider 5.59.0
  2. update the EKS module source to the migrate fork (see the sketch after this list):
    source = "github.com/clowdhaus/terraform-aws-eks-v20-migrate.git?ref=3f626cc493606881f38684fc366688c36571c5c5"
  3. run terraform init/plan
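A minimal sketch of what step 2 looks like, assuming the cluster was originally created from the Terraform registry module; the module name and the inputs shown are placeholders, only the source line changes (and the registry version argument must be removed when switching to a git source):

  module "eks" {
    # before: source = "terraform-aws-modules/eks/aws", version = "19.21.0"
    source = "github.com/clowdhaus/terraform-aws-eks-v20-migrate.git?ref=3f626cc493606881f38684fc366688c36571c5c5"

    cluster_name    = "test"   # placeholder
    cluster_version = "1.29"   # placeholder
    # ... all other existing inputs stay exactly as they were
  }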

@Mieszko96
Author

and it's only when the cluster was initially created on aws provider 5.58.0 or higher.

From my point of view all my important clusters were created before that, so no problem on my side, but I still think this module needs some update, if it's possible, because in the aws provider they added this:

bootstrap_cluster_creator_admin_permissions set to true as default, not false as it was before

@bryantbiggs
Member

bootstrap_cluster_creator_admin_permissions set to true as default not false as it was before

bootstrap_cluster_creator_admin_permissions is not available on version v19.21 or anything less than v20.0 of the EKS module, so I'm not following what this issue is describing

@Mieszko96
Author

Mieszko96 commented Aug 5, 2024

hashicorp/terraform-provider-aws#38295: this PR changed the default values for brand new clusters, and because of it, when using your module to migrate a cluster, Terraform wants to recreate the cluster:

      ~ access_config {
          ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }

If you don't believe me:

  1. create a cluster from scratch using EKS module 19.21 and aws provider 5.58.0 or higher (a provider pin sketch follows this list)
  2. use your migration procedure

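A minimal sketch of the provider pin for step 1, assuming a standard required_providers block (the constraint is just an illustration of "5.58.0 or higher"; the original report used 5.59.0):

  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = ">= 5.58.0"
      }
    }
  }
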
Not sure if it can be fixed somehow in this repo, or only in the aws provider.

@viachaslauka

We have the same behaviour. Clusters were provisioned before v5.58.0, the provider version was upgraded later (currently it is v5.68.0), and now updating to v20.x causes the cluster to be re-created due to the hardcoded bootstrap_cluster_creator_admin_permissions:

       ~ access_config {
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement

@vladkens

bootstrap_cluster_creator_admin_permissions was initially added in 5.33 with a default of true, then in 5.58 the default was changed to false; 5.57 is the last version with true. Changelog entry for this flag:
https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#5580-july-11-2024

I had API_AND_CONFIG_MAP enabled on my cluster. I found some recommendations here and here to drop the access entries, but that is actually not related to this issue, and without a proper setup you will lose access to the node groups.

What actually helped me is patching the terraform state:

terraform state pull > state.json
# edit this file and change bootstrap_cluster_creator_admin_permissions to false
terraform state push -force state.json
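
For anyone who prefers not to hand-edit the JSON, a rough sketch of the same patch using jq (this assumes jq is installed, that the resource in state is aws_eks_cluster, and the usual state layout; adjust the selector to your setup):

  terraform state pull > state.json

  # flip the flag on every aws_eks_cluster instance in the pulled state
  jq '(.resources[]
       | select(.type == "aws_eks_cluster")
       | .instances[].attributes.access_config[].bootstrap_cluster_creator_admin_permissions
      ) = false' state.json > state.patched.json

  # push the edited copy back; -force skips the serial/lineage check
  terraform state push -force state.patched.json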
