
Provider Inconsistent Result After Apply - PV spec.nfs.readOnly #2768

@sithon512

Description


Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.13.0
Kubernetes provider version: v2.36.0
Kubernetes version: v1.32.4

Affected Resource(s)

  • kubernetes_manifest.postgres_backup_manifests["persistent_volume"]

Terraform Configuration Files

locals {
  postgres_backup_manifests = {
    configmap               = yamldecode(file("./files/manifests/postgres-backup/configmap.yaml"))
    cronjob                 = yamldecode(file("./files/manifests/postgres-backup/cronjob.yaml"))
    persistent_volume_claim = yamldecode(file("./files/manifests/postgres-backup/persistent_volume_claim.yaml"))
    persistent_volume       = yamldecode(file("./files/manifests/postgres-backup/persistent_volume.yaml"))
  }
}

resource "kubernetes_manifest" "postgres_backup_manifests" {
  for_each = local.postgres_backup_manifests

  manifest = each.value
}

Additionally, the manifest for the persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-backup-archive
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Gi
  nfs:
    path: /redacted/path/to/archive
    readOnly: false
    server: XXX.XXX.XXX.XXX
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-manual

Debug Output

The complete debug output contained secrets that I am not willing to share publicly. If more information than what is included here is required, please let me know and I'll work privately with whoever is debugging this.

Panic Output

N/A

Steps to Reproduce

  1. terraform plan -out .tfplan
  2. terraform apply .tfplan

Expected Behavior

What should have happened?

The apply should have completed cleanly, with no error messages.

Actual Behavior

What actually happened?

The apply itself worked and the appropriate resources were created, but the Terraform provider then threw an error complaining that the value of .object.spec.nfs.readOnly had changed on the persistent_volume resource. In the persistent volume manifest above you can see where I specify the readOnly field. It is optional, and according to the Kubernetes docs (here) the value only matters when it is set to true, which mine is not; I set it to false purely to contrast with other persistent volumes and make it clear that this one is not read-only. The Kubernetes API server is therefore probably just omitting the field from its response, and the provider gets confused by the missing value.

The resulting persistent volume is tainted after every apply, so it causes permanent drift. I ultimately left off the optional readOnly field (because it's not the end of the world) and that resolved the issue.
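
In case it helps anyone hitting the same error, another workaround that may be worth trying (untested on my side, and assuming the API server really is pruning the false value as described above) is to list the field in the kubernetes_manifest resource's computed_fields argument, so the provider treats it as server-computed instead of flagging the change:

resource "kubernetes_manifest" "postgres_backup_manifests" {
  for_each = local.postgres_backup_manifests

  manifest = each.value

  # Keep the provider's default computed fields and additionally mark
  # spec.nfs.readOnly as computed so the server dropping the false value
  # is not reported as an inconsistent result. Paths that do not exist in
  # a given manifest should simply be irrelevant for that object (assumption).
  computed_fields = [
    "metadata.labels",
    "metadata.annotations",
    "spec.nfs.readOnly",
  ]
}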

Important Factoids

References

  • N/A

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
