
Unable To install Dynamic NFS Provisioner on GKE with Backend Storage Class of pd.csi.storage.gke.io provisioner #157

Open
@sanke-t

Description

Describe the bug: I am unable to install the Dynamic NFS Provisioner on GKE with a GKE storage class as the backend, which I need so that my PVC is backed by a disk that withstands node drains, upgrades, and failures. The nfs-pv-<volume_name> pod is stuck in ContainerCreating status. The pod description shows this error:

```
MountVolume.MountDevice failed for volume "pvc-4ec6bfba-b5e7-47af-a6ec-5c1afd82e2b7" : rpc error: code = Internal desc = Failed to format and mount device from ("/dev/disk/by-id/google-pvc-4ec6bfba-b5e7-47af-a6ec-5c1afd82e2b7_regional") to ("/var/lib/kubelet/plugins/kubernetes.io/csi/pd.csi.storage.gke.io/2a9b9ac5e5e297142fb243da54292afd54d46280869358967bea09e4cc96b1ba/globalmount") with fstype ("ext4") and options ([]): mount failed: exit status 32
```

Expected behaviour: The PVC should be mounted and the nfs-pv-<volume_name> pod should reach Running status.

Steps to reproduce the bug:

  1. Install Dynamic NFS Provisioner via Helm on a GKE cluster
  2. Apply the following k8s manifests-

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-pv-claim
spec:
  storageClassName: openebs-gcp-pd-rwx
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-gcp-pd-rwx
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "regionalpd-storageclass"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
```

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - asia-south1-a
          - asia-south1-b
```
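For step 1, the install was done roughly as follows (a sketch based on the chart's documented defaults; the release name `openebs-nfs` and target namespace `openebs` are assumptions, adjust to match your setup):

```shell
# Add the OpenEBS Dynamic NFS Provisioner chart repo and install it.
# Chart repo and chart name are per the openebs/dynamic-nfs-provisioner project docs.
helm repo add openebs-nfs https://openebs.github.io/dynamic-nfs-provisioner
helm repo update

# Install into the openebs namespace (created if missing).
helm install openebs-nfs openebs-nfs/nfs-provisioner \
  --namespace openebs \
  --create-namespace
```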


The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n <openebs_namespace> --show-labels
  • kubectl get pvc -n <openebs_namespace>
  • kubectl get pvc -n <application_namespace>

https://gist.github.com/sanke-t/7a4d8cc41f1840c79c6261da92d62003

Anything else we need to know?:
The same setup works fine with BackendStorageClass set to openebs-hostpath.
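For comparison, the working hostpath-backed variant differs only in the BackendStorageClass value (the class name `openebs-hostpath-rwx` here is hypothetical, chosen to distinguish it from the failing class):

```yaml
# Same NFS StorageClass as above, but backed by openebs-hostpath
# instead of the regional-pd class; this variant provisions successfully.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-rwx  # hypothetical name for illustration
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-hostpath"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
```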

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels): 3.5.0
  • Kubernetes version (use kubectl version): v1.24.10-gke.2300
  • Cloud provider or hardware configuration: GKE, Zonal Cluster, 1 node (2 vCPU, 4 GB RAM)
  • OS (e.g: cat /etc/os-release): Ubuntu 22.04.2 LTS
  • kernel (e.g: uname -a): 5.15.0-1027-gke
  • others:
