CSI Driver fails to use existing LVM Volume Group on a Fibre Channel multipath device #120

@sbahmani

Description

I'm trying to use the CSI driver to manage an existing LVM Volume Group that is set up on a Fibre Channel SAN LUN. The LUN is managed by multipathd on all my Kubernetes nodes.

The driver does not seem to provision volumes in this configuration. PVCs remain in a Pending state with the WaitForFirstConsumer message, but even after a pod is created, the volume is never provisioned.

What happened?
I have a 2TB SAN LUN available on all 20 of my worker nodes as /dev/mapper/mpathb. I have successfully created a Volume Group named csi-lvm-vg on this device.

I installed the Helm chart with devicePattern set to "" and vgName set to csi-lvm-vg, which I believe is the correct configuration for using a pre-existing VG.

When I create a PersistentVolumeClaim using the auto-generated csi-driver-lvm-linear StorageClass, the PVC never gets bound. No Logical Volume is created in my csi-lvm-vg.

What did you expect to happen?
I expected the CSI driver to recognize the existing Volume Group (csi-lvm-vg) on the multipath device. When a PVC is created and a pod consumes it, I expected the driver to automatically create a new Logical Volume of the requested size within that VG, allowing the PVC to bind successfully.

How to reproduce it (as minimally and precisely as possible)?
On all nodes: Provision a SAN LUN and configure multipathd so the device is available (e.g., as /dev/mapper/mpathb).

On one node: Create the LVM structures on the shared device:

```shell
sudo pvcreate /dev/mapper/mpathb
sudo vgcreate csi-lvm-vg /dev/mapper/mpathb
```
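Before installing the driver, it may be worth confirming on each node that both multipathd and LVM actually see the device and the VG created above (device path and VG name taken from the steps above):

```
# Confirm multipathd presents the LUN under the mpathb alias
sudo multipath -ll /dev/mapper/mpathb

# Confirm LVM sees the Physical Volume and the Volume Group
sudo pvs /dev/mapper/mpathb
sudo vgs csi-lvm-vg
```

If vgs does not show csi-lvm-vg on every node, the driver has nothing to provision into regardless of its configuration.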
Install the Helm chart with the following csi-values.yaml:

```yaml
# csi-values.yaml
lvm:
  vgName: csi-lvm-vg
  devicePattern: ""
```

```shell
helm install csi-driver-lvm csi-driver-lvm/csi-driver-lvm \
  --namespace kube-system \
  --values csi-values.yaml
```
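After installing, it may help to confirm the driver's pods are actually running and scheduled on the nodes; a simple grep avoids having to guess the chart's label conventions:

```
kubectl -n kube-system get pods -o wide | grep csi-driver-lvm
kubectl -n kube-system get daemonset | grep csi-driver-lvm
```

If the plugin DaemonSet pods are not Running on the worker nodes, provisioning will never start, independent of the multipath question.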

Create a PVC using the automatically generated StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-on-multipath
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-driver-lvm-linear
  resources:
    requests:
      storage: 10Gi
```

Create a consumer Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-consumer-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: /data
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: test-claim-on-multipath
```

Observe: The PVC remains Pending and kubectl describe pvc shows the WaitForFirstConsumer event repeatedly. The pod remains in a Pending state, waiting for the volume. No LV is created in the VG.
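For anyone triaging this, the events and the driver's logs are where I've been looking; the exact pod names will differ per deployment, so the last two commands are a sketch rather than the literal invocation:

```
kubectl describe pvc test-claim-on-multipath
kubectl describe pod storage-consumer-pod

# Find the csi-driver-lvm pods, then pull logs from all containers
kubectl -n kube-system get pods | grep csi-driver-lvm
kubectl -n kube-system logs <csi-driver-lvm-pod> --all-containers
```

So far the events only repeat WaitForFirstConsumer; I have not found an explicit provisioning error.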

Anything else we need to know?
My suspicion is that the driver might have an issue interacting with a VG that resides on a /dev/mapper/mpath* device instead of a simple disk like /dev/sdb. Is this a supported configuration? Are there any special permissions or host configurations required for the CSI pods to correctly manage LVM on a multipath device?
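One host-side setting that might be relevant (this is an assumption on my part, not a confirmed cause): if /etc/lvm/lvm.conf filters out /dev/mapper devices, or scans the underlying /dev/sdX component paths instead of the multipath device, LVM commands can fail or report duplicate PVs. A filter that accepts only the multipath device might look like this (a sketch; the mpath pattern would need to match your aliases):

```
# /etc/lvm/lvm.conf (fragment)
devices {
    # Let LVM recognize multipath components and skip them
    multipath_component_detection = 1
    # Accept only /dev/mapper/mpath* devices, reject everything else
    global_filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
}
```

If the driver runs LVM tooling inside its own container, a separate question is whether that containerized LVM sees the same configuration and device nodes as the host.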

Environment
CSI Driver version: v0.6.1

Kubernetes version: v1.31.0

Node OS: Ubuntu 24.04

multipath-tools version: v0.9.4

lvm2 version: 2.03.16

Any guidance or suggestions would be greatly appreciated. Thank you!
