Description
Describe the bug: I'm currently using Velero as a backup service for my Kubernetes cluster. When I try to restore OpenEBS NFS volumes, they are not restored with the data from the snapshot but come back empty instead.
Expected behaviour: I'd expect the restore to work and bring back all of the data that was present at backup time.
Steps to reproduce the bug:
Assuming we already have a cluster (a rough command sketch follows the list):
- Install NFS dynamic provisioner
- Install Velero
- Create an NFS volume and put some data in it
- Create a Velero backup
- Destroy the volume
- Restore from the backup
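As a rough sketch, the steps translate to something like the commands below. The namespace (`demo`), PVC name (`demo-nfs-pvc`), storage class (`openebs-kernel-nfs`), and backup name are placeholders and will depend on how the provisioner and Velero are installed in your environment:

```sh
# 1. Create an NFS-backed PVC (storage class name depends on your provisioner install)
kubectl create namespace demo
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-nfs-pvc
  namespace: demo
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: openebs-kernel-nfs
  resources:
    requests:
      storage: 1Gi
EOF

# 2. Mount the PVC in a pod and write some data into it (pod spec omitted for brevity)

# 3. Back up the namespace with Velero
velero backup create demo-backup --include-namespaces demo

# 4. Destroy the volume
kubectl delete pvc demo-nfs-pvc -n demo

# 5. Restore -- the PVC comes back, but the data is gone
velero restore create demo-restore --from-backup demo-backup
```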
Anything else we need to know?:
The reason this happens is that the NFS dynamic provisioner depends on the PVC's UID and uses it to create the backend volumes. Upon Velero restoration, the UIDs are not preserved (it's actually impossible to preserve them, as Kubernetes does not allow specifying the UID of an object being created). This means the backend volumes are actually restored for a short time, until the provisioner's garbage collector flags them as orphaned and deletes them again.
So what happens is: Velero restores the NFS volume claim and also the backend persistent volume claims. Because the restored NFS volume claim gets a new UID, the provisioner creates fresh backend volumes for it, and the garbage collector then treats the old (just restored) ones as stale and deletes them.
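One way to observe this (treat it as an illustration; the resource names below, such as the `demo` namespace, the `demo-nfs-pvc` claim, and the `openebs` provisioner namespace, are assumptions, and the exact backend naming convention depends on the provisioner version) is to compare the restored claim's UID with the backend resources the provisioner created for the original claim:

```sh
# UID freshly assigned to the restored NFS PVC by the API server
kubectl get pvc demo-nfs-pvc -n demo -o jsonpath='{.metadata.uid}{"\n"}'

# Backend claims/volumes created by the provisioner (namespace assumed to be "openebs");
# their names are derived from the original claim's UID, so after a restore they no
# longer match the UID printed above and eventually get garbage-collected
kubectl get pvc -n openebs
kubectl get pv | grep nfs
```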
Environment details:
- OpenEBS version (use `kubectl get po -n openebs --show-labels`): 0.10.0
- Kubernetes version (use `kubectl version`): 1.26.3
- Cloud provider or hardware configuration: Azure/AWS
- OS (e.g.: `cat /etc/os-release`): Ubuntu
- Kernel (e.g.: `uname -a`): Linux 2023 x86_64 x86_64 x86_64 GNU/Linux