What steps did you take and what happened:
This will probably not make it past triage, but I hit this issue and solved it; someone else might benefit from this.
I am using helm to install minio and have been reusing the same values.yaml for a couple of years now.
With this deployment and velero 1.15.0 I created a backup with a TTL of 2 hours:
NAME            STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
ora-nhc-demo4   Completed   0        0          2024-11-26 11:22:15 +0000 UTC   52m       default            <none>
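For reference, a backup with a 2-hour TTL can be created with the Velero CLI's --ttl flag; the exact command I used is not in my notes, so take this as a sketch:

velero backup create ora-nhc-demo4 --ttl 2h0m0s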
Once the backup expired, velero kept trying to sync the expired backup:
time="2024-11-27T11:36:21Z" level=info msg="Found 1 backups in the backup location that do not exist in the cluster and need to be synced" backupLocation=velero-realtime/default controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:138"
time="2024-11-27T11:36:21Z" level=info msg="Attempting to sync backup into cluster" backup=ora-nhc-demo4 backupLocation=velero-realtime/default controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:146"
time="2024-11-27T11:36:21Z" level=info msg="plugin process exited" backupLocation=velero-realtime/default cmd=/plugins/velero-plugin-for-aws controller=backup-sync id=43286 logSource="pkg/plugin/clientmgmt/process/logrus_adapter.go:80" plugin=/plugins/velero-plugin-for-aws
I started digging around. mc (minio client) shows the folder in the bucket as empty.
bash-5.1# mc ls realtime/velero/backups/ora-nhc-demo4/
bash-5.1#
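mc can also list object versions explicitly; on a bucket in this state that should reveal the delete markers that the plain listing hides (a sketch, exact output will vary by mc release):

mc ls --versions realtime/velero/backups/ora-nhc-demo4/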
If I exec into the minio container I see something very different:
bash-5.1$ ls -al /export/velero/backups/ora-nhc-demo4/
total 56
drwxr-sr-x. 14 1000 1000 4096 Nov 26 11:22 .
drwxr-sr-x. 17 1000 1000 4096 Nov 27 08:41 ..
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-csi-volumesnapshotclasses.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-csi-volumesnapshotcontents.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-csi-volumesnapshots.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-itemoperations.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-logs.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-podvolumebackups.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-resource-list.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-results.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-volumeinfo.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4-volumesnapshots.json.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 ora-nhc-demo4.tar.gz
drwxr-sr-x. 2 1000 1000 4096 Nov 26 11:22 velero-backup.json
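Each object showing up as a directory is expected with MinIO's newer on-disk format, where per-object metadata (including version and delete-marker information) lives inside that directory; peeking into one of them should show it (the exact layout depends on the MinIO release):

ls -al /export/velero/backups/ora-nhc-demo4/velero-backup.json/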
Turns out versioning on the bucket is suspended, not un-versioned, even though the helm values say versioning: false:
bash-5.1# mc version info realtime/velero
realtime/velero versioning is suspended
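If you end up in the same state, one way to get rid of the leftover delete markers is to delete every version of the stale backup's objects; a sketch, using the same alias and prefix as above (this is destructive, so double-check the prefix first):

mc rm --recursive --versions --force realtime/velero/backups/ora-nhc-demo4/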
I listed the default values of the minio helm chart and behold the magic of satan:
buckets: []
# # Name of the bucket
# - name: bucket1
# # Policy to be set on the
# # bucket [none|download|upload|public]
# policy: none
# # Purge if bucket exists already
# purge: false
# # set versioning for
# # bucket [true|false]
# versioning: false # remove this key if you do not want versioning feature
# # set objectlocking for
# # bucket [true|false] NOTE: versioning is enabled by default if you use locking
# objectlocking: false
# - name: bucket2
# policy: none
# purge: false
# versioning: true
# # set objectlocking for
# # bucket [true|false] NOTE: versioning is enabled by default if you use locking
# objectlocking: false
# versioning: false # remove this key if you do not want versioning feature <--- 👎 👎 👎 Remove the line; setting it to false will suspend versioning, not disable it completely.
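A corrected bucket entry therefore omits the versioning key entirely rather than setting it to false; a minimal sketch, using the bucket name from this setup:

buckets:
  - name: velero
    policy: none
    purge: false
    # no versioning key here: omitting it leaves the bucket un-versioned,
    # while versioning: false would only suspend versioning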
What did you expect to happen:
Velero should either work with versioned s3 buckets or not pick up the metadata from the versioned blobs if a file was deleted (e.g. mc works correctly with versions and does not list any deleted file that has "active" versions).
The following information will help us better understand what's going on:
https://transfer.kronsoft.cloud/yp7er0/bundle-2024-11-27-11-42-30.tar.gz -> expires in 7d
Anything else you would like to add:
Environment:
Velero version (use velero version): 1.15.0, deployed via helm:
NAME                  CHART VERSION   APP VERSION   DESCRIPTION
vmware-tanzu/velero   8.1.0           1.15.0        A Helm chart for velero
Velero features (use velero client config get features): features: <NOT SET>
Kubernetes version (use kubectl version):
Cloud provider or hardware configuration: Open Telekom Cloud
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
👍 for "I would like to see this bug fixed as soon as possible"
👎 for "There are more important bugs to focus on right now"
Thanks for sharing, this is a helpful piece of troubleshooting.
Velero calls the standard S3 API, and any objects returned by it are regarded as valid, so Velero has no way to filter out these objects in this scenario.
Looks like the problem is on the minio side --- the S3 API implementation doesn't behave the same way mc does.
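To see what Velero actually receives, you can query the raw S3 API directly and compare a plain object listing with a version listing; on a correctly behaving implementation the first call hides keys whose latest version is a delete marker, while the second shows the markers themselves. A sketch, with an illustrative endpoint URL:

aws s3api list-objects-v2 --bucket velero --prefix backups/ora-nhc-demo4/ --endpoint-url http://minio.example:9000
aws s3api list-object-versions --bucket velero --prefix backups/ora-nhc-demo4/ --endpoint-url http://minio.example:9000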