Currently, only a low-level tool, 'overlaybd-merge', has been released (in v1.0.13). To merge multiple overlaybd layers, running './overlaybd-merge ${path/to/config.v1.json} ${merged_layer}' generates a single compacted layer. So the userspace convertor (or ctr) still needs some work to integrate this tool and to generate a new image manifest and image config.
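Until that integration lands, the missing manifest step can be approximated by hand. The following is a rough sketch only: the paths (config.v1.json, merged_layer, manifest.json) and the manifest layout are assumptions, not a supported workflow:

```bash
# Merge all overlaybd layers listed in config.v1.json into one blob.
./overlaybd-merge ./config.v1.json ./merged_layer

# Compute the descriptor fields for the merged blob.
size=$(stat -c%s ./merged_layer)            # GNU stat
dgst="sha256:$(sha256sum ./merged_layer | cut -d' ' -f1)"

# Rewrite the pulled manifest so it references only the merged layer,
# reusing the media type of the first original layer.
jq --arg d "$dgst" --argjson s "$size" \
   '.layers = [{mediaType: .layers[0].mediaType, digest: $d, size: $s}]' \
   manifest.json > manifest.single.json

# The image config (rootfs.diff_ids, history) would also have to be
# regenerated and both blobs pushed to the registry -- this is exactly
# the part the userspace convertor could automate.
```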
What is the version of your Accelerated Container Image
No response
What would you like to be added?
Support generating a single layer overlaybd image in userspace convertor
Why is this needed for Accelerated Container Image?
Our production images have many layers (~70) and we're not using Layer/Manifest Deduplication. Sometimes we observe long latency during rpull, and the latency is proportional to the number of image layers. (We suspect this is due to a network issue when containerd/overlaybd talks to the cloud provider during rpull to fetch per-layer metadata.)
Currently, our mitigation is to first flatten/squash the original image into a single-layer image and then convert that single-layer image, as sketched below. This avoids the long rpull latency.
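For anyone hitting the same latency, the workaround looks roughly like this; the image names are placeholders, and the convertor flags follow the project README, so double-check them against your build:

```bash
# 1. Flatten the multi-layer image into a single layer.
#    docker export/import discards layer history (and image config such as
#    ENTRYPOINT -- re-apply it via `docker import --change` if needed).
cid=$(docker create registry.example.com/app:latest)
docker export "$cid" | docker import - registry.example.com/app:flat
docker rm "$cid"
docker push registry.example.com/app:flat

# 2. Convert the flattened image to overlaybd with the userspace convertor.
./convertor -r registry.example.com/app -u user:pass -i flat -o flat_obd
```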
It would be great if the userspace convertor could provide a feature that generates a single-layer overlaybd image directly.
cc @shuaichang
Are you willing to submit PRs to contribute to this feature?