s3_object fails to copy in AWS when source is larger than 5GiB #2117
Comments
@alinabuzachis many thanks for taking on this one. I absolutely understand that you undoubtedly have a lot to do, but I just wondered if you could give an indication of whether/when this might sit on your roadmap? Ideally, we would push a fix up from our side, but we're not currently in a great position to do this. I'm trying to determine whether we should invest in a temporary work-around, or just put up with manually syncing some of our larger data until a fix is in place. Thanks again.
I had the same problem on 7.6.0. However, everything works as expected after updating to 9.1.1!
@fuggla There was a separate bug that could trigger out-of-memory errors (#2107), which was fixed in 8.0.1. @colin-nolan I've pushed #2509, which should address this issue. The underlying problem is that the API doesn't allow server-side copies for 5G+ files; this can only be fixed by using download/upload instead of copy_object. #2509 switches over to boto3's S3 "copy" method, which chooses between the two mechanisms depending on the size of the file. If you're able to verify this fixes your issue, it would be appreciated.
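For reference, a minimal boto3 sketch of the managed copy described above, assuming hypothetical bucket and key names (the collection's actual implementation may differ):

```python
import boto3

s3 = boto3.client("s3")
copy_source = {"Bucket": "example-src-bucket", "Key": "large-object.bin"}

# boto3's managed "copy" transfer picks the mechanism based on object size:
# a single server-side CopyObject below the multipart threshold, and a
# multipart transfer above it, so 5 GiB+ sources no longer fail outright.
s3.copy(copy_source, "example-dst-bucket", "large-object.bin")
```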
SUMMARY
fixes: #2117
The copy_object API call has a built-in limit of 5G when copying objects. The copy method is aware of this limit and performs a multipart download/upload instead when the 5G limit has been exceeded.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
s3_object
ADDITIONAL INFORMATION
See also: boto/boto3#1715
Reviewed-by: Bikouo Aubin
(cherry picked from commit 21306fd)
This is a backport of PR #2509 as merged into main (21306fd).
SUMMARY
fixes: #2117
The copy_object API call has a built-in limit of 5G when copying objects. The copy method is aware of this limit and performs a multipart download/upload instead when the 5G limit has been exceeded.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
s3_object
ADDITIONAL INFORMATION
See also: boto/boto3#1715
Reviewed-by: Mark Chappell
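If tighter control over the managed transfer is needed, boto3's copy also accepts a TransferConfig. The values below are illustrative only, not necessarily what the module uses:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Illustrative tuning: switch to multipart behaviour above 64 MiB and copy
# parts in parallel. boto3's defaults already stay well under the 5 GiB cap.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)

s3.copy(
    {"Bucket": "example-src-bucket", "Key": "large-object.bin"},
    "example-dst-bucket",
    "large-object.bin",
    Config=config,
)
```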
Summary
amazon.aws.s3_object fails to copy files within AWS when they are larger than 5GiB. The use case where we encountered this issue was copying between buckets (mode: copy with copy_src set), but it likely affects all copy usage. I'd guess a switch to a multipart upload strategy is required for files over 5GiB.
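To make the limit concrete, here is a minimal boto3 sketch of the single-request server-side copy that hits the 5 GiB cap; the bucket and key names are hypothetical and the exact error returned by S3 may vary:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # CopyObject is a single server-side call; S3 rejects copy sources
    # larger than 5 GiB, which is the failure reported in this issue.
    s3.copy_object(
        CopySource={"Bucket": "example-src-bucket", "Key": "large-object.bin"},
        Bucket="example-dst-bucket",
        Key="large-object.bin",
    )
except ClientError as err:
    print(err)  # expected to fail for sources over 5 GiB
```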
Issue Type
Bug Report
Component Name
s3_object
Ansible Version
Collection Versions
AWS SDK versions
Configuration
OS / Environment
N/A
Steps to Reproduce
Expected Results
Expected to copy any files over 5GiB to the destination bucket in an idempotent manner.
Actual Results
Task failure, resulting in the traceback: