Upload directly from SQL to S3
93 commits to master since this release
Features
- Consistent memory, CPU, and network usage whether your database has 1 million or 1 trillion records.
- Multipart upload if the file size exceeds 5 MB.
- Buffer size is set to 5 MB by default.
- No need for a writable disk, since we upload directly to S3.
- Log levels: Error, Warn, Info, Debug, Verbose.
- Various bug fixes.
- Breaking API changes, except for the WriteFile(file, rows) and Write(w io.Writer) methods.
- New methods: UploadToS3(rows) and Upload() to upload to S3.
Caveats
- A maximum of 10,000 upload parts is allowed by AWS. Hence, (5 MB x 10,000) ≈ 50 GB of gzipped data is supported with the default settings.
- Increase the buffer size if you want to reduce the number of parts or have more than 50 GB of gzipped data.
- Currently supports uploading only to AWS S3 API-compatible storage.
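The 50 GB figure in the first caveat is just the part size multiplied by AWS's 10,000-part cap, so the buffer size needed for a larger dataset follows directly from the same arithmetic. A small sketch (function names are illustrative, not part of the library):

```go
package main

import "fmt"

const maxParts = 10000 // AWS S3 multipart-upload part limit

// maxUploadBytes returns the largest object a given part size can cover.
func maxUploadBytes(partSize int64) int64 {
	return partSize * maxParts
}

// minPartSize returns the smallest part size (rounded up) that fits
// totalBytes within the 10,000-part cap.
func minPartSize(totalBytes int64) int64 {
	return (totalBytes + maxParts - 1) / maxParts
}

func main() {
	const mib = int64(1) << 20

	// Default 5 MiB parts: 5 MiB x 10,000 = ~48 GiB (the "50 GB" caveat).
	fmt.Println(maxUploadBytes(5*mib)/(1<<30), "GiB with 5 MiB parts")

	// For 100 GiB of gzipped output, the buffer must grow to ~11 MiB.
	fmt.Println(minPartSize(100*(1<<30)), "byte parts needed for 100 GiB")
}
```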