Redshift batch inserts using COPY FROM operation #25866
Description
Fixes #24546
This PR aims to allow the use of `COPY FROM` statements when sinking data into Redshift. The Redshift connector inherits `BaseJdbcConnector`, which uses batched INSERT statements to execute sink operations. Even in non-transactional mode, this can only push about 1,000 rows per second.

This change stages the rows to a Parquet file first, then issues a `COPY FROM` statement to load the table. We are seeing 250K rows per second or more using this method. This has been running in production for 2+ months on our own branch.
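For context, the load step amounts to a standard Redshift `COPY` over the staged files, roughly like the following; the table name, S3 prefix, and IAM role ARN here are illustrative placeholders:

```sql
-- Loads all staged Parquet files under the S3 prefix in one bulk operation.
-- Table name, S3 path, and IAM role ARN are placeholders.
COPY sales.orders
FROM 's3://my-staging-bucket/trino-redshift-copy/job-1234/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET;
```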
This functionality needs to be enabled by specifying the following config option:
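A minimal sketch of the catalog configuration, where the property name is an illustrative placeholder rather than the exact name introduced by this PR:

```properties
# Hypothetical property name, for illustration only
redshift.copy-from-sink.enabled=true
```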
The following options are also required when specifying the above:
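These typically cover where the staged files are written and which IAM role Redshift assumes for the load. Again, the property names below are illustrative assumptions, not the exact names from this PR:

```properties
# Hypothetical property names, for illustration only
redshift.copy-from-sink.s3-location=s3://my-staging-bucket/trino-redshift-copy/
redshift.copy-from-sink.iam-role=arn:aws:iam::123456789012:role/redshift-copy-role
```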
A suggested IAM policy for this role and user:
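A minimal sketch of such a policy, assuming a placeholder staging bucket `my-staging-bucket`: Trino needs to write and clean up the staged Parquet files, and Redshift needs to read them during the `COPY`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StageAndLoadParquet",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-staging-bucket",
        "arn:aws:s3:::my-staging-bucket/*"
      ]
    }
  ]
}
```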
Additional context and related issues
Adds a new environment variable `REDSHIFT_S3_COPY_ROOT`, similar to the existing `REDSHIFT_S3_UNLOAD_ROOT`.
Release notes
( ) This is not user-visible or is docs only, and no release notes are required.
( ) Release notes are required. Please propose a release note for me.
(x) Release notes are required, with the following suggested text: