
Flink: Dynamic Iceberg Sink: Add HashKeyGenerator / RowDataEvolver / TableUpdateOperator #13277

Open: mxm wants to merge 1 commit into main

Conversation

@mxm (Contributor) commented Jun 8, 2025

This change adds the following components for the Flink Dynamic Iceberg Sink:

HashKeyGenerator

A hash key generator that will be used by the DynamicIcebergSink class (next PR) to implement Iceberg's DistributionModes (NONE, HASH, RANGE).

The HashKeyGenerator is responsible for creating the appropriate hash key for Flink's keyBy operation. The hash key is generated based on the user-provided DynamicRecord and the table metadata. Under the hood, we maintain a set of Flink KeySelectors which implement the appropriate Iceberg DistributionMode. For every table, we randomly select a consistent subset of writer subtasks which receive data via their associated keys, depending on the chosen DistributionMode.

Caching ensures that a new key selector is created whenever the table metadata (e.g. schema, spec) or the user-provided metadata (e.g. distribution mode, write parallelism) changes.
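
To make the key-selection and caching idea concrete, here is a minimal Java sketch of the HASH-mode path. This is an illustration, not the PR's implementation: DynamicRecord and SelectorKey are simplified stand-ins for the real types, and the cache key only tracks the table name and schema id.

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.flink.api.java.functions.KeySelector;

// Sketch only: DynamicRecord and SelectorKey are simplified stand-ins.
class HashKeyGeneratorSketch {

  record DynamicRecord(String tableName, int schemaId, Object equalityKey) {}

  // Everything that should invalidate a cached selector goes into the key.
  record SelectorKey(String tableName, int schemaId) {}

  private final int writeParallelism;
  private final Map<SelectorKey, KeySelector<DynamicRecord, Integer>> selectors =
      new ConcurrentHashMap<>();

  HashKeyGeneratorSketch(int writeParallelism) {
    this.writeParallelism = writeParallelism;
  }

  int hashKey(DynamicRecord record) throws Exception {
    SelectorKey key = new SelectorKey(record.tableName(), record.schemaId());
    // A schema update changes the schema id, so a fresh selector is built.
    return selectors.computeIfAbsent(key, this::hashModeSelector).getKey(record);
  }

  // HASH mode: route records for this table onto a consistent, randomly
  // offset range of writer subtasks based on the record's equality key.
  private KeySelector<DynamicRecord, Integer> hashModeSelector(SelectorKey key) {
    int offset = ThreadLocalRandom.current().nextInt(writeParallelism);
    return record ->
        offset + Math.floorMod(Objects.hashCode(record.equalityKey()), writeParallelism);
  }
}
```

The integer returned by getKey is only used as the keyBy key; Flink hashes it again to pick the concrete writer subtask, so all that matters is that the value is stable per record key and spread across writeParallelism buckets per table.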

RowDataEvolver

RowDataEvolver is responsible for converting the input RowData to make it compatible with the target table schema. Conversion is required when:

  1. The input schema has fewer fields than the target schema.
  2. The table types are wider than the input types.
  3. The field order differs between the source and target schemas.

The resolution is as follows:

In the first case, we add a null value for the missing field (provided the field is optional). In the second case, we convert the input field's data to the wider table type, e.g. int (input type) => long (table type). In the third case, we rearrange the input data to match the field order of the target table.
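
The three cases can be illustrated with a short Java sketch against Flink's RowData API. This is a sketch of the approach rather than the PR's code; it only demonstrates the int => long widening, while the real evolver covers many more type combinations, including nested rows.

```java
import java.util.List;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.LogicalTypeRoot;
import org.apache.flink.table.types.logical.RowType;

// Sketch: project an input row onto the target schema by matching field
// names, widening compatible primitive types, and null-padding missing
// (optional) fields.
class RowDataEvolverSketch {

  static RowData evolve(RowData input, RowType sourceType, RowType targetType) {
    List<String> sourceFields = sourceType.getFieldNames();
    GenericRowData result = new GenericRowData(targetType.getFieldCount());

    for (int pos = 0; pos < targetType.getFieldCount(); pos++) {
      String name = targetType.getFieldNames().get(pos);
      // Case 3: fields are matched by name, not by position.
      int sourcePos = sourceFields.indexOf(name);
      if (sourcePos < 0 || input.isNullAt(sourcePos)) {
        // Case 1: field missing in the input (must be optional) => null.
        result.setField(pos, null);
      } else {
        result.setField(
            pos,
            readAndWiden(
                input, sourcePos, sourceType.getTypeAt(sourcePos), targetType.getTypeAt(pos)));
      }
    }
    return result;
  }

  private static Object readAndWiden(
      RowData row, int pos, LogicalType sourceType, LogicalType targetType) {
    // Case 2: widen e.g. int (input type) => long (table type).
    if (sourceType.getTypeRoot() == LogicalTypeRoot.INTEGER
        && targetType.getTypeRoot() == LogicalTypeRoot.BIGINT) {
      return (long) row.getInt(pos);
    }
    // Same type on both sides: copy the value as-is.
    return RowData.createFieldGetter(sourceType, pos).getFieldOrNull(row);
  }
}
```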

DynamicUpdateOperator

A dedicated operator for updating the schema / spec of the table associated with a DynamicRecord.
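
As a rough illustration of what such an update step amounts to per record, here is a hedged Java sketch using Iceberg's catalog and Table APIs. The TableUpdateSketch class and ensureSchema method are hypothetical names, not the PR's operator; UpdateSchema#unionByNameWith is Iceberg's standard way to merge a record's schema into the table schema.

```java
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

// Hypothetical helper showing the core of a table-update step: before rows
// for a table are written, ensure the live table schema covers the schema
// carried by the incoming record.
class TableUpdateSketch {

  private final Catalog catalog;

  TableUpdateSketch(Catalog catalog) {
    this.catalog = catalog;
  }

  void ensureSchema(TableIdentifier identifier, Schema recordSchema) {
    Table table = catalog.loadTable(identifier);
    // unionByNameWith merges the record schema into the table schema,
    // adding any columns the table does not have yet.
    table.updateSchema().unionByNameWith(recordSchema).commit();
  }
}
```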
