4 changes: 4 additions & 0 deletions core/src/main/java/org/apache/iceberg/BaseRowDelta.java
@@ -48,6 +48,10 @@ protected BaseRowDelta self() {

  @Override
  protected String operation() {
    if (addsDataFiles() && !addsDeleteFiles() && !deletesDataFiles()) {
Contributor:

Based on the discussion here, I think this should be enough:

    if (addsDataFiles() && !addsDeleteFiles()) {
      return DataOperations.APPEND;
    }

I think if (!addsDataFiles() && addsDeleteFiles()) is correct, as the RowDelta API only provides addRows(DataFile) and addDeletes(DeleteFile); deletesDataFiles() would apply to the RewriteFiles API.
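
A minimal sketch of that contrast (assuming the standard Table.newRowDelta()/Table.newRewrite() entry points; FILE_A, FILE_A_DELETES, and FILE_A_COMPACTED are illustrative fixtures, not names from this PR):

    // RowDelta only ever adds files: data files via addRows, delete files via addDeletes
    table.newRowDelta()
        .addRows(FILE_A)            // newly written data file
        .addDeletes(FILE_A_DELETES) // position/equality deletes against existing data
        .commit();

    // Removing data files is the job of RewriteFiles (e.g. compaction), which is where
    // deletesDataFiles() would come into play
    table.newRewrite()
        .rewriteFiles(Set.of(FILE_A), Set.of(FILE_A_COMPACTED))
        .commit();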

Contributor:

Thinking of a case like this: let's say an engine wants to delete a record from a file that has just one record. The engine might optimize by removing the whole file instead of producing a delete. In that case, a condition like the one below protects us:

    if (addsDataFiles() && !addsDeleteFiles() && !deletesDataFiles()) {

My understanding is that we do support removing data files in a RowDelta after this change.
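
For instance, a hedged sketch of that case (FILE_WITH_ONE_ROW and FILE_B are illustrative fixtures; this assumes RowDelta#removeRows, the call exercised by the *RemoveRows tests below):

    // upsert-style commit: new rows are written and the single-record file is dropped
    // outright instead of producing a delete file for its one row
    table.newRowDelta()
        .addRows(FILE_B)               // newly written data
        .removeRows(FILE_WITH_ONE_ROW) // drop the whole one-record data file
        .commit();
    // with the !deletesDataFiles() guard, this commit is not classified as APPEND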

Contributor:

Good point @singhpk234!

In this case the other condition is buggy:

    if (addsDeleteFiles() && !addsDataFiles()) {
      return DataOperations.DELETE;
    }

We could add back data with removeDeletes.

To make things even more problematic, in V3 we will have DV updates, which are likely a deleteDeleteFile(DV1) + an addDeleteFile(DV2). But nothing guarantees in this case that only new deletes happen; theoretically this could bring back previously deleted records.

So this means that @mxm's change is correct, but the other path, where we fall back to DataOperations.DELETE, is likely problematic.

Contributor Author:

Thanks for the review @singhpk234 and @pvary! Should we change the DELETE path to something like:

    if (addsDeleteFiles() && !addsDataFiles() && !deletesDeleteFiles()) {
      return DataOperations.DELETE;
    }

Or would you prefer to do this in a separate PR? Originally, this PR was only meant to optimize the APPEND case.

Contributor:

For V2 tables, the condition above works correctly for deletes. However, for V3 tables, if there are already deletes for a data file (i.e., an existing DV), we always remove the old DV and add a new one. This means the condition above is never triggered. On the other hand, we typically don’t re-add rows, so the DELETE operation remains the correct outcome.

My main concern with changing this behavior is that it could be unexpected for users. That’s why I’d prefer to separate the two cases.

In the meantime, if you have some time, could you please test my theory? (Totally understand if you’re busy—I’m just hopeful 😄)

Contributor Author:

Indeed, it seems rare that we would delete a delete file to resurrect deleted data, but you are right that V3+ DV delete files would trigger the condition above, since a DV update removes the old delete file and adds the rewritten one.
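
A hedged sketch of such a DV-rewrite commit (the DV fixture names are illustrative; this assumes RowDelta#removeDeletes/addDeletes are the calls involved):

    // V3-style DV update: the old deletion vector is removed and a rewritten one is added,
    // so the commit both adds and deletes delete files
    table.newRowDelta()
        .removeDeletes(OLD_DV_FOR_FILE_A)    // previous deletion vector
        .addDeletes(REWRITTEN_DV_FOR_FILE_A) // rewritten DV carrying old + new deleted positions
        .commit();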

Maybe we just leave things as-is for DELETE? My intention anyway was to fix the APPEND case :)

      return DataOperations.APPEND;
    }

    if (addsDeleteFiles() && !addsDataFiles()) {
      return DataOperations.DELETE;
    }
24 changes: 24 additions & 0 deletions core/src/test/java/org/apache/iceberg/TestRowDelta.java
@@ -76,6 +76,25 @@ public void addOnlyDeleteFilesProducesDeleteOperation() {
    assertThat(snap.deleteManifests(table.io())).hasSize(1);
  }

  @TestTemplate
  public void addOnlyDataFilesProducesAppendOperation() {
    SnapshotUpdate<?> rowDelta = table.newRowDelta().addRows(FILE_A).addRows(FILE_B);

    commit(table, rowDelta, branch);
    Snapshot snap = latestSnapshot(table, branch);
    assertThat(snap.sequenceNumber()).isEqualTo(1);
    assertThat(snap.operation()).isEqualTo(DataOperations.APPEND);
    assertThat(snap.dataManifests(table.io())).hasSize(1);

    validateManifest(
        snap.dataManifests(table.io()).get(0),
        dataSeqs(1L, 1L),
        fileSeqs(1L, 1L),
        ids(snap.snapshotId(), snap.snapshotId()),
        files(FILE_A, FILE_B),
        statuses(Status.ADDED, Status.ADDED));
  }

  @TestTemplate
  public void testAddRemoveRows() {
    SnapshotUpdate<?> rowDelta =
Expand Down Expand Up @@ -599,6 +618,7 @@ public void testOverwriteWithRemoveRows() {

    long deltaSnapshotId = latestSnapshot(table, branch).snapshotId();
    assertThat(latestSnapshot(table, branch).sequenceNumber()).isEqualTo(1);
    assertThat(latestSnapshot(table, branch).operation()).isEqualTo(DataOperations.OVERWRITE);
    assertThat(table.ops().current().lastSequenceNumber()).isEqualTo(1);

    // overwriting by a filter will also remove delete files that match because all matching data
Expand Down Expand Up @@ -642,6 +662,7 @@ public void testReplacePartitionsWithRemoveRows() {

    long deltaSnapshotId = latestSnapshot(table, branch).snapshotId();
    assertThat(latestSnapshot(table, branch).sequenceNumber()).isEqualTo(1);
    assertThat(latestSnapshot(table, branch).operation()).isEqualTo(DataOperations.OVERWRITE);
    assertThat(table.ops().current().lastSequenceNumber()).isEqualTo(1);

    // overwriting the partition will also remove delete files that match because all matching data
Expand Down Expand Up @@ -688,6 +709,7 @@ public void testDeleteByExpressionWithRemoveRows() {
        branch);

    assertThat(latestSnapshot(table, branch).sequenceNumber()).isEqualTo(1);
    assertThat(latestSnapshot(table, branch).operation()).isEqualTo(DataOperations.OVERWRITE);
    assertThat(table.ops().current().lastSequenceNumber()).isEqualTo(1);

    // deleting with a filter will also remove delete files that match because all matching data
Expand Down Expand Up @@ -726,6 +748,7 @@ public void testDeleteDataFileWithRemoveRows() {

    long deltaSnapshotId = latestSnapshot(table, branch).snapshotId();
    assertThat(latestSnapshot(table, branch).sequenceNumber()).isEqualTo(1);
    assertThat(latestSnapshot(table, branch).operation()).isEqualTo(DataOperations.OVERWRITE);
    assertThat(table.ops().current().lastSequenceNumber()).isEqualTo(1);

    // deleting a specific data file will not affect a delete file in v2 or less
Expand Down Expand Up @@ -786,6 +809,7 @@ public void testFastAppendDoesNotRemoveStaleDeleteFiles() {

    long deltaSnapshotId = latestSnapshot(table, branch).snapshotId();
    assertThat(latestSnapshot(table, branch).sequenceNumber()).isEqualTo(1);
    assertThat(latestSnapshot(table, branch).operation()).isEqualTo(DataOperations.OVERWRITE);
    assertThat(table.ops().current().lastSequenceNumber()).isEqualTo(1);

    // deleting a specific data file will not affect a delete file