For HMS with HIVE-26882, we can avoid using a table lock during commits to Iceberg tables.
This improves the performance of concurrent writes to Iceberg tables and reduces the chance of an unreleased lock getting stuck in HMS.
// The lock release step failed. Since the commit has already succeeded, do not rethrow:
// otherwise the underlying Iceberg API would clean up the new metadata and leave the
// table in an unusable state.
// If configured and supported, the metastore automatically releases an unreleased lock
// after not hearing a heartbeat for a while; otherwise the lock might need to be
// manually deleted from the metastore backend storage.
log.error(e, "Failed to release lock %s when committing to table %s", lockId, table.getTableName());
}
}
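The heartbeat-based auto-release mentioned in the comment above is governed by the metastore's transaction timeout. A hedged sketch of the relevant `hive-site.xml` entry on the metastore side (property name per Hive's transaction manager configuration; verify against your HMS version):

```xml
<!-- hive-site.xml on the metastore: locks whose client stops sending
     heartbeats are considered abandoned after this timeout. -->
<property>
  <name>hive.txn.timeout</name>
  <value>300s</value>
</property>
```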
apache/iceberg#6570 implemented iceberg.engine.hive.lock-enabled = false. All writers, including Trino, Spark, and other engines, should honor this setting to avoid using different locking mechanisms, which could result in data corruption.
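For reference, a hedged sketch of how this setting can be applied (property names per the Iceberg Hive configuration docs; `db.tbl` is a placeholder, and behavior should be verified for your Iceberg and HMS versions). It can be set catalog-wide via Hadoop configuration, or per table as a table property:

```sql
-- Catalog-wide (Hadoop configuration, e.g. hive-site.xml):
--   iceberg.engine.hive.lock-enabled=false
-- Per table, e.g. from Spark SQL (the 'engine.hive.lock-enabled' table property):
ALTER TABLE db.tbl SET TBLPROPERTIES ('engine.hive.lock-enabled' = 'false');
```

Note that the per-table property only works safely if every writer of the table understands it; a writer on an older Iceberg version will still take HMS locks, defeating the purpose.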
An unreleased lock could result in the following error:
Query 20240528_062551_35616_6hrf3 failed: Timed out waiting for lock 46108 for query 20240528_062551_35616_6hrf3
io.trino.spi.TrinoException: Timed out waiting for lock 46108 for query 20240528_062551_35616_6hrf3
at io.trino.plugin.hive.metastore.thrift.ThriftHiveMetastore.acquireLock(ThriftHiveMetastore.java:1784)
at io.trino.plugin.hive.metastore.thrift.ThriftHiveMetastore.acquireTableExclusiveLock(ThriftHiveMetastore.java:1765)
at io.trino.plugin.iceberg.catalog.hms.HiveMetastoreTableOperations.commitToExistingTable(HiveMetastoreTableOperations.java:66)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.commit(AbstractIcebergTableOperations.java:171)
at org.apache.iceberg.BaseTransaction.lambda$commitSimpleTransaction$3(BaseTransaction.java:417)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
at org.apache.iceberg.BaseTransaction.commitSimpleTransaction(BaseTransaction.java:413)
at org.apache.iceberg.BaseTransaction.commitTransaction(BaseTransaction.java:308)
at io.trino.plugin.iceberg.IcebergMetadata.finishInsert(IcebergMetadata.java:1016)
...
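When a lock like 46108 above is stuck and heartbeat expiry is not available, the code comment suggests deleting it from the metastore backend storage. A hedged sketch against the HMS backing database (the table and column names follow the common metastore schema, `HIVE_LOCKS` keyed by `HL_LOCK_EXT_ID`; verify against your schema version and back up the database first):

```sql
-- Inspect the stuck lock first (46108 is the lock id from the error above):
SELECT * FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 46108;
-- Then remove it:
DELETE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 46108;
```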
If you enable the lock-free commit at the table level, then you have to make sure that every writer of the table uses Iceberg version 1.3.0 or later, so that they all use the appropriate locking mechanism. For more details, check the end of this paragraph: https://iceberg.apache.org/docs/nightly/configuration/#hadoop-configuration
Edit: Don't forget that you also need the correct HMS version.
(The code snippet above is from trino/plugin/trino-iceberg/src/main/java/io/trino/plugin/iceberg/catalog/hms/HiveMetastoreTableOperations.java, lines 116 to 127 at commit db64b88.)