I'm attempting to create an external table in Trino (v475) backed by Azure Data Lake Storage Gen2 using the abfs or abfss URI schemes. However, I'm encountering the following errors:
```
trino:default> SHOW SCHEMAS FROM azure;
       Schema
--------------------
 default
 information_schema
(2 rows)

Query 20250603_164407_00003_g6hjr, FINISHED, 1 node
Splits: 5 total, 5 done (100.00%)
0.73 [2 rows, 35B] [2 rows/s, 48B/s]

trino:default> CREATE TABLE azure.default.existing_customer_data (
            ->     id bigint,
            ->     name varchar(100),
            ->     email varchar(255),
            ->     created_date date
            -> ) WITH (
            ->     format = 'PARQUET',
            ->     external_location = 'abfs://[email protected]/tables/customer_data'
            -> );

Query 20250603_164415_00004_g6hjr failed: Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "abfs"

trino:default> CREATE TABLE azure.default.existing_customer_data (
            ->     id bigint,
            ->     name varchar(100),
            ->     email varchar(255),
            ->     created_date date
            -> ) WITH (
            ->     format = 'PARQUET',
            ->     external_location = 'abfss://[email protected]/tables/customer_data'
            -> );

Query 20250603_164427_00005_g6hjr failed: Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "abfss"

trino:default> CREATE TABLE azure.default.existing_customer_data (
            ->     id bigint,
            ->     name varchar(100),
            ->     email varchar(255),
            ->     created_date date
            -> ) WITH (
            ->     format = 'PARQUET',
            ->     external_location = 'wasbs://[email protected]/tables/customer_data'
            -> );

Query 20250603_164634_00006_g6hjr failed: External location is not a valid file system URI: wasbs://[email protected]/tables/customer_data
trino:default>
```
**Azure Access Configuration**
- Using Service Principal credentials
- The Storage Blob Data Contributor role on the storage account is assigned to the Service Principal

**Observations**
- The `SHOW SCHEMAS FROM azure` command works, indicating the catalog is recognized.
- The error appears to stem from Trino (or the underlying Hive connector) not recognizing or supporting the abfs/abfss file system schemes.
- Similar issues were reported in #25863 and #25919.

Is this a known limitation or a bug in Trino's Hive connector for Azure?
Is there additional configuration or a plugin/JAR required to support abfs[s] in this setup?
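For context (a hedged sketch, not confirmed for this exact setup): recent Trino releases dropped the legacy Hadoop-based filesystem support, so abfs/abfss locations require Trino's native Azure filesystem to be enabled explicitly in the catalog. A minimal sketch of what `catalog/azure.properties` might look like, assuming a Hive connector, the `hive-metastore` hostname from the Docker setup shown later, and placeholder Service Principal values:

```properties
connector.name=hive
hive.metastore.uri=thrift://hive-metastore:9083

# Enable Trino's native Azure filesystem; without this, abfs/abfss
# URIs fall through to the removed Hadoop filesystem and fail with
# "No FileSystem for scheme"
fs.native-azure.enabled=true

# Service Principal (OAuth) authentication -- all values are placeholders
azure.auth-type=OAUTH
azure.oauth.tenant-id=<tenant-id>
azure.oauth.client-id=<client-id>
azure.oauth.secret=<client-secret>
# Verify the exact endpoint format against the Trino Azure filesystem docs
azure.oauth.endpoint=https://login.microsoftonline.com/<tenant-id>/v2.0
```

Property names are taken from Trino's Azure filesystem support documentation; verify them against the docs for the Trino version in use.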
With the above, I used both the latest Trino Docker image and the 466 Docker image.

I was also trying different ways to connect to Azure storage, so I attempted to register an already-existing Iceberg table: I pre-created the table in Azure using Spark (running in a Spark Docker container), then ran the call below, which also fails with: Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "abfss"

From the Trino CLI:
```
ruser@rubuntu:~/iceberg$ docker ps -a
CONTAINER ID   IMAGE               COMMAND                  CREATED              STATUS                        PORTS                                                               NAMES
6d50f7f360a6   trinodb/trino:466   "/usr/lib/trino/bin/…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp                         trino
c9527a5e07ee   apache/hive:3.1.3   "sh -c /entrypoint.sh"   About a minute ago   Up 6 seconds                  10000/tcp, 0.0.0.0:9083->9083/tcp, [::]:9083->9083/tcp, 10002/tcp   hive-metastore
26ad9f3453f3   postgres:13         "docker-entrypoint.s…"   About a minute ago   Up About a minute             0.0.0.0:5432->5432/tcp, [::]:5432->5432/tcp                         postgres

azureuser@rntubuntu:~/iceberg$ ./trino --server http://localhost:8080 --catalog hive --schema default
trino:default> CALL azure.system.register_table(
            ->     schema_name => 'default',
            ->     table_name => 'customers',
            ->     table_location => 'abfss://[email protected]/tables/default/customers'
            -> );

Query 20250604_125938_00001_295cs failed: Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "abfss"
trino:default>
```
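If the `azure` catalog in this second test is an Iceberg catalog, the same native-filesystem requirement would apply there: `register_table` resolves the table location through the catalog's filesystem, so abfss fails identically unless the native Azure filesystem is enabled. A hedged sketch of the Iceberg variant, with placeholder credentials:

```properties
connector.name=iceberg
hive.metastore.uri=thrift://hive-metastore:9083

# Same native Azure filesystem switch as for the Hive connector
fs.native-azure.enabled=true

# Placeholder Service Principal values
azure.auth-type=OAUTH
azure.oauth.tenant-id=<tenant-id>
azure.oauth.client-id=<client-id>
azure.oauth.secret=<client-secret>
azure.oauth.endpoint=https://login.microsoftonline.com/<tenant-id>/v2.0
```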
**Configuration Details**
- `catalog/azure.properties`
- Docker Compose (partial)