
Java 11 support so connector can be used on Synapse? #689

Open
wyatt-troia-msft opened this issue Oct 17, 2024 · 0 comments

The README says Java 8 is required to use the connector, but Azure Synapse will discontinue support for its Java 8 runtime (Spark 3.3) on March 31, 2025. After that, Synapse will only support Java 11 (in its Spark 3.4 runtime).

Are there plans to support Java 11? If not, what are my options for continuing to use azure-event-hubs-spark?

We get the error below when trying to use the connector on a Synapse Spark 3.4 pool. We're not sure whether the error is related to the Java version. There is a related issue open.

The class that can't be found extends the org.apache.spark.eventhubs.utils.AadAuthenticationCallback class, as described here.

```
java.lang.ClassNotFoundException: com.microsoft.commerce.priceproducer.common.auth.EventHubAadAuthenticationCallBack
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)
    at java.base/java.lang.Class.forName0(Native Method)
    at java.base/java.lang.Class.forName(Class.java:315)
    at org.apache.spark.eventhubs.EventHubsConf.$anonfun$aadAuthCallback$4(EventHubsConf.scala:641)
    at scala.Option.map(Option.scala:230)
    at org.apache.spark.eventhubs.EventHubsConf.aadAuthCallback(EventHubsConf.scala:639)
    at org.apache.spark.eventhubs.client.ClientConnectionPool.$anonfun$borrowClient$6(ClientConnectionPool.scala:73)
    at org.apache.spark.eventhubs.utils.RetryUtils$.$anonfun$retryJava$1(RetryUtils.scala:91)
    at org.apache.spark.eventhubs.utils.RetryUtils$.org$apache$spark$eventhubs$utils$RetryUtils$$retryHelper$1(RetryUtils.scala:116)
    at org.apache.spark.eventhubs.utils.RetryUtils$.retryScala(RetryUtils.scala:149)
    at org.apache.spark.eventhubs.utils.RetryUtils$.retryJava(RetryUtils.scala:91)
    at org.apache.spark.eventhubs.client.ClientConnectionPool.org$apache$spark$eventhubs$client$ClientConnectionPool$$borrowClient(ClientConnectionPool.scala:69)
    at org.apache.spark.eventhubs.client.ClientConnectionPool$.borrowClient(ClientConnectionPool.scala:170)
    at org.apache.spark.eventhubs.client.EventHubsClient.client(EventHubsClient.scala:62)
    at org.apache.spark.eventhubs.client.EventHubsClient.liftedTree1$1(EventHubsClient.scala:187)
    at org.apache.spark.eventhubs.client.EventHubsClient.partitionCountLazyVal$lzycompute(EventHubsClient.scala:184)
    at org.apache.spark.eventhubs.client.EventHubsClient.partitionCountLazyVal(EventHubsClient.scala:183)
    at org.apache.spark.eventhubs.client.EventHubsClient.partitionCount(EventHubsClient.scala:176)
    at org.apache.spark.sql.eventhubs.EventHubsSource.partitionCount(EventHubsSource.scala:81)
    at org.apache.spark.sql.eventhubs.EventHubsSource.$anonfun$maxOffsetsPerTrigger$4(EventHubsSource.scala:96)
    at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.eventhubs.EventHubsSource.$anonfun$maxOffsetsPerTrigger$2(EventHubsSource.scala:96)
    at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.eventhubs.EventHubsSource.<init>(EventHubsSource.scala:96)
    at org.apache.spark.sql.eventhubs.EventHubsSourceProvider.createSource(EventHubsSourceProvider.scala:84)
    at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:289)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.$anonfun$applyOrElse$1(MicroBatchExecution.scala:89)
    at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:86)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:84)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
```
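For context, the top of the trace shows `EventHubsConf.aadAuthCallback` resolving the callback class reflectively by name via `Class.forName`, which throws `ClassNotFoundException` whenever the JAR containing that class is not on the JVM's classpath (e.g. not attached to the Spark pool or not shipped with the job). A minimal standalone sketch of that failure mode, using a hypothetical class name rather than anything from our application:

```java
// Sketch of the reflective lookup that fails in the trace above.
// The class names passed in here are illustrative, not from the real job.
public class ForNameDemo {

    // Returns true only if the named class can be resolved on the
    // current classpath; Class.forName throws ClassNotFoundException
    // otherwise, exactly as in the stack trace.
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A JDK class always resolves.
        System.out.println(isOnClasspath("java.lang.String"));      // true
        // A class from a JAR that was never put on the classpath does not.
        System.out.println(isOnClasspath("com.example.MissingCallback")); // false
    }
}
```

This is why the Java-version question and the `ClassNotFoundException` may be independent: the exception only proves the callback JAR was not visible to the loader, which can happen on any runtime version.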
