
Spark v3.2: Spark SQL task fails when deploy mode is client  #1186

Open
@poeao


Search before asking

  • I have searched in the issues and found no similar question.

  • I have searched my question on the internet, but I didn't get any help.

  • I have read the documentation: Taier doc, but it didn't help me.

Description

Spark v3.2 is configured as follows:

spark.submit.deployMode : client
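For reference, this configuration corresponds to the client deploy mode on the command line; a hedged sketch (the SQL file name is an illustrative placeholder, not taken from this report):

```shell
# Illustrative only: run a SQL file on YARN in client deploy mode.
# "test_spark.sql" is a placeholder file name.
spark-sql --master yarn --deploy-mode client -f test_spark.sql
```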

Executing the following Spark SQL task:

-- name TestSpark
-- type SparkSQL
-- author [email protected]
-- create time 2024-07-18 10:27:21
-- desc test

-- show databases;

--- show tables in tmp

select count(1) from tmp.nono_translate_id_log


The following exception occurs:

[10:34:22] Task 1 started
[10:34:24] Waiting to run.....
[10:34:26] Waiting to run.....
[10:34:27] Waiting to run.....
[10:34:29] Waiting to run.....
[10:34:29] Run failed
[10:34:31] ====================Basic log====================
2024-07-18 10:34:23:submit job is success
====================appLogs====================
Uncaught exception: org.apache.spark.SparkException: Invalid Spark URL: spark://YarnScheduler@%7B%22appName%22%3A%22run_sql_task_1721270063116%22%2C%22userName%22%3A%22mumu%40h2.com%22%2C%22sql%22%3A%22UEsDBBQACAgIAEtU8lgAAAAAAAAAAAAAAAABAAAAMC2NQQ7CMAwEv%2BJjywHRJqQHHmOFxEVITlxi%5Cn5%2F%2BllNNqRxpNV4JMa%2BxsD0iNohFYfDKBElMy1A%2BjUdnwR9FPbnFL8Lcp3F2Y%2FQxq0ihDVJCWjjlF%5CnuMDapMDwv0l6tWEaT2plu1apgtZiVf5W8Z2R5TUerR1QSwcI0%2F4SoHMAAACVAAAA%5Cn%22%2C%22sparkSessionConf%22%3A%7B%22hive.default.fileformat%22%3A%22orc%22%7D%7D:0
at org.apache.spark.rpc.RpcEndpointAddress$.apply(RpcEndpointAddress.scala:66)
at org.apache.spark.rpc.netty.NettyRpcEnv.asyncSetupEndpointRefByURI(NettyRpcEnv.scala:140)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:553)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:267)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:934)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:933)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:933)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:965)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
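For context: a valid Spark RPC endpoint URL has the shape `spark://<endpointName>@<host>:<port>`. In the URL above, the host portion is the percent-encoded JSON of the submitted task (appName, userName, sql, sparkSessionConf) and the port is 0, so `RpcEndpointAddress` rejects it; this suggests the driver host/port that the YARN ApplicationMaster uses to call back to the driver in client mode was replaced by the task payload. A rough sketch of that shape check (an illustration in Python, not Spark's actual Scala code):

```python
import re
from urllib.parse import urlsplit

# Hostname roughly as Java's URI parser accepts it: letters, digits,
# dots and hyphens only -- percent-escapes like %7B are rejected.
_HOST_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?$")

def is_valid_spark_url(url: str) -> bool:
    """Rough shape check for spark://<name>@<host>:<port> URLs."""
    parts = urlsplit(url)
    if parts.scheme != "spark" or parts.username is None:
        return False
    try:
        port = parts.port  # raises ValueError on a malformed port
    except ValueError:
        return False
    host = parts.hostname
    return (host is not None and _HOST_RE.match(host) is not None
            and port is not None and port > 0)

# A normal client-mode callback URL passes the check:
print(is_valid_spark_url("spark://YarnScheduler@driver-host.example.com:43215"))  # True

# A URL like the one in the log fails: the "host" is encoded JSON
# and the port is 0.
print(is_valid_spark_url("spark://YarnScheduler@%7B%22appName%22%3A%22x%22%7D:0"))  # False
```

In cluster mode the driver runs inside the ApplicationMaster itself, so no such callback URL needs to be built, which would be consistent with the task succeeding there.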

With spark.submit.deployMode : cluster, the same task runs fine.

Any pointers would be appreciated, thanks.

Code of Conduct


Labels

question (Further information is requested)
