[SPARK-51146][SQL][FOLLOWUP] Respect system env SPARK_CONNECT_MODE in places that access the api mode config (#49930)
Conversation
cc @HyukjinKwon

BTW, I'm wondering if we are going to fix the broken

@dongjoon-hyun I'd like to cut rc1 on time to kick off the release cycle. We can fail rc1 if there is a lot of pending work, which will likely happen.
Force-pushed from 92cfb59 to 012052f
Ya, I agree with you, @cloud-fan.
launcher/src/main/java/org/apache/spark/launcher/SparkSubmitCommandBuilder.java (review comments resolved)
All tests passed except for the Python code style check, and the last commit just reformats the code. I'm merging it to master/4.0, thanks for the review!
Closes #49930 from cloud-fan/release.

Lead-authored-by: Wenchen Fan <[email protected]>
Co-authored-by: Wenchen Fan <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
(cherry picked from commit f601eb7)
Signed-off-by: Wenchen Fan <[email protected]>
What changes were proposed in this pull request?
This is a followup of the additional Spark Connect distribution work. Some places that access the api mode config do not use the config framework and therefore do not respect the defined default value. We need to update them manually to get the default value properly, by looking at the special system env `SPARK_CONNECT_MODE` that is set in the scripts of the Spark Connect distribution.

Why are the changes needed?
To make the additional Spark Connect distribution work as expected.
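As a hypothetical sketch of the idea (the real logic lives in the launcher and Python/Scala entry points and may differ; the exact value the distribution scripts set for `SPARK_CONNECT_MODE` is assumed here to be `"1"`, and the method name is illustrative): a component that cannot use the config framework can fall back to the env var when computing the default api mode.

```java
import java.util.Map;

// Illustrative sketch only: derive the default api mode from the
// SPARK_CONNECT_MODE env var when the config framework is unavailable
// (e.g. in the launcher). Names and the "1" value are assumptions.
public class ApiModeDefault {

    static String resolveDefaultApiMode(Map<String, String> env) {
        // The Spark Connect distribution's scripts export SPARK_CONNECT_MODE,
        // so its presence flips the default from "classic" to "connect".
        if ("1".equals(env.get("SPARK_CONNECT_MODE"))) {
            return "connect";
        }
        return "classic"; // default distribution
    }

    public static void main(String[] args) {
        // In real code this would read System.getenv(); maps make it testable.
        System.out.println(resolveDefaultApiMode(Map.of("SPARK_CONNECT_MODE", "1"))); // connect
        System.out.println(resolveDefaultApiMode(Map.of())); // classic
    }
}
```

The point is that the env var only changes the *default*; an explicitly set api mode config still wins.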
Does this PR introduce any user-facing change?
no
How was this patch tested?
Manually tested locally. The expectation is:

1. If `--remote` is specified, we pick connect mode no matter what the api mode config is.
2. If `--master` is specified, we pick classic or connect mode depending on the api mode config. If the config is not specified, the default value varies between the default distribution and the Spark Connect distribution.
3. If neither `--remote` nor `--master` is specified, it is equivalent to `--master local[*]`, and then it is the same as rule 2.

Was this patch authored or co-authored using generative AI tooling?
no
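The three mode-resolution rules in the test expectation above can be sketched as follows. This is a minimal illustration, not the actual launcher code; the method signature and parameter names are hypothetical.

```java
// Hypothetical sketch of the three mode-resolution rules:
//   1. --remote always means connect mode.
//   2. --master defers to the api mode config, whose default depends
//      on whether this is the Spark Connect distribution.
//   3. Neither option is treated as --master local[*], i.e. rule 2.
public class ModeResolution {

    static String resolveMode(String remote, String master,
                              String apiModeConf, boolean connectDistribution) {
        // Rule 1: --remote forces connect mode regardless of the config.
        if (remote != null) {
            return "connect";
        }
        // Rule 3: a missing --master behaves like --master local[*],
        // so it simply falls through to rule 2.
        // Rule 2: the api mode config decides; its default varies by distribution.
        if (apiModeConf != null) {
            return apiModeConf;
        }
        return connectDistribution ? "connect" : "classic";
    }

    public static void main(String[] args) {
        System.out.println(resolveMode("sc://localhost", null, "classic", false)); // connect
        System.out.println(resolveMode(null, "local[*]", null, true));             // connect
        System.out.println(resolveMode(null, null, null, false));                  // classic
    }
}
```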