This repository has been archived by the owner on Dec 4, 2023. It is now read-only.
In order to share the Spark cluster among multiple spark-driver applications, we need to tune the `spark-submit` parameters that control cores and memory per executor (`--executor-cores`, `--total-executor-cores`, `--executor-memory`, ...). The example starts a one-node Spark cluster with 8 cores; the same considerations apply to memory.
In the current state, the first spark-driver takes all 8 available cores, so a second one cannot run.
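As a minimal sketch of the kind of tuning this would need, the invocation below caps one application at half the node's cores so a second driver can run alongside it. The master URL, application jar, and the specific resource values are placeholders for illustration, not taken from the example:

```sh
# Cap this application at 4 of the node's 8 cores (2 cores per executor,
# hence at most 2 executors) and 2g of memory per executor. A second
# driver submitted with the same limits can claim the remaining 4 cores.
# The master URL and application jar are placeholders.
spark-submit \
  --master spark://my-master:7077 \
  --executor-cores 2 \
  --total-executor-cores 4 \
  --executor-memory 2g \
  path/to/app.jar
```

Note that `--total-executor-cores` applies to Spark standalone (and Mesos) deployments; when it is left unset on a standalone cluster, an application grabs all available cores by default, which is presumably why the first driver currently starves the second.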
ppatierno changed the title from "Tuning cores and memory needed by executors" to "Tuning cores and memory needed by Spark executors" on Apr 23, 2018.