apache spark - Under what conditions should cluster deploy mode be used instead of client?

The doc at https://spark.apache.org/docs/1.1.0/submitting-applications.html describes deploy-mode as:

--deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)

Using this diagram, fig1, as a guide (taken from http://spark.apache.org/docs/1.2.0/cluster-overview.html):

[fig1: Spark cluster-overview diagram showing the Driver Program (SparkContext), the Cluster Manager, and the Worker Nodes with their Executors]

If I kick off a Spark job:

./bin/spark-submit \
  --class com.driver \
  --master spark://MY_MASTER:7077 \
  --executor-memory 845M \
  --deploy-mode client \
  ./bin/Driver.jar

Then the Driver Program will run on MY_MASTER, as shown in fig1.

If instead I use --deploy-mode cluster, will the Driver Program be shared among the Worker Nodes? If so, does this mean that the Driver Program box in fig1 can be dropped (as it is no longer utilized), since the SparkContext would also be shared among the worker nodes?

Under what conditions should cluster be used instead of client?


1 Reply


No. When deploy-mode is client, the Driver Program does not necessarily run on the master node; it runs wherever you invoke spark-submit. You could run spark-submit on your laptop, and the Driver Program would run on your laptop.

Conversely, when deploy-mode is cluster, the cluster manager (master node) finds a slave with enough available resources to execute the Driver Program, so the Driver Program runs on one of the slave nodes. Because its execution is delegated, you cannot get the result from the Driver Program directly; it must store its results in a file, database, etc.
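
For illustration (this sketch is mine, not part of the original answer), the cluster-mode counterpart of the command from the question might look like the following. The hdfs:// jar location is an assumption: in standalone cluster mode the jar has to be reachable from whichever slave ends up hosting the Driver Program.

./bin/spark-submit \
  --class com.driver \
  --master spark://MY_MASTER:7077 \
  --executor-memory 845M \
  --deploy-mode cluster \
  hdfs:///jars/Driver.jar   # hypothetical path, readable by every worker

The command returns once the driver has been handed off; anything the driver would have printed to your terminal in client mode ends up in the driver's logs on that slave, which is why results have to be persisted somewhere.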

  • Client mode
    • You want the job result back directly (dynamic analysis)
    • Easier for developing/debugging
    • You control where the Driver Program runs
    • Always-up application: expose your Spark job launcher as a REST service or a Web UI
  • Cluster mode
    • Easier resource allocation (let the master decide): fire and forget (see the sketch below)
    • Monitor the Driver Program from the Master Web UI like the other workers
    • Stops at the end: once the job is finished, the allocated resources are freed
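
A rough sketch of the "fire and forget" point above (again my illustration, not the answerer's): --supervise is a standard spark-submit flag for Spark standalone cluster mode, the Master Web UI listens on port 8080 by default, and the jar path is hypothetical.

# Fire and forget: spark-submit returns as soon as the master has handed
# the driver to a slave; --supervise asks the master to restart the driver
# if it exits with a failure (standalone cluster mode only).
./bin/spark-submit \
  --class com.driver \
  --master spark://MY_MASTER:7077 \
  --deploy-mode cluster \
  --supervise \
  hdfs:///jars/Driver.jar

# The driver then appears in the Master Web UI (http://MY_MASTER:8080 by
# default) alongside the workers, where its state and logs can be checked.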
