Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

dataframe - What is the maximum size for a broadcast object in Spark?

When using the DataFrame broadcast function or the SparkContext broadcast function, what is the maximum object size that can be dispatched to all executors?


1 Reply


broadcast function:

The default is 10 MB, but we have used up to 300 MB; it is controlled by spark.sql.autoBroadcastJoinThreshold.
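As a sketch of how that threshold can be raised at runtime (assuming an existing SparkSession named spark; the 300 MB value mirrors what we used):

```scala
// The value is in bytes; setting it to -1 disables automatic
// broadcast joins entirely.
spark.conf.set(
  "spark.sql.autoBroadcastJoinThreshold",
  (300L * 1024 * 1024).toString
)
```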

AFAIK, it all depends on the memory available, so there is no definite answer for this. What I would say is that the broadcast side should be smaller than the large DataFrame, and you can estimate the size of a large or small DataFrame like below...

import org.apache.spark.util.SizeEstimator

// logInfo is only available inside classes that mix in Spark's Logging
// trait; println works anywhere.
println(SizeEstimator.estimate(yourLargeOrSmallDataFrameHere))

Based on this estimate, you can pass a broadcast hint to the framework.
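A minimal sketch of passing that hint, assuming two hypothetical DataFrames largeDf and smallDf joined on an illustrative "id" column:

```scala
import org.apache.spark.sql.functions.broadcast

// The hint tells the planner to ship smallDf to every executor and
// join it against largeDf without shuffling the large side.
val joined = largeDf.join(broadcast(smallDf), Seq("id"))
```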

Also have a look at the Scala doc in sql/execution/SparkStrategies.scala, which says:

  • Broadcast: if one side of the join has an estimated physical size that is smaller than the user-configurable [[SQLConf.AUTO_BROADCASTJOIN_THRESHOLD]] threshold or if that side has an explicit broadcast hint (e.g. the user applied the
    [[org.apache.spark.sql.functions.broadcast()]] function to a DataFrame), then that side of the join will be broadcasted and the other side will be streamed, with no shuffling
    performed. If both sides are below the threshold, broadcast the smaller side. If neither is smaller, BHJ is not used.
  • Shuffle hash join: if the average size of a single partition is small enough to build a hash table.
  • Sort merge: if the matching join keys are sortable.
  • If there is no joining keys, Join implementations are chosen with the following precedence:
    • BroadcastNestedLoopJoin: if one side of the join could be broadcasted
    • CartesianProduct: for Inner join
    • BroadcastNestedLoopJoin

Also have a look at the other configuration options.

SparkContext.broadcast (TorrentBroadcast):

A broadcast shared variable also has a property, spark.broadcast.blockSize (default 4 MB). AFAIK, there is no hard limit that I have seen for this either...
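A sketch of the shared-variable form, assuming an existing SparkContext sc; the lookup table is purely illustrative:

```scala
// The map is serialized once and shipped to each executor in
// spark.broadcast.blockSize (4 MB) chunks via TorrentBroadcast.
val lookup = Map("a" -> 1, "b" -> 2)
val bc = sc.broadcast(lookup)

val counts = sc.parallelize(Seq("a", "b", "a"))
  .map(word => bc.value.getOrElse(word, 0))
  .collect()                       // Array(1, 2, 1)

bc.unpersist()                     // release executor-side blocks when done
```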

For further information, please see TorrentBroadcast.scala.


EDIT:

However, you can also have a look at the 2 GB issue. Even though it is not officially declared in the docs (I was not able to find anything of this kind there), please look at SPARK-6235, which is in "IN PROGRESS" state, and SPARK-6235_Design_V0.02.pdf.

