
Performance: how to know which count query is the fastest?

I've been exploring query optimizations in recent builds of Spark SQL 2.3.0-SNAPSHOT and noticed that semantically identical queries can end up with different physical plans.

Let's assume I've got to count the number of rows in the following dataset:

val q = spark.range(1)

I could count the number of rows in any of the following ways (a runnable sketch follows the list):

  1. q.count
  2. q.collect.size
  3. q.rdd.count
  4. q.queryExecution.toRdd.count

My initial thought was that this would be an almost constant-time operation (surely, since the dataset is local) that Spark SQL would somehow optimize and answer immediately, especially the 1st variant, where Spark SQL is in full control of the query execution.

A look at the physical plans of the queries led me to believe that the most effective query would be the last one:

q.queryExecution.toRdd.count

The reasons being:

  1. It avoids deserializing rows from their InternalRow binary format
  2. The query is code-generated
  3. There's only one job with a single stage

The physical plan is as simple as that:

[Spark web UI screenshot: "Details for Job" showing a single job with a single stage]
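
The same properties can be checked without the web UI. A sketch using Spark SQL's own explain/debug helpers (the org.apache.spark.sql.execution.debug package ships with Spark SQL; exact output varies by version):

// Inspect the physical plan and the generated code from the shell.
import org.apache.spark.sql.execution.debug._

q.explain()       // prints the physical plan of the Dataset
q.debugCodegen()  // prints the whole-stage-generated Java code, if any
println(q.queryExecution.toRdd.toDebugString)  // lineage of the underlying RDD
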

Is my reasoning correct? If so, would the answer be different if I read the dataset from an external data source (e.g. files, JDBC, Kafka)?

The main question is: what factors should be taken into consideration when deciding whether one query is more efficient than another (as in this example)?


The other execution plans, for completeness (each was originally a web UI screenshot of the physical plan):

q.count

q.collect.size

q.rdd.count
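
Rather than reading them off screenshots, the plans can be printed directly in the shell; a sketch (Dataset.count effectively runs the same aggregation as groupBy().count(), so that is the plan to inspect for variant 1):

q.groupBy().count().explain()  // the aggregation executed by q.count
q.explain()                    // the plan behind q.collect
println(q.rdd.toDebugString)   // q.rdd.count lineage, with the extra deserialization step
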


1 Reply


I did some testing on val q = spark.range(100000000) (a sketch for reproducing the timings follows the list):

  1. q.count: ~50 ms
  2. q.collect.size: I stopped the query after a minute or so...
  3. q.rdd.count: ~1100 ms
  4. q.queryExecution.toRdd.count: ~600 ms
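
Timings like these can be reproduced with the shell's built-in timer; a rough sketch (spark.time has been available since Spark 2.1; absolute numbers will vary by machine):

// Time each variant; run each a few times to let the JVM warm up.
val q = spark.range(100000000L)

spark.time(q.count)                       // 1
// spark.time(q.collect.size)            // 2: avoid -- pulls 100M rows onto the driver
spark.time(q.rdd.count)                   // 3
spark.time(q.queryExecution.toRdd.count)  // 4
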

Some explanation:

Option 1 is by far the fastest because it uses both partial aggregation and whole-stage code generation. Whole-stage code generation allows the JVM to get really clever and perform some drastic optimizations (see: https://databricks.com/blog/2017/02/16/processing-trillion-rows-per-second-single-machine-can-nested-loop-joins-fast.html).
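
Partial aggregation means each partition computes its own local count, and only those per-partition counts are combined at the end. Conceptually it amounts to the hand-rolled version below (an illustration of the idea only, not the code Spark actually generates):

// Each task emits one local count; only a single small number per
// partition is combined -- the rows themselves never move.
val perPartition = q.queryExecution.toRdd
  .mapPartitions(rows => Iterator.single(rows.size.toLong))
val total = perPartition.reduce(_ + _)
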

Option 2 is just plain slow: it materializes everything on the driver, which is generally a bad idea.

Option 3 is like option 4, except that it first converts each InternalRow to a regular row, and this is quite expensive.
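
That conversion is visible in the static types alone; a quick check in the shell (for spark.range, every InternalRow is deserialized into a boxed java.lang.Long):

q.rdd                   // RDD[java.lang.Long] -- one object allocated per row
q.queryExecution.toRdd  // RDD[InternalRow]    -- rows stay in their binary format
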

Option 4 is about as fast as you will get without whole-stage code generation.

