Welcome to OGeek Q&A Community for programmers and developers - Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes · 439 views · in Technique by (71.8m points)

python - how to get the name of column with maximum value in pyspark dataframe

How do we get the name of the column with the maximum value in a pyspark dataframe?

   Alice  Eleonora  Mike  Helen       MAX
0      2         7     8      6      Mike
1     11         5     9      4     Alice
2      6        15    12      3  Eleonora
3      5         3     7      8     Helen

I need something like this: the names of the columns, not the max values. I am able to get the max values; I need the names.
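For contrast, the desired-output table above is pandas-style, and in pandas (assuming it is installed) this is a single idxmax call; the answers below solve the same problem in pyspark:

```python
import pandas as pd

# sample data copied from the table in the question
df = pd.DataFrame({
    "Alice":    [2, 11, 6, 5],
    "Eleonora": [7, 5, 15, 3],
    "Mike":     [8, 9, 12, 7],
    "Helen":    [6, 4, 3, 8],
})

# idxmax(axis=1) returns, for each row, the column label of the largest value
df["MAX"] = df[["Alice", "Eleonora", "Mike", "Helen"]].idxmax(axis=1)
print(df["MAX"].tolist())  # ['Mike', 'Alice', 'Eleonora', 'Helen']
```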



1 Reply

0 votes · by (71.8m points)

You can chain conditions to find which column is equal to the maximum value:

import pyspark.sql.functions as psf

# build a chained condition: when(col == max_value, lit(col_name)).when(...)...
cond = "psf.when" + ".when".join(
    ["(psf.col('" + c + "') == psf.col('max_value'), psf.lit('" + c + "'))" for c in df.columns])
df.withColumn("max_value", psf.greatest(*df.columns)) \
    .withColumn("MAX", eval(cond)) \
    .show()

    +-----+--------+----+-----+---------+--------+
    |Alice|Eleonora|Mike|Helen|max_value|     MAX|
    +-----+--------+----+-----+---------+--------+
    |    2|       7|   8|    6|        8|    Mike|
    |   11|       5|   9|    4|       11|   Alice|
    |    6|      15|  12|    3|       15|Eleonora|
    |    5|       3|   7|    8|        8|   Helen|
    +-----+--------+----+-----+---------+--------+

OR: explode and filter

from itertools import chain

# create_map builds a {column_name: value} map per row; posexplode turns it
# into (pos, key, value) rows, which we filter down to the row maximum
df.withColumn("max_value", psf.greatest(*df.columns)) \
    .select("*", psf.posexplode(psf.create_map(list(chain(*[(psf.lit(c), psf.col(c)) for c in df.columns]))))) \
    .filter("max_value = value") \
    .select(df.columns + [psf.col("key").alias("MAX")]) \
    .show()

OR: using a UDF on a dictionary:

from pyspark.sql.types import StringType

# the map column arrives in the UDF as a Python dict; max(m, key=m.get)
# returns the key with the largest value
argmax_udf = psf.udf(lambda m: max(m, key=m.get), StringType())
df.withColumn("map", psf.create_map(list(chain(*[(psf.lit(c), psf.col(c)) for c in df.columns])))) \
    .withColumn("MAX", argmax_udf("map")) \
    .drop("map") \
    .show()
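The dict-argmax trick inside that UDF can be checked in plain Python, without Spark:

```python
# one row of the example data, as the UDF would receive it
row = {"Alice": 2, "Eleonora": 7, "Mike": 8, "Helen": 6}

# key=row.get makes max() compare the values while returning the key
print(max(row, key=row.get))  # Mike
```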

OR: using a UDF with a parameter:

from pyspark.sql.types import StringType

def argmax(cols, *args):
    # return the first column name whose value equals the row maximum
    return [c for c, v in zip(cols, args) if v == max(args)][0]

argmax_udf = lambda cols: psf.udf(lambda *args: argmax(cols, *args), StringType())
df.withColumn("MAX", argmax_udf(df.columns)(*df.columns)) \
    .show()
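The plain-Python core of this parameterised argmax can likewise be verified on its own, using rows from the example table:

```python
def argmax(cols, *args):
    # return the first column name whose value equals the row maximum
    return [c for c, v in zip(cols, args) if v == max(args)][0]

cols = ["Alice", "Eleonora", "Mike", "Helen"]
print(argmax(cols, 2, 7, 8, 6))   # Mike
print(argmax(cols, 11, 5, 9, 4))  # Alice
```

Note it returns the first matching column on ties, same as the chained-when approach above.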
