Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

pyspark - Spark load data and add filename as dataframe column

I am loading some data into Spark with a wrapper function:

import os
from pyspark.sql.functions import lit

def load_data(filename):
    df = (sqlContext.read.format("com.databricks.spark.csv")
          .option("delimiter", "")
          .option("header", "false")
          .option("mode", "DROPMALFORMED")
          .load(filename))
    # strip both extensions (.txt.gz) from the file name to get the hostname
    (hostname, _) = os.path.splitext(os.path.basename(filename))
    (hostname, _) = os.path.splitext(hostname)
    df = df.withColumn('hostname', lit(hostname))
    return df

Specifically, I am using a glob to load a bunch of files at once:

df = load_data( '/scratch/*.txt.gz' )

The files are:

/scratch/host1.txt.gz
/scratch/host2.txt.gz
...

I would like the 'hostname' column to contain the actual name of the file being loaded (i.e. host1, host2, etc.) rather than the glob pattern (*).

How can I do this?


1 Reply


You can use input_file_name which:

Creates a string column for the file name of the current Spark task.

from pyspark.sql.functions import input_file_name

df.withColumn("filename", input_file_name())
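Note that input_file_name returns the full path (often as a URI, e.g. file:///scratch/host1.txt.gz). If you want just the host part, one option is to post-process that column with regexp_extract. A minimal sketch of the extraction pattern, assuming file names of the form host*.txt.gz (the helper and sample paths here are illustrative, not from the original question):

```python
import re

# Pattern: the last path component, with the trailing .txt.gz stripped.
# Adjust the extension part if your files are named differently.
HOST_PATTERN = r"([^/]+)\.txt\.gz$"

def extract_hostname(path):
    """Return the host name embedded in a /scratch/<host>.txt.gz path."""
    m = re.search(HOST_PATTERN, path)
    return m.group(1) if m else None

# The same pattern can be applied inside Spark:
#   from pyspark.sql.functions import input_file_name, regexp_extract
#   df = df.withColumn("hostname",
#                      regexp_extract(input_file_name(), HOST_PATTERN, 1))

print(extract_hostname("/scratch/host1.txt.gz"))         # host1
print(extract_hostname("file:///scratch/host2.txt.gz"))  # host2
```

Because input_file_name is evaluated per task, this gives each row the name of the file it was actually read from, which is what the glob-based wrapper could not do.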

Same thing in Scala:

import org.apache.spark.sql.functions.input_file_name

df.withColumn("filename", input_file_name)
