
0 votes
932 views
in Technique by (71.8m points)

scala - Writing RDD partitions to individual parquet files in their own directories

I am struggling with a step where I want to write each RDD partition to a separate parquet file in its own directory. An example would be:

    <root>
        <entity=entity1>
            <year=2015>
                <week=45>
                    data_file.parquet

The advantage of this format is that I can use the partition keys directly as columns in Spark SQL, without repeating that data inside the actual files. It would also be a good way to reach a specific partition without storing separate partitioning metadata somewhere else.
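
For instance, I would hope to be able to query it roughly like the sketch below (assuming a SparkSession named `spark` and the hypothetical root path `/path/to/root`):

    // Sketch: the partition directories (entity, year, week) should show up
    // as columns, and filtering on them should only scan the matching
    // subdirectory. `spark` and the path are assumptions, not real values.
    import spark.implicits._
    val df = spark.read.parquet("/path/to/root")
    df.filter($"entity" === "entity1" && $"year" === 2015 && $"week" === 45)
      .show()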

As a preceding step, I have all the data loaded from a large number of gzip files and partitioned by the above keys.

One possible way would be to get each partition as a separate RDD and then write it out, though I couldn't find a good way of doing that; the best I can come up with is the sketch below.
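
For completeness, this is roughly what I have in mind (`keyedRdd` is a hypothetical RDD keyed by an (entity, year, week) tuple); it re-scans the whole RDD once per key and still leaves the actual parquet write unsolved:

    // Naive sketch (hypothetical `keyedRdd: RDD[((String, Int, Int), MyRow)]`):
    // collect the distinct keys, then filter out one sub-RDD per key.
    val keys = keyedRdd.keys.distinct().collect()
    for (key @ (entity, year, week) <- keys) {
      val part = keyedRdd.filter(_._1 == key).values
      // ...and here I would need to write `part` as parquet under
      // s"/path/to/root/entity=$entity/year=$year/week=$week",
      // which is exactly the step I don't have a good API for.
    }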

Any help would be appreciated. By the way, I am new to this stack.



1 Reply

0 votes
by (71.8m points)

I don't think the accepted answer appropriately answers the question.

Try something like this:

    df.write.partitionBy("year", "month", "day").parquet("/path/to/output")

And you will get the partitioned directory structure.
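
Adapted to the layout in the question, a minimal end-to-end sketch might look like this (the `Record` case class, the sample rows, and the output path are hypothetical; in `spark-shell` the session already exists):

    // Minimal sketch: convert the RDD to a DataFrame, then let Spark lay out
    // the directories. `Record`, the sample data, and the path are made up.
    import org.apache.spark.sql.SparkSession

    case class Record(entity: String, year: Int, week: Int, payload: String)

    val spark = SparkSession.builder().appName("partitioned-write").getOrCreate()
    import spark.implicits._

    val rdd = spark.sparkContext.parallelize(Seq(
      Record("entity1", 2015, 45, "a"),
      Record("entity1", 2015, 46, "b"),
      Record("entity2", 2015, 45, "c")
    ))

    // Produces <root>/entity=.../year=.../week=.../part-*.parquet,
    // one directory per distinct key combination.
    rdd.toDF()
      .write
      .partitionBy("entity", "year", "week")
      .parquet("/path/to/root")

Note that the partition columns end up encoded only in the directory names, not inside the parquet files themselves, and reading the root path back reconstructs them as regular columns, which is the behaviour the question was after.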

