Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
814 views
in Technique [Technology] by (71.8m points)

hadoop - Access files that start with underscore in Apache Spark

I am trying to access gz files on S3 that start with _ in Apache Spark. Unfortunately, Spark treats these files as hidden and returns Input path does not exist: s3n:.../_1013.gz. If I remove the underscore, it finds the file just fine.

I tried adding a custom PathFilter to the hadoopConfig:

package CustomReader

import org.apache.hadoop.fs.{Path, PathFilter}

class GFilterZip extends PathFilter {
  override def accept(path: Path): Boolean = {
    true
  }
}
// in spark settings
sc.hadoopConfiguration.setClass("mapreduce.input.pathFilter.class", classOf[CustomReader.GFilterZip], classOf[org.apache.hadoop.fs.PathFilter])

but I still have the same problem. Any ideas?

System: Apache Spark 1.6.0 with Hadoop 2.3



1 Reply

0 votes
by (71.8m points)

Files whose names start with _ or . are treated as hidden files.

The hiddenFileFilter is always applied; it is added inside the method org.apache.hadoop.mapred.FileInputFormat.listStatus, and a user-supplied PathFilter is combined with it rather than replacing it, which is why your custom filter did not help.

See this answer: Which files are ignored as input by the mapper?
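For reference, the check that hiddenFileFilter performs behaves like the sketch below (pure Scala, mimicking the logic in FileInputFormat rather than quoting the Hadoop source; the function name isVisible is hypothetical):

```scala
// Sketch of the logic of Hadoop's built-in hiddenFileFilter, which
// FileInputFormat.listStatus always applies: a path is accepted only if
// its final name component starts with neither "_" nor ".".
def isVisible(name: String): Boolean =
  !name.startsWith("_") && !name.startsWith(".")

// The file from the question is rejected before any user-supplied
// PathFilter is even consulted:
println(isVisible("1013.gz"))   // true  -> listed as input
println(isVisible("_1013.gz"))  // false -> "Input path does not exist"
println(isVisible("_SUCCESS"))  // false -> job marker files are skipped too
```

Because the built-in filter runs unconditionally, the practical options are to rename the files so they no longer start with _ or ., or to use a custom InputFormat that overrides listStatus.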

