I am trying to append the contents of all files from a streaming folder using Spark Structured Streaming, but it creates a lot of part files every time a micro-batch is triggered. Below is my code:
SparkSession session = SparkSession.builder().appName("SparkJava").getOrCreate();
JavaSparkContext sparkContext = new JavaSparkContext(session.sparkContext());

StructType personSchema = new StructType()
        .add("firstName", "string")
        .add("lastName", "string")
        .add("age", "long");

// 3 - Create a Dataset representing the stream of input files
Dataset<Patient> personStream = session.readStream()
        .schema(personSchema)
        .json("file:///C:/jsons1")
        .as(Encoders.bean(Patient.class));

// When data arrives from the stream, these steps will get executed
// 4 - Create a temporary table so we can use SQL queries
personStream.createOrReplaceTempView("people");
String sql = "SELECT * FROM people";
Dataset<Row> ageAverage = session.sql(sql);

StreamingQuery query = ageAverage.coalesce(1)
        .writeStream()
        .outputMode(OutputMode.Append())
        .format("json")
        .option("path", "file:///C:/output")
        .option("checkpointLocation", "file:///C:/output")
        .trigger(Trigger.ProcessingTime("10 seconds"))
        .partitionBy("age")
        .start();
Please suggest a way to combine the contents of all files from the source folder into one file in the output folder.
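
For what it's worth, one idea I have been looking at is foreachBatch (which I believe is available since Spark 2.4): it hands every micro-batch to a normal batch writer, so the batch can be coalesced to a single partition before writing. A rough sketch of what I mean is below; the checkpoint path is a made-up placeholder and the extra imports it would need are listed in the comment. As far as I understand this would still give one part file per trigger rather than one combined file, so I am not sure it really solves my problem:

// Rough sketch only - assumes Spark 2.4+ and these additional imports:
// import org.apache.spark.api.java.function.VoidFunction2;
// import org.apache.spark.sql.SaveMode;
StreamingQuery query = ageAverage.writeStream()
        .trigger(Trigger.ProcessingTime("10 seconds"))
        .option("checkpointLocation", "file:///C:/checkpoint") // placeholder, kept separate from the output folder
        .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (batchDf, batchId) -> {
            // Each micro-batch is written as a normal batch job,
            // coalesced to a single partition -> one part file per trigger
            batchDf.coalesce(1)
                   .write()
                   .mode(SaveMode.Append)
                   .json("file:///C:/output");
        })
        .start();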
Question from: https://stackoverflow.com/questions/65948943/append-a-file-using-spark-structured-streaming-java