
python - Spark Cloudfiles Autoloader BlobStorage Azure java.io.IOException: Attempted read from closed stream

I am learning to use the new Auto Loader streaming method on Spark 3 and I have run into this issue. I am trying to listen for simple JSON files, but my stream never starts.

My code (credentials removed):

from pyspark.sql.types import StructType, StringType, IntegerType

azdls_connection_string = "My connection string"
queue_name = "queu-name"

stream_schema = (
  StructType()
  .add("timestamp", StringType(), False)
  .add("temperature", IntegerType(), False)
)

resource_group = ""
cloudfiles_subid_telem = ""
cloudfiles_clientid_telem = ""
cloudfiles_clientsecret_telem = ""
tenantid = ""
container_name = "mydb"
dls_name = ""  # Data Lake Storage account name
abs_fs = "abfss://" + container_name + "@" + dls_name + ".dfs.core.windows.net"

read_stream = (
  spark.readStream.format("cloudFiles")
  .option("cloudFiles.useNotifications", True)
  .option("cloudFiles.format", "json")
  .option("cloudFiles.connectionString", azdls_connection_string)
  .option("cloudFiles.resourceGroup", ressource_group)
  .option("cloudFiles.subscriptionId", cloudfiles_subid_telem)
  .option("cloudFiles.tenantId", tenantid)
  .option("cloudFiles.clientId", cloudfiles_clientid_telem)
  .option("cloudFiles.clientSecret", cloudfiles_clientsecret_telem)
  .option("cloudFiles.region", "francecentral")
  .schema(stream_schema)
  .option("cloudFiles.includeExistingFiles", False)
  .load(abs_fs + "/input")
)

checkpoint_path = abs_fs + "/checkpoints"
out_path = abs_fs + "/out"

df = (
  read_stream.writeStream.format("delta")
  .option("checkpointLocation", checkpoint_path)
  .start(out_path)
)

When I try to start my stream, I get an error. My permissions seem to be set correctly, because my Azure Queue does get created. I can't find any information about this error in the Auto Loader documentation on the Databricks website.

Here is the error:

java.io.IOException: Attempted read from closed stream.
    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:165)
    at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at java.io.Reader.read(Reader.java:140)
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2001)
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1980)
    at org.apache.commons.io.IOUtils.copy(IOUtils.java:1957)
    at org.apache.commons.io.IOUtils.copy(IOUtils.java:1907)
    at org.apache.commons.io.IOUtils.toString(IOUtils.java:778)
    at org.apache.commons.io.IOUtils.toString(IOUtils.java:759)
    at com.databricks.sql.aqs.EventGridClient.prettyResponse(EventGridClient.scala:428)
    at com.databricks.sql.aqs.EventGridClient.com$databricks$sql$aqs$EventGridClient$$errorResponse(EventGridClient.scala:424)
    at com.databricks.sql.aqs.EventGridClient$$anonfun$createEventSubscription$3.applyOrElse(EventGridClient.scala:235)
    at com.databricks.sql.aqs.EventGridClient$$anonfun$createEventSubscription$3.applyOrElse(EventGridClient.scala:229)
    at com.databricks.sql.aqs.EventGridClient.executeRequest(EventGridClient.scala:387)
    at com.databricks.sql.aqs.EventGridClient.createEventSubscription(EventGridClient.scala:226)
    at com.databricks.sql.aqs.autoIngest.AzureEventNotificationSetup.$anonfun$setupEventGridSubscription$1(AzureEventNotificationSetup.scala:135)
    at scala.Option.getOrElse(Option.scala:189)
    at com.databricks.sql.aqs.autoIngest.AzureEventNotificationSetup.setupEventGridSubscription(AzureEventNotificationSetup.scala:121)
    at com.databricks.sql.aqs.autoIngest.AzureEventNotificationSetup.<init>(AzureEventNotificationSetup.scala:75)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.databricks.sql.fileNotification.autoIngest.EventNotificationSetup$.$anonfun$create$1(EventNotificationSetup.scala:66)
    at com.databricks.sql.fileNotification.autoIngest.ResourceManagementUtils$.unwrapInvocationTargetException(ResourceManagementUtils.scala:42)
    at com.databricks.sql.fileNotification.autoIngest.EventNotificationSetup$.create(EventNotificationSetup.scala:50)
    at com.databricks.sql.fileNotification.autoIngest.CloudFilesSourceProvider.$anonfun$createSource$2(CloudFilesSourceProvider.scala:162)
    at scala.Option.getOrElse(Option.scala:189)
    at com.databricks.sql.fileNotification.autoIngest.CloudFilesSourceProvider.createSource(CloudFilesSourceProvider.scala:154)
    at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:306)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.$anonfun$applyOrElse$1(MicroBatchExecution.scala:93)
    at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:90)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:88)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:322)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:80)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:322)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:166)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:164)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:311)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:88)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:68)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:346)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:269)
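
As a sanity check, here is a minimal sketch of the fallback I can use to isolate the problem (my assumption: if cloudFiles.useNotifications is left out, Auto Loader falls back to directory listing, so the Event Grid subscription that fails in the trace above is never created):

debug_stream = (
  spark.readStream.format("cloudFiles")
  # no cloudFiles.useNotifications option: directory-listing mode,
  # so no queue or Event Grid subscription is set up
  .option("cloudFiles.format", "json")
  .schema(stream_schema)
  .load(abs_fs + "/input")
)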

Thank you in advance



1 Reply


OK, I found my issue. I had a conflict with a previous stream left over from when I was trying the abs-aqs connector.
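
In case it helps, this is roughly the kind of cleanup that resolved it for me (a minimal sketch, assuming the leftover stream from the abs-aqs test was still active in the session and that starting over with a fresh checkpoint location is acceptable; the "/checkpoints_v2" path is just an illustrative name):

# Stop any streaming queries still running from the earlier abs-aqs test.
for q in spark.streams.active:
    print("Stopping stream:", q.name, q.id)
    q.stop()

# Restart the Auto Loader stream against a fresh checkpoint so it does not
# pick up the notification setup left behind by the old stream.
new_checkpoint_path = abs_fs + "/checkpoints_v2"  # illustrative new location

df = (
  read_stream.writeStream.format("delta")
  .option("checkpointLocation", new_checkpoint_path)
  .start(out_path)
)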

