
python - Is there a way to write pyspark dataframe to azure cache for redis?

I have a PySpark dataframe with 2 columns. I created an Azure Cache for Redis instance. I would like to write the PySpark dataframe to Redis, with the first column of the dataframe as the key and the second column as the value. How can I do it in Azure?


1 Reply


You need to leverage this library: https://github.com/RedisLabs/spark-redis, along with the associated JARs (which ones you need depends on the Spark and Scala versions you are using).

In my case I installed 3 JARs on the Spark cluster (Scala 2.12, latest Spark):

  1. spark-redis_2.12-2.6.0.jar
  2. commons-pool2-2.10.0.jar
  3. jedis-3.6.0.jar
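
If you'd rather not install the JARs by hand, the same dependencies can be pulled from Maven Central when the session starts. A minimal sketch, assuming the coordinate matching the versions above (commons-pool2 and jedis are resolved transitively):

from pyspark.sql import SparkSession

# Fetch the spark-redis connector (plus its jedis and commons-pool2
# dependencies) from Maven Central instead of installing JARs manually.
spark = (
    SparkSession.builder
    .config("spark.jars.packages", "com.redislabs:spark-redis_2.12:2.6.0")
    .getOrCreate()
)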

Along with that, set the configuration for connecting to Redis in the cluster conf:

spark.redis.auth PASSWORD
spark.redis.port 6379
spark.redis.host xxxx.xxx.cache.windows.net
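
If you can't edit the cluster conf, the same settings can be supplied in code when the session is built. A minimal sketch, with the host and password as placeholders exactly as above:

from pyspark.sql import SparkSession

# Placeholder values -- substitute your cache's hostname and access key.
spark = (
    SparkSession.builder
    .config("spark.redis.host", "xxxx.xxx.cache.windows.net")
    .config("spark.redis.port", "6379")
    .config("spark.redis.auth", "PASSWORD")
    .getOrCreate()
)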

Make sure your Azure Cache for Redis instance is on Redis 4.0; the library might have issues with 6.0. Sample code to push the dataframe:

from pyspark.sql.types import StructType, StructField, StringType

# Build a small example dataframe with an explicit schema.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("colA", StringType(), True),
    StructField("colB", StringType(), True)
])

data = [
    ['1', '8', '2'],
    ['2', '5', '3'],
    ['3', '3', '1'],
    ['4', '7', '2']
]
df = spark.createDataFrame(data, schema=schema)
df.show()

# Write each row as a Redis hash keyed "mytable:<id>";
# the remaining columns become the hash fields.
(
    df
    .write
    .format("org.apache.spark.sql.redis")
    .option("table", "mytable")
    .option("key.column", "id")
    .save()
)
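
To verify the write, you can read the table back through the same connector; a quick sketch:

# Read the rows back from Redis; the connector rebuilds the dataframe
# from the hashes stored under "mytable:*".
df_read = (
    spark.read
    .format("org.apache.spark.sql.redis")
    .option("table", "mytable")
    .option("key.column", "id")
    .load()
)
df_read.show()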

 
