Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share


python - Date difference between consecutive rows - PySpark DataFrame

I have a table with the following structure:

USER_ID     Tweet_ID                 Date
  1           1001       Thu Aug 05 19:11:39 +0000 2010
  1           6022       Mon Aug 09 17:51:19 +0000 2010
  1           1041       Sun Aug 19 11:10:09 +0000 2010
  2           9483       Mon Jan 11 10:51:23 +0000 2012
  2           4532       Fri May 21 11:11:11 +0000 2012
  3           4374       Sat Jul 10 03:21:23 +0000 2013
  3           4334       Sun Jul 11 04:53:13 +0000 2013

Basically, what I would like is a PySpark SQL query that calculates the date difference (in seconds) between consecutive records with the same user_id. The expected result would be:

1      Sun Aug 19 11:10:09 +0000 2010 - Mon Aug 09 17:51:19 +0000 2010     839930
1      Mon Aug 09 17:51:19 +0000 2010 - Thu Aug 05 19:11:39 +0000 2010     340780
2      Fri May 21 11:11:11 +0000 2012 - Mon Jan 11 10:51:23 +0000 2012     11319588
3      Sun Jul 11 04:53:13 +0000 2013 - Sat Jul 10 03:21:23 +0000 2013     91910
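As a sanity check, timestamps in this layout can be parsed with Python's `datetime`; the format string below is an assumption based on the Twitter-style `created_at` layout shown above. Parsing the first user's three rows reproduces the differences in the expected result:

```python
from datetime import datetime

# Assumed format for "Thu Aug 05 19:11:39 +0000 2010"-style timestamps.
FMT = "%a %b %d %H:%M:%S %z %Y"

t1 = datetime.strptime("Thu Aug 05 19:11:39 +0000 2010", FMT)
t2 = datetime.strptime("Mon Aug 09 17:51:19 +0000 2010", FMT)
t3 = datetime.strptime("Sun Aug 19 11:10:09 +0000 2010", FMT)

assert int((t2 - t1).total_seconds()) == 340780
assert int((t3 - t2).total_seconds()) == 839930
```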


1 Reply


Another way could be:

from pyspark.sql.functions import lag
from pyspark.sql.window import Window

# Assumes `date` is a timestamp column; casting to bigint yields epoch seconds.
df.withColumn(
    "time_intertweet",
    df.date.cast("bigint")
    - lag(df.date.cast("bigint"), 1).over(
        Window.partitionBy("user_id").orderBy("date")
    ),
)
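To check the logic locally without a Spark session, the same partition-by-user / order-by-date / lag pattern can be sketched in plain Python; the timestamp format string is an assumption based on the Twitter-style dates in the question:

```python
from datetime import datetime
from itertools import groupby

# Sample rows copied from the question: (user_id, tweet_id, date string).
rows = [
    (1, 1001, "Thu Aug 05 19:11:39 +0000 2010"),
    (1, 6022, "Mon Aug 09 17:51:19 +0000 2010"),
    (1, 1041, "Sun Aug 19 11:10:09 +0000 2010"),
    (2, 9483, "Mon Jan 11 10:51:23 +0000 2012"),
    (2, 4532, "Fri May 21 11:11:11 +0000 2012"),
    (3, 4374, "Sat Jul 10 03:21:23 +0000 2013"),
    (3, 4334, "Sun Jul 11 04:53:13 +0000 2013"),
]

# Assumed Twitter-style created_at format.
FMT = "%a %b %d %H:%M:%S %z %Y"

def lag_diffs(rows):
    """Per-user differences in seconds between consecutive tweets,
    mirroring lag(...).over(Window.partitionBy("user_id").orderBy("date"))."""
    parsed = sorted(
        ((uid, datetime.strptime(d, FMT)) for uid, _, d in rows),
        key=lambda r: (r[0], r[1]),
    )
    out = {}
    for uid, grp in groupby(parsed, key=lambda r: r[0]):
        ts = [t for _, t in grp]
        out[uid] = [int((b - a).total_seconds()) for a, b in zip(ts, ts[1:])]
    return out

# e.g. lag_diffs(rows)[1] -> [340780, 839930]
```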

