
0 votes
731 views
in Technique [Technology] by (71.8m points)

scrapy - Avoid Duplicate URL Crawling

I wrote a simple crawler. In the settings.py file, following the Scrapy documentation, I set

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'

If I stop the crawler and then restart it, it scrapes the same URLs again. Am I doing something wrong?



1 Reply

0 votes
by (71.8m points)

I believe what you are looking for is "persistence support", which lets you pause and resume crawls.

To enable it, run:

scrapy crawl somespider -s JOBDIR=crawls/somespider-1
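
You can then stop the crawl safely (for example with a single Ctrl-C, so Scrapy has a chance to persist its state) and resume it later by issuing the same command with the same JOBDIR:

scrapy crawl somespider -s JOBDIR=crawls/somespider-1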

You can read more about it in the Scrapy documentation on jobs (pausing and resuming crawls): https://docs.scrapy.org/en/latest/topics/jobs.html
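
If you prefer not to pass the flag on every run, the job directory can also be set project-wide in settings.py. The snippet below is a minimal sketch that reuses the directory name from the example above; with JOBDIR set, the default RFPDupeFilter writes its request fingerprints to requests.seen inside that directory, so a restarted crawl skips URLs it has already recorded.

# settings.py -- minimal sketch; 'crawls/somespider-1' is just the example directory
# JOBDIR persists the scheduler queue and the dupefilter state between runs;
# RFPDupeFilter stores request fingerprints in <JOBDIR>/requests.seen.
JOBDIR = 'crawls/somespider-1'

# Note: in recent Scrapy versions the dupefilter module is scrapy.dupefilters
# (plural); RFPDupeFilter is the default, so this line is optional.
DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'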



...