
web scraping - Python - Why is this data being written to file incorrectly?

Only the first result is being written to the CSV, with one letter of the URL per row, instead of all URLs being written, one per row.

What am I not doing right in the last section of this code that causes the CSV to contain only one of the results instead of all of them?

import requests
from bs4 import BeautifulSoup
import csv

def grab_listings():
    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/2/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/3/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/4/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/5/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/6/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/7/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/8/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/9/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

l = grab_listings()


with open ("gyms.csv", "wb") as file:
        writer = csv.writer(file)
        for row in l:
            writer.writerow(row)


1 Reply


So I refactored your code a bit, and I think it should now work as you expect:

import requests
from bs4 import BeautifulSoup
import csv


def grab_listings(page_idx):
    ret = []
    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/"
           "page/{}/").format(page_idx) # the index of the page will be inserted here
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class": "wlt_search_results"})
    for elem in l_area.findAll("a", {"class": "frame"}):
        # be sure to add all your results to a list and return it,
        # if you return here then you will only get the first result
        ret.append(elem["href"])
    return ret


def main():
    l = []  # this will be a list of lists
    # call the function nine times, with page_idx from 1 to 9
    for page_idx in range(1, 10):
        l.append(grab_listings(page_idx))
    print(l)

    # Python 3: open in text mode with newline="" for the csv module
    with open("gyms.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for row in l:
            # be sure that each row is a list here; if it is only
            # a string, all characters will be separated by a comma.
            writer.writerow(row)

# for writing each URL on its own line with a comma at the end:
#    with open("gyms.csv", "w") as f:
#        for row in l:
#            string_to_write = ',\n'.join(row)
#            f.write(string_to_write)

if __name__ == '__main__':
    main()
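The key pitfall is in how `writerow` treats its argument: it iterates over whatever it receives, so a bare string is split into individual characters (one per column), while a list of strings becomes one cell per string. A minimal standalone sketch of the difference (the file name demo.csv is just for illustration):

import csv

url = "http://www.gym-directory.com/some-gym/"

with open("demo.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # a bare string is a sequence of characters, so this
    # writes h,t,t,p,:,/,/,... as separate columns
    writer.writerow(url)
    # wrapping the string in a list writes it as a single cell
    writer.writerow([url])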

I added some comments to the code and hope it is explanatory enough. If not, just ask :)
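
If you want exactly one URL per row, as the question asks, you can also flatten the per-page lists and wrap each URL in a one-element list before writing it. A sketch of that variant (the helper name write_one_url_per_row is made up for illustration):

import csv

def write_one_url_per_row(urls_by_page):
    # urls_by_page is a list of lists, one inner list per page
    with open("gyms.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for page in urls_by_page:
            for url in page:
                # a one-element list puts the whole URL in a single cell
                writer.writerow([url])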

