
loops - R function is looping over the same data in webscraper

This is the program I have written:

    library(rvest)
    library(RCurl)
    library(XML)
    library(stringr)


    #Getting the number of Page
    getPageNumber <- function(URL){
      parsedDocument = read_html(URL)
      Sort1 <- html_nodes(parsedDocument, 'div')
      Sort2 <- Sort1[which(html_attr(Sort1, "class") == "pageNumbers al-pageNumbers")] 
      P <- str_count(html_text(Sort2), pattern = " \\d+\n")
      return(ifelse(length(P) == 0, 0, max(P)))
    }


    #Getting all articles based off of their DOI
    getAllArticles <-function(URL){
      parsedDocument = read_html(URL)
      Sort1 <- html_nodes(parsedDocument,'div')
      Sort2 <-  Sort1[which(html_attr(Sort1, "class") == "al-citation-list")]
      ArticleDOInumber = trimws(gsub(".*10.1093/dnares/","",html_text(Sort2)))
      URL3 <- "https://doi.org/10.1093/dnares/"
      URL4 <- paste(URL3, ArticleDOInumber, sep = "")
      return(URL4)
    }


    Title <- function(parsedDocument){
      Sort1 <- html_nodes(parsedDocument, 'h1')
      Title <- gsub("<h1>\n|\n</h1>","",Sort1)
      return(Title)
    }


    #main function with input as parameter year
    findURL <- function(year_chosen){
      if(year_chosen >= 1994){
      noYearURL = glue::glue("https://academic.oup.com/dnaresearch/search-results?rg_IssuePublicationDate=01%2F01%2F{year_chosen}%20TO%2012%2F31%2F{year_chosen}")
      pagesURl = "&fl_SiteID=5275&startpage="
      URL = paste(noYearURL, pagesURl, sep = "")
      #URL is working with parameter year_chosen
      Page <- getPageNumber(URL)
      

      Page2 <- 0
      while(Page < Page2 | Page != Page2){
        Page <- Page2
        URL3 <- paste(URL, Page-1, sep = "")
        Page2 <- getPageNumber(URL3)    
      }
      R_Data <- data.frame()
      for(i in 1:Page){ #0:Page-1
        URL2 <- getAllArticles(paste(URL, i, sep = ""))
        for(j in 1:(length(URL2))){
          parsedDocument <- read_html(URL2[j])
          print(URL2[j])
          R <- data.frame("Title" = Title(parsedDocument),stringsAsFactors = FALSE)
          #R <- data.frame("Title" = Title(parsedDocument), stringsAsFactors = FALSE)
          R_Data <- rbind(R_Data, R)
        } 
      }
      paste(URL2)
      suppressWarnings(write.csv(R_Data, "DNAresearch.csv", row.names = FALSE, sep = "\t"))
      #return(R_Data)
      } else {
        print("The Year you provide is out of range, this journal only contain articles from 2005 to present")
      }
    }

    findURL(2003)
See Question&Answers more detail:os

与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…
Welcome To Ask or Share your Answers For Others

1 Reply


It is not that it is reading the same URL; it is that you are selecting the wrong node, which happens to yield repeating info. As I mentioned in your last question, you need to rework your Title function. The rewritten Title below extracts the actual article title based on its class name and a single-node match.

There are also some other areas of the code that look like they could probably be simplified in terms of logic (a sketch of one such simplification follows the full listing). Please also note the removal of your sep arg.
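As background on that removal: write.csv() hard-codes the comma separator and ignores a user-supplied sep with a warning, which is what your suppressWarnings() call was masking. A minimal check:

# write.csv() fixes sep = "," internally; passing sep yourself only triggers
# the warning "attempt to set 'sep' ignored", the separator does not change.
write.csv(data.frame(x = 1), tempfile(fileext = ".csv"), sep = "\t")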


Title function:

Title <- function(parsedDocument) {
  Title <- parsedDocument %>%
    html_node(".article-title-main") %>%
    html_text() %>%
    gsub("\r\n\s+", "", .) %>%
    trimws(.)
  return(Title)
}
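To see the single-node match in isolation, here is a quick check against a stub document (the inline HTML is purely illustrative; on the live pages the matching class is article-title-main as above):

# Stub page with a decoy <h1>: html_node() returns only the first element
# matching the class selector, so the decoy heading is never picked up.
doc <- read_html('<h1 class="article-title-main">
    An Example Article Title
  </h1>
  <h1>Journal banner</h1>')
Title(doc)
#> [1] "An Example Article Title"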

R:

library(rvest)
library(XML)
library(stringr)


# Getting the number of Page
getPageNumber <- function(URL) {
  # print(URL)
  parsedDocument <- read_html(URL)
  Sort1 <- html_nodes(parsedDocument, "div")
  Sort2 <- Sort1[which(html_attr(Sort1, "class") == "pagination al-pagination")]
  P <- str_count(html_text(Sort2), pattern = " \\d+\n")
  return(ifelse(length(P) == 0, 0, max(P)))
}
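# For reference, the pattern above counts whitespace-delimited page numbers in
# the pagination text; e.g. (illustrative input, not taken from the live site)
# str_count(" 1\n 2\n 3\n", pattern = " \\d+\n") returns 3.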

# Getting all articles based off of their DOI
getAllArticles <- function(URL) {
  print(URL)
  parsedDocument <- read_html(URL)
  Sort1 <- html_nodes(parsedDocument, "div")
  Sort2 <- Sort1[which(html_attr(Sort1, "class") == "al-citation-list")]
  ArticleDOInumber <- trimws(gsub(".*10.1093/dnares/", "", html_text(Sort2)))
  URL3 <- "https://doi.org/10.1093/dnares/"
  URL4 <- paste(URL3, ArticleDOInumber, sep = "")
  return(URL4)
}


Title <- function(parsedDocument) {
  Title <- parsedDocument %>%
    html_node(".article-title-main") %>%
    html_text() %>%
    gsub("\r\n\s+", "", .) %>%
    trimws(.)
  return(Title)
}


# main function with input as parameter year
findURL <- function(year_chosen) {
  if (year_chosen >= 1994) {
    noYearURL <- glue::glue("https://academic.oup.com/dnaresearch/search-results?rg_IssuePublicationDate=01%2F01%2F{year_chosen}%20TO%2012%2F31%2F{year_chosen}")
    pagesURl <- "&fl_SiteID=5275&page="
    URL <- paste(noYearURL, pagesURl, sep = "")
    # URL is working with parameter year_chosen
    Page <- getPageNumber(URL)


    # The while loop below re-probes the page count only when the first request reports 5 pages
    if (Page == 5) {
      Page2 <- 0
      while (Page < Page2 | Page != Page2) {
        Page <- Page2
        URL3 <- paste(URL, Page - 1, sep = "")
        Page2 <- getPageNumber(URL3)
      }
    }
    R_Data <- data.frame()
    for (i in 1:Page) {
      URL2 <- getAllArticles(paste(URL, i, sep = ""))
      for (j in 1:(length(URL2))) {
        parsedDocument <- read_html(URL2[j])
        #print(URL2[j])
        #print(Title(parsedDocument))
        R <- data.frame("Title" = Title(parsedDocument), stringsAsFactors = FALSE)
        #print(R)
        R_Data <- rbind(R_Data, R)
      }
    }
    write.csv(R_Data, "Group4.csv", row.names = FALSE)
  } else {
    print("The Year you provide is out of range, this journal only contain articles from 2005 to present")
  }
}

findURL(2003)
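On the simplification point: the paging and scraping could be flattened into a single pass that reuses the helpers above. This is an untested sketch, assuming getPageNumber() already reports the final page count for the chosen year:

# Sketch only: visit pages 1..Page once each and grow a vector of titles,
# instead of rbind-ing one-row data frames inside the loop.
scrapeYear <- function(URL) {
  Page <- getPageNumber(URL)
  titles <- character(0)
  for (i in seq_len(Page)) {
    for (article in getAllArticles(paste0(URL, i))) {
      titles <- c(titles, Title(read_html(article)))
    }
  }
  data.frame(Title = titles, stringsAsFactors = FALSE)
}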
