Automated Web Scraping in R

Sponsored Post. By Rebecca Merrett, Instructor at Data Science Dojo

There are many blogs and tutorials that teach you how to scrape data from a bunch of web pages once, and then you’re done. But one-off web scraping is not useful for applications that require sentiment analysis on recent or timely content, capturing changing events and commentary, or analyzing trends in real time. As fun as it is to scrape the web once for an academic exercise on historical data, that approach falls short when you want to work with timely or frequently updated data.

Scenario: You would like to tap into news sources to analyze the political events that are changing by the hour and people’s comments on these events. These events could be analyzed to summarize the key discussions and debates in the comments, rate the overall sentiment of the comments, find the key themes in the headlines, see how events and commentary change over time, and more. You need a collection of recent political events or news scraped every hour so that you can analyze these events.

Let’s go fetch your data!

Example source:

Reddit’s r/politics is a repository of political news from a variety of news sites and includes comments or discussion on the news. Content is added and updated at least every hour.

Tool:

R’s rvest library is an easy-to-use tool for web scraping content within HTML tags. To install rvest, run this command in R:
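    install.packages("rvest")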

A quick demonstration of rvest

The commands below scrape the news headline and comments from a Reddit page.
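As a rough sketch, assuming the post lives on old Reddit (whose server-rendered HTML is easy to parse) and that the selectors a.title and .commentarea .md p pick out the headline and comment paragraphs, the commands could look like this (the thread URL is a placeholder; check the page source to confirm the selectors):

    library(rvest)

    # Placeholder thread URL: substitute any r/politics post
    thread_url <- "https://old.reddit.com/r/politics/comments/<post_id>/<post_title>/"
    thread <- read_html(thread_url)

    # Headline of the post (selector is an assumption about old Reddit's markup)
    headline <- thread %>% html_nodes("a.title") %>% html_text()

    # Paragraphs of the comments (again, an assumed selector)
    comments <- thread %>% html_nodes(".commentarea .md p") %>% html_text()

    headline
    head(comments)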

How did we grab this text? We grabbed the text between the relevant HTML tags and classes. Right-click on the web page and select View page source to search for the text and find the relevant HTML tags.

Web scraping – let’s go!

The web scraping program we are going to write will:

  • Grab the URLs and times of the latest pages added to r/politics

  • Filter the pages down to those marked as published no more than an hour ago

  • Loop through each filtered page and scrape the headline and comments from it

  • Create a dataframe containing each Reddit news headline and each comment belonging to that headline.

Once the data is in a dataframe, you are then free to plug these data into your analysis function.

Step 1

First, we need to load rvest into R and read in our Reddit political news data source.
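A minimal sketch of this step, assuming we read from old Reddit’s /new/ listing for r/politics (the URL is an assumption; old Reddit serves plain HTML that rvest can parse directly):

    library(rvest)

    # r/politics sorted by newest posts (URL is an assumption; adjust as needed)
    source_url <- "https://old.reddit.com/r/politics/new/"
    politics_page <- read_html(source_url)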

 

Step 2

Next, we need to grab the times of all the news pages and their corresponding URLs so we can filter these down to pages published within the hour (i.e., all pages published minutes ago).
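A sketch, assuming each post’s tagline on the listing page carries a relative timestamp (“5 minutes ago”, “1 hour ago”, …) and that the comments link points to the discussion thread; both selectors are assumptions, so verify them against the page source:

    # Relative timestamps of each post on the listing page
    time <- politics_page %>% html_nodes("p.tagline time") %>% html_text()

    # Thread URLs (the comments link, so we land on the discussion page)
    urls <- politics_page %>% html_nodes("a.comments") %>% html_attr("href")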

Step 3

To filter pages, we need to make a dataframe out of our ‘time’ and ‘urls’ vectors. We’ll filter our rows based on a partial match of the time marked as either ‘x minutes’ or ‘now’.

Go ahead and filter the URLs based on the time either being ‘x minutes’ or ‘now’.
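One way to do this, using the ‘time’ and ‘urls’ vectors from the previous step:

    # Combine times and URLs, then keep rows whose time reads "x minutes ago" or "just now"
    reddit_news <- data.frame(time = time, urls = urls, stringsAsFactors = FALSE)
    recent <- reddit_news[grepl("minute|now", reddit_news$time), ]
    recent_urls <- recent$urls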

Step 4

Now that we have filtered down to the pages published within the hour, we are going to grab the golden nuggets of data that we plan to analyze. We’ll loop through the filtered list of URLs, grab the headline and the paragraph text of the comments from each page, and store these in their own vectors. Each comment will be its own element in the vector, paired with the news headline it belongs to.
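A sketch of the loop, reusing the selectors assumed earlier; the headline is repeated once per comment so the two vectors stay aligned:

    titles <- character(0)
    comments <- character(0)

    for (url in recent_urls) {
      thread <- read_html(url)

      # Post headline (first match) and the comment paragraphs on the thread page
      headline <- thread %>% html_nodes("a.title") %>% html_text()
      thread_comments <- thread %>% html_nodes(".commentarea .md p") %>% html_text()

      # Repeat the headline for each comment so each row pairs a comment with its headline
      titles <- c(titles, rep(headline[1], length(thread_comments)))
      comments <- c(comments, thread_comments)

      Sys.sleep(2)  # be polite: pause between requests
    }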

Step 5

Last but not least, we’ll create a dataframe using the ‘titles’ and ‘comments’ vectors to get the data ready for analyzing.
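A sketch, assuming the ‘titles’ and ‘comments’ vectors built in the loop above:

    # One row per comment, paired with the headline it belongs to
    reddit_data <- data.frame(titles = titles, comments = comments, stringsAsFactors = FALSE)
    head(reddit_data)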

 

There are several ways you could analyze these texts, depending on your application. For example, Data Science Dojo’s free Text Analytics video series goes through an end-to-end demonstration of preparing and analyzing text to predict the class label of the text. With nearly every single web page or business document containing some text, it is worth understanding the fundamentals of data mining for text, as well as important machine learning concepts.

Automate running your web scraping script

Here’s where the real automation comes into play. So far we have completed a fairly standard web scraping task, but with the addition of filtering and grabbing content based on a time window or timeframe. This script will save us from manually fetching the data every hour ourselves. But we need to automate the whole process by running this script in the background of our computer and freeing our hands to work on more interesting tasks.

Task Scheduler in Windows offers an easy user interface to schedule a script or program to run every minute, hour, day, week, month, and so on. The macOS alternative to Task Scheduler is Automator and the Linux alternative is GNOME Schedule. We’ll use Task Scheduler in this tutorial, but Automator and GNOME Schedule operate in a similar way.

Step 1

Go to the Action menu in Task Scheduler and select Create Task.

Step 2

Give your task a name such as ‘Web Scraper Reddit Politics’.

Select the option Run whether user is logged on or not.

Click OK.

Step 3

Go to the Actions tab and click New.

Make sure the Start a program option is selected in the Action dropdown menu.

Copy the full path to the Rcmd.exe file on your local computer (it sits in your R installation’s bin folder) and paste it into the Program/script box.

Copy the full path to your R script on your local computer and paste it into the Add arguments box, with ‘BATCH’ before the path.
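For example, the two boxes might look like this (both paths are illustrative; your R version and script location will differ):

    Program/script:  C:\Program Files\R\R-4.1.2\bin\Rcmd.exe
    Add arguments:   BATCH "C:\path\to\your_script.R"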

Click OK.

Step 4

Go to the Conditions tab and under Power select the Wake the computer to run this task option.

Step 5

Go to the Triggers tab and click New.

Under Advanced settings, check Repeat task every and select 1 hour, with the duration set to Indefinitely.

Click OK, then OK again to exit the window.

Step 6

Click on Active Tasks in the main panel to check your web scraping task is active.

Your script will now run every hour to web scrape the latest data from Reddit’s r/politics.

What’s next?

Now that you have a program automatically fetching your data for you, what should you do with the data? You could extend your R script with an analysis function or component so that your script not only fetches the data every hour, but also analyzes it and sends an email alert with the results.

But first you need to understand how to analyze text. It’s worth getting a good grasp of text analytics or natural language processing. With so much textual data in the world underutilized, not only on the web but also in important business documents, it’s useful to add text mining skills to your data science toolbox. You can learn these skills properly without having to sign up for a five-year PhD or master’s program. Data Science Dojo’s bootcamp is a five-day, hands-on course that can help you get up to speed on both text analytics and core machine learning concepts and algorithms.