Automated Web Scraping in R



Sponsored Post.
By Rebecca Merrett, Instructor at Data Science Dojo
There are many blogs and tutorials that teach you how to scrape data from a bunch of web pages once and then you’re done. But one-off web scraping isn’t enough for applications that need sentiment analysis on recent or timely content, that capture changing events and commentary, or that analyze trends in real time. As fun as it is to scrape a historical dataset as an academic exercise, it doesn’t help when you need timely or frequently updated data.

Scenario: You would like to tap into news sources to analyze the political events that are changing by the hour and people’s comments on these events. These events could be analyzed to summarize the key discussions and debates in the comments, rate the overall sentiment of the comments, find the key themes in the headlines, see how events and commentary change over time, and more. You need a collection of recent political events or news scraped every hour so that you can analyze these events.

Web scraping is the process of collecting data from the World Wide Web and transforming it into a structured, organized format so that it can be useful to us. The most basic form of it is the old-school method of copy and paste. Typically, though, web scraping refers to an automated procedure, and there are several techniques for doing it.

Let’s go fetch your data!

Example source:

Reddit’s r/politics is a repository of political news from a variety of news sites and includes comments or discussion on the news. Content is added and updated at least every hour.


Tool:


R’s rvest library is an easy-to-use tool for web scraping content within html tags. To install rvest run this command in R:
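    install.packages("rvest")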

A quick demonstration of rvest

The commands below scrape the news headline and comments from a Reddit page.
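The snippet below is a minimal sketch: the URL is a placeholder and the CSS selectors are assumptions based on old Reddit’s markup, so adjust them to match the page you are actually scraping.

    library(rvest)

    # Placeholder URL for a single r/politics post page
    reddit_page <- read_html("https://old.reddit.com/r/politics/comments/xxxxxx/example_post/")

    # Headline: the post's title link (selector is an assumption; verify it in the page source)
    headline <- reddit_page %>% html_nodes("a.title") %>% html_text()

    # Comments: paragraph text inside the markdown body of each comment
    comments <- reddit_page %>% html_nodes("div.md p") %>% html_text()

    headline
    head(comments)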

How did we grab this text? We grabbed the text between the relevant HTML tags and classes. Right click on the web page and select View page source to search for the text and find the relevant HTML tags.

Web scraping – let’s go!

The web scraping program we are going to write will:

  • Grab the URL and time of the latest Reddit pages added to r/politics
  • Filter the pages down to those that are marked as published no more than an hour ago
  • Loop through each filtered page and scrape the main headline and comments from each page
  • Create a dataframe containing the Reddit news headline and each comment belonging to that headline.

Once the data is in a dataframe, you are then free to plug these data into your analysis function.

Step 1

First, we need to load rvest into R and read in our Reddit political news data source.
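Here’s a minimal sketch of that step (the old.reddit.com listing URL is an assumption; it simply serves plain HTML that rvest parses easily):

    library(rvest)

    # Read the "new" listing of r/politics
    source_page <- read_html("https://old.reddit.com/r/politics/new/")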

Step 2

Next, we need to grab the times of all the news pages and all the corresponding URLs so we can filter these down to pages published within the hour (i.e. all pages published minutes ago).
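Continuing the sketch, and assuming old Reddit’s markup (a time tag holding each post’s relative timestamp, and a ‘comments’ class on each post’s comments link):

    # Relative publication times, e.g. "5 minutes ago", "2 hours ago", "just now"
    time <- source_page %>% html_nodes("time") %>% html_text()

    # URLs of the posts' comment pages
    urls <- source_page %>% html_nodes("a.comments") %>% html_attr("href")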


Step 3

To filter pages, we need to make a dataframe out of our ‘time’ and ‘urls’ vectors. We’ll filter our rows based on a partial match of the time marked as either ‘x minutes’ or ‘now’.


Go ahead and filter the URLs based on the time either being ‘x minutes’ or ‘now’.
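One way to do that:

    # Combine the two vectors, then keep only the pages whose time text
    # contains "minute" or "now", i.e. pages published less than an hour ago
    pages <- data.frame(time = time, urls = urls, stringsAsFactors = FALSE)
    recent <- pages[grepl("minute|now", pages$time), ]
    recent_urls <- recent$urls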

Step 4


Now that we have filtered down to the pages published within the hour, we are going to grab the golden nuggets of data that we plan to analyze. We’ll loop through the filtered list of URLs, grab the main headline and the paragraph text of the comments, and store these in their own vectors. Each comment will be its own element in the vector, paired with the news title/headline it belongs to.
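A sketch of the loop, reusing the assumed selectors from the demonstration above (adjust them to whatever markup you find in the page source):

    titles <- c()
    comments <- c()

    for (url in recent_urls) {
      page <- read_html(url)

      # Post headline (first matching title link)
      headline <- page %>% html_nodes("a.title") %>% html_text()

      # Comment paragraphs (this selector is an assumption; it may also pick
      # up sidebar text, so refine it against the actual page source)
      page_comments <- page %>% html_nodes("div.md p") %>% html_text()

      if (length(page_comments) > 0) {
        titles <- c(titles, rep(headline[1], length(page_comments)))
        comments <- c(comments, page_comments)
      }

      Sys.sleep(2)  # pause between requests to avoid hammering the site
    }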

Step 5

Last but not least, we’ll create a dataframe using the ‘titles’ and ‘comments’ vectors to get the data ready for analyzing.
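For example (writing each hourly snapshot to a timestamped CSV is optional, but handy once the script runs on a schedule):

    # One row per comment, paired with the headline it belongs to
    reddit_data <- data.frame(titles = titles, comments = comments,
                              stringsAsFactors = FALSE)

    # Optionally save each hourly run to its own file
    write.csv(reddit_data,
              paste0("reddit_politics_", format(Sys.time(), "%Y%m%d_%H%M"), ".csv"),
              row.names = FALSE)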

There are several ways you could analyze these texts, depending on your application. For example, Data Science Dojo’s free Text Analytics video series goes through an end-to-end demonstration of preparing and analyzing text to predict the class label of the text. With nearly every single web page or business document containing some text, it is worth understanding the fundamentals of data mining for text, as well as important machine learning concepts.

Automate running your web scraping script

Here’s where the real automation comes into play. So far we have completed a fairly standard web scraping task, but with the addition of filtering and grabbing content based on a time window or timeframe. This script will save us from manually fetching the data every hour ourselves. But we need to automate the whole process by running this script in the background of our computer and freeing our hands to work on more interesting tasks.

Task Scheduler in Windows offers an easy user interface to schedule a script or program to run every minute, hour, day, week, month, etc. The OS X alternative to Task Scheduler is Automator and the Linux alternative is GNOME Schedule. We’ll use Task Scheduler in this tutorial, but Automator and GNOME Schedule operate in a similar way to Task Scheduler.

Step 1

Go to the Action tab in Task Scheduler and select Create Task.

Step 2

Give your task a name such as ‘Web Scraper Reddit Politics’.

Select the option Run whether user is logged in or not.

Click OK.

Step 3

Go to the Actions tab and click New.

Make sure Start a program option is selected from the Action dropdown menu.

Copy the full path to Rcmd.exe (in your R installation’s bin folder) and paste it into the Program/script box.

Copy the full path to your R script on your local computer and paste it into the Add arguments box, with ‘BATCH’ before the path.
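For example (both paths are hypothetical; use the locations on your own machine):

    Program/script: C:\Program Files\R\R-3.2.2\bin\Rcmd.exe
    Add arguments:  BATCH C:\Users\you\Documents\reddit_scraper.R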

Click OK.

Step 4


Go to the Conditions tab and under Power select the Wake the computer to run this task option.

Step 5

Go to the Triggers tab and click New.

Under Advanced settings, tick Repeat task every and select 1 hour, with a duration of Indefinitely.


Click OK, then OK again to exit the window.


Step 6


Click on Active Tasks in the main panel to check your web scraping task is active.

Your script will now run every hour to web scrape the latest data from Reddit’s r/politics.


What’s next?

Now that you have a program automatically fetching your data for you, what to do with the data? You could add to your R script by writing an analysis function or component so that your script not only fetches the data every hour, but also analyzes it and sends an email alert of the results of the analysis.

But first you need to understand how to analyze text. It’s worth getting a good grasp of text analytics or natural language processing. With so much textual data underutilized, not only on the web but also in important business documents, it’s useful to have text mining skills in your knowledge set and data science toolbox. You can learn these skills properly without having to sign up for a five-year PhD or Masters program. Data Science Dojo’s bootcamp is a five-day, hands-on course that can help you get up to speed on both text analytics and core machine learning concepts and algorithms.