Some websites will offer an API, a set of structured HTTP requests that return data as JSON, which you handle using the techniques from @sec-rectangling.
In this chapter, we'll first discuss the ethics and legalities of scraping before we dive into the basics of HTML.
You'll then learn the basics of CSS selectors to locate specific elements on the page, and how to use rvest functions to get data from text and attributes out of HTML and into R.
We'll then discuss some techniques to figure out what CSS selector you need for the page you're scraping, before finishing up with a couple of case studies, and a brief discussion of dynamic websites.
Whether the data is public, non-personal, and factual matters because these three factors are connected to the site's terms and conditions, personally identifiable information, and copyright, as we'll discuss below.
That said, this is the best summary we can give having read a bunch about this topic.
If the data isn't public, non-personal, or factual, or if you're scraping the data specifically to make money with it, you'll need to talk to a lawyer.
In any case, you should be respectful of the resources of the server hosting the pages you are scraping.
Most importantly, this means that if you're scraping many pages, you should make sure to wait a little between each request.
One easy way to do so is to use the [**polite**](https://dmi3kno.github.io/polite/) package by Dmytro Perepolkin.
It will automatically pause between requests and cache the results so you never ask for the same page twice.
### Terms of service
If you look closely, you'll find many websites include a "terms and conditions" or "terms of service" link somewhere on the page, and if you read that page closely you'll often discover that the site specifically prohibits web scraping.
These pages tend to be a legal land grab where companies make very broad claims.
It's polite to respect these terms of service where possible, but take any claims with a grain of salt.
US courts[^webscraping-3] have generally found that simply putting the terms of service in the footer of the website isn't sufficient for you to be bound by them.
Generally, to be bound to the terms of service, you must have taken some explicit action like creating an account or checking a box.
This is why whether or not the data is **public** is important; if you don't need an account to access them, it is unlikely that you are bound to the terms of service.
Note, however, the situation is rather different in Europe where courts have found that terms of service are enforceable even if you don't explicitly agree to them.
Even if the data is public, you should be extremely careful about scraping personally identifiable information like names, email addresses, phone numbers, dates of birth, etc.
Europe has particularly strict laws about the collection or storage of such data ([GDPR](https://gdpr-info.eu/)), and regardless of where you live you're likely to be entering an ethical quagmire.
For example, in 2016, a group of researchers scraped public profile information (e.g., usernames, age, gender, location, etc.) about 70,000 people on the dating site OkCupid and publicly released these data without any attempt at anonymization.
While the researchers felt that there was nothing wrong with this since the data were already public, this work was widely condemned due to ethics concerns around identifiability of users whose information was released in the dataset.
If your work involves scraping personally identifiable information, we strongly recommend reading about the OkCupid study[^webscraping-4] as well as similar studies with questionable research ethics involving the acquisition and release of personally identifiable information.
[^webscraping-4]: One example of an article on the OkCupid study was published by Wired, <https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science>.
Copyright law is complicated, but it's worth taking a look at the [US law](https://www.law.cornell.edu/uscode/text/17/102) which describes exactly what's protected: "\[...\] original works of authorship fixed in any tangible medium of expression, \[...\]".
Since copyright only protects original works of authorship, not facts, data is generally not protected, though the creative writing around the data often is. This is why, when you're looking for a recipe on the internet, there's always so much content beforehand: the list of ingredients isn't copyrightable, but the narrative surrounding it is.
If you do need to scrape original content (like text or images), you may still be protected under the [doctrine of fair use](https://en.wikipedia.org/wiki/Fair_use).
Fair use is not a hard and fast rule, but weighs up a number of factors.
It's more likely to apply if you are collecting the data for research or non-commercial purposes and if you limit what you scrape to just what you need.
## HTML basics
To scrape webpages, you need to first understand a little bit about **HTML**, the language that describes web pages.
HTML stands for **H**yper**T**ext **M**arkup **L**anguage and looks something like this:
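Here's a minimal, illustrative page (the specific content is made up for this example):

```html
<html>
<head>
  <title>Page title</title>
</head>
<body>
  <h1 id='first'>A heading</h1>
  <p>Some text &amp; <b>some bold text.</b></p>
  <img src='myimg.png' width='100' height='100'>
</body>
</html>
```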
HTML has a hierarchical structure formed by **elements** which consist of a start tag (e.g., `<tag>`), optional **attributes** (`id='first'`), an end tag[^webscraping-5] (like `</tag>`), and **contents** (everything in between the start and end tag).
[^webscraping-5]: A number of tags (including `<p>` and `<li>`) don't require end tags, but we think it's best to include them because it makes seeing the structure of the HTML a little easier.
- Every HTML page must be in an `<html>` element, and it must have two children: `<head>`, which contains document metadata like the page title, and `<body>`, which contains the content you see in the browser.
- Block tags like `<h1>` (heading 1), `<section>` (section), `<p>` (paragraph), and `<ol>` (ordered list) form the overall structure of the page.
- Inline tags like `<b>` (bold), `<i>` (italics), and `<a>` (link) format text inside block tags.
If you encounter a tag that you've never seen before, you can find out what it does with a little googling.
Another good place to start is the [MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTML), which describe just about every aspect of web programming.
Most elements can have content in between their start and end tags.
This content can either be text or more elements.
For example, the following HTML contains a paragraph of text, with one word in bold.
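A small example consistent with that description (the surrounding wording is just for illustration):

```html
<p>
  Hi! My <b>name</b> is Hadley.
</p>
```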
The `<b>` element has no children, but it does have contents (the text "name").
### Attributes
Tags can have named **attributes** which look like `name1='value1' name2='value2'`.
Two of the most important attributes are `id` and `class`, which are used in conjunction with CSS (Cascading Style Sheets) to control the visual appearance of the page.
These are often useful when scraping data off a page.
Attributes are also used to record the destination of links (the `href` attribute of `<a>` elements) and the source of images (the `src` attribute of the `<img>` element).
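For example (the attribute values here are purely illustrative):

```html
<a href='https://example.com'>A link</a>
<img src='diagram.png'>
<p id='intro' class='important'>A paragraph with an id and a class</p>
```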
## Extracting data
To get started scraping, you'll need the URL of the page you want to scrape, which you can usually copy from your web browser.
You'll then need to read the HTML for that page into R with `read_html()`.
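For example, to read the rvest home page (the URL here is just an illustration; substitute the page you actually want to scrape):

```{r}
library(rvest)

html <- read_html("https://rvest.tidyverse.org/")
```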
Now that you have the HTML in R, it's time to extract the data of interest.
You'll first learn about the CSS selectors that allow you to identify the elements of interest and the rvest functions that you can use to extract data from them.
Then we'll briefly cover HTML tables, which have some special tools.
### Find elements
CSS is short for cascading style sheets, and is a tool for defining the visual styling of HTML documents.
CSS includes a miniature language for selecting elements on a page called **CSS selectors**.
CSS selectors define patterns for locating HTML elements, and are useful for scraping because they provide a concise way of describing which elements you want to extract.
We'll come back to CSS selectors in more detail in @sec-css-selectors, but luckily you can get a long way with just three:
- `p` selects all `<p>` elements.
- `.title` selects all elements with `class` "title".
- `#title` selects the element with the `id` attribute that equals "title".
Id attributes must be unique within a document, so this will only ever select a single element.
Let's try out these selectors with a simple example:
```{r}
html <- minimal_html("
<h1>This is a heading</h1>
<p id='first'>This is a paragraph</p>
<p class='important'>This is an important paragraph</p>
")
```
Use `html_elements()` to find all elements that match the selector:
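For instance, with the document above:

```{r}
html |> html_elements("p")
html |> html_elements(".important")
html |> html_elements("#first")
```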
A related function, `html_element()`, always returns the same number of outputs as inputs.
The difference between the two matters when a selector matches nothing: `html_elements()` returns a vector of length 0, while `html_element()` returns a missing value.
This will be important shortly.
```{r}
html |> html_elements("b")
html |> html_element("b")
```
### Nesting selections
In most cases, you'll use `html_elements()` and `html_element()` together, typically using `html_elements()` to identify elements that will become observations then using `html_element()` to find elements that will become variables.
Let's see this in action using a simple example.
Here we have an unordered list (`<ul>`) where each list item (`<li>`) contains some information about four characters from Star Wars:
```{r}
# a small example list; the exact characters and weights are illustrative
html <- minimal_html("
  <ul>
    <li><b>C-3PO</b> is a <i>droid</i> that weighs <span class='weight'>167 kg</span></li>
    <li><b>R4-P17</b> is a <i>droid</i></li>
    <li><b>R2-D2</b> is a <i>droid</i> that weighs <span class='weight'>96 kg</span></li>
    <li><b>Yoda</b> weighs <span class='weight'>66 kg</span></li>
  </ul>
  ")
```

We can use `html_elements()` to make a vector where each element corresponds to a different character:
```{r}
characters <- html |> html_elements("li")
characters
```
To extract the name of each character, we use `html_element()`, because when applied to the output of `html_elements()` it's guaranteed to return one response per element.
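Here, we combine it with `html_text2()`[^webscraping-7] to extract the text of each matched `<b>` element:

```{r}
characters |>
  html_element("b") |>
  html_text2()
```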
[^webscraping-7]: rvest also provides `html_text()` but you should almost always use `html_text2()` since it does a better job of converting nested HTML to text.
To extract data from attributes rather than text, use `html_attr()`. It always returns a string, so if you're extracting numbers or dates, you'll need to do some post-processing.
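For example, to pull the destination out of a link (the URL is purely illustrative):

```{r}
html <- minimal_html("<p><a href='https://example.com'>a link</a></p>")

html |>
  html_element("a") |>
  html_attr("href")
```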
### Tables
If you're lucky, your data will already be stored in an HTML table, and it'll just be a matter of reading it from that table.
It's usually straightforward to recognize a table in your browser: it'll have a rectangular structure of rows and columns, and you can copy and paste it into a tool like Excel.
HTML tables are built up from four main elements: `<table>`, `<tr>` (table row), `<th>` (table heading), and `<td>` (table data).
Here's a simple HTML table with two columns and three rows:
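Below is a sketch of such a table, wrapped in `minimal_html()` so we can parse it (the numeric values are made up):

```{r}
html <- minimal_html("
  <table class='mytable'>
    <tr><th>x</th>   <th>y</th></tr>
    <tr><td>1.5</td> <td>2.7</td></tr>
    <tr><td>4.9</td> <td>1.3</td></tr>
    <tr><td>7.2</td> <td>8.1</td></tr>
  </table>
  ")
```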
Use `html_element()` to identify the table you want to extract:
```{r}
html |>
html_element(".mytable") |>
html_table()
```
Note that `x` and `y` have automatically been converted to numbers.
This automatic conversion doesn't always work, so in more complex scenarios you may want to turn it off with `convert = FALSE` and then do your own conversion.
## Finding the right selectors {#sec-css-selectors}
Figuring out the selector you need for your data is typically the hardest part of the problem.
You'll often need to do some experimenting to find a selector that is both specific (i.e. it doesn't select things you don't care about) and sensitive (i.e. it does select everything you care about).
Lots of trial and error is a normal part of the process!
[SelectorGadget](https://rvest.tidyverse.org/articles/selectorgadget.html) is a javascript bookmarklet that automatically generates CSS selectors based on the positive and negative examples that you provide.
It doesn't always work, but when it does, it's magic!
You can learn how to install and use SelectorGadget either by reading <https://rvest.tidyverse.org/articles/selectorgadget.html> or watching Mine's video at <https://www.youtube.com/watch?v=PetWV5g1Xsc>.
Every modern browser comes with some toolkit for developers, but we recommend Chrome, even if it isn't your regular browser: its web developer tools are some of the best and they're immediately available.
Right click on an element on the page and click `Inspect`.
This will open an expandable view of the complete HTML page, centered on the element that you just clicked.
You can use this to explore the page and get a sense of what selectors might work.
Pay particular attention to the class and id attributes, since these are often used to form the visual structure of the page, and hence make for good tools to extract the data that you're looking for.
Inside the Elements view, you can also right click on an element and choose `Copy as Selector` to generate a selector that will uniquely identify the element of interest.
If either SelectorGadget or Chrome DevTools have generated a CSS selector that you don't understand, try [Selectors Explained](https://kittygiraudel.github.io/selectors-explained/){.uri} which translates CSS selectors into plain English.
If you find yourself doing this a lot, you might want to learn more about CSS selectors generally.
We recommend starting with the fun [CSS dinner](https://flukeout.github.io/) tutorial and then referring to the [MDN web docs](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors).
## Putting it all together
Let's put all of this together to scrape some websites.
There's some risk that these examples may no longer work when you run them --- that's the fundamental challenge of web scraping; if the structure of the site changes, then you'll have to change your scraping code.
### Star Wars
rvest includes a very simple example in `vignette("starwars")`.
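Here's a sketch of how we might get started, assuming the rendered vignette lives at the URL below and that each film is wrapped in its own `<section>` element:

```{r}
url <- "https://rvest.tidyverse.org/articles/starwars.html"
html <- read_html(url)

films <- html |> html_elements("section")
films
```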
### IMDB top films

For a more challenging example, we'll scrape the top 250 films from IMDB. The data comes in an HTML table, so we'll read it in, remove the newlines and extra spaces, and then apply `separate_wider_regex()` (from @sec-extract-variables) to pull out the title, year, and rank into their own variables.
Even in this case where most of the data comes from table cells, it's still worth looking at the raw HTML.
If you do so, you'll discover that we can add a little extra data by using one of the attributes.
This is one of the reasons it's worth spending a little time spelunking the source of the page; you might find extra data, or might find a parsing route that's slightly easier.
```{r}
html |>
html_elements("td strong") |>
head() |>
html_attr("title")
```
We can combine this with the tabular data by pulling the `title` attribute into a new column:

```{r}
ratings |>
  mutate(
    rating_n = html |> html_elements("td strong") |> html_attr("title")
  )
```
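From here we can again apply `separate_wider_regex()` to extract out the bit of data we care about. A sketch, assuming each `title` attribute reads something like `"9.2 based on 25,000 user ratings"` (the exact pattern may need adjusting to the page you're scraping):

```{r}
ratings |>
  mutate(
    rating_n = html |> html_elements("td strong") |> html_attr("title")
  ) |>
  separate_wider_regex(
    rating_n,
    patterns = c(
      "[0-9.]+ based on ",
      number = "[0-9,]+",
      " user ratings"
    )
  ) |>
  mutate(number = parse_number(number))
```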
## Dynamic sites

So far we have focused on websites where `html_elements()` returns what you see in the browser and discussed how to parse what it returns and how to organize that information in tidy data frames.
From time to time, however, you'll hit a site where `html_elements()` and friends don't return anything like what you see in the browser.
In many cases, that's because you're trying to scrape a website that dynamically generates the content of the page with javascript.
This doesn't currently work with rvest, because rvest downloads the raw HTML and doesn't run any javascript.
It's still possible to scrape these types of sites, but rvest needs to use a more expensive process: fully simulating the web browser including running all javascript.
This functionality is not available at the time of writing, but it's something we're actively working on and might be available by the time you read this.
That approach will use the [chromote package](https://rstudio.github.io/chromote/index.html), which runs the Chrome browser in the background and gives you additional tools to interact with the site, like a human typing text and clicking buttons.
## Summary

In this chapter, you've learned about the why, the why not, and the how of scraping data from web pages.
First, you learned about the basics of HTML and using CSS selectors to refer to specific elements, and then you learned about using the rvest package to get data out of HTML and into R.
We then demonstrated web scraping with two case studies: a simpler scenario on scraping data on Star Wars films from the rvest package website and a more complex scenario on scraping the top 250 films from IMDB.
Technical details of scraping data off the web can be complex, particularly when dealing with sites that dynamically generate their pages; however, legal and ethical considerations can be even more complex.
It's important for you to educate yourself about both of these before setting out to scrape data.
This brings us to the end of the import part of the book where you've learned techniques to get data from where it lives (spreadsheets, databases, JSON files, and web sites) into a tidy form in R.