Update webscraping.qmd (#1175)

* Update webscraping.qmd

A few minor edits fixing typos etc. Also added two comments.

* Update webscraping.qmd

* Update webscraping.qmd

* Make HTML code visible

Co-authored-by: Mine Cetinkaya-Rundel <cetinkaya.mine@gmail.com>
mcsnowface, PhD 2022-12-06 23:31:57 -07:00 committed by GitHub
parent 60052fe34e
commit 3c4ee847e0
1 changed file with 19 additions and 18 deletions


@@ -10,12 +10,13 @@ status("polishing")
This vignette introduces you to the basics of web scraping with [rvest](https://rvest.tidyverse.org).
Web scraping is a very useful tool for extracting data from web pages.
Some websites will offer an API, a set of structured HTTP requests that return data as JSON, which you handle using the techniques from @sec-rectangling.
-Where possible, you should use the API, because typically it will give you more reliably data.
-Unfortunately however, programming with web APIs is out of scope for this book, and we instead teaching scraping, a technique that works whether or not a site provides an API.
+Where possible, you should use the API, because typically it will give you more reliable data.
+Unfortunately, however, programming with web APIs is out of scope for this book.
+Instead, we are teaching scraping, a technique that works whether or not a site provides an API.
In this chapter, we'll first discuss the ethics and legalities of scraping before we dive into the basics of HTML.
You'll then learn the basics of CSS selectors to locate specific elements on the page, and how to use rvest functions to get data from text and attributes out of HTML and into R.
-We'll then discuss some techniques to figure out what CSS selector you need for the page you're scraping, before finish up with a couple of case studies, and a brief discussion of dynamic websites.
+We'll then discuss some techniques to figure out what CSS selector you need for the page you're scraping, before finishing up with a couple of case studies, and a brief discussion of dynamic websites.
### Prerequisites
@@ -37,7 +38,7 @@ Before we get started discussing the code you'll need to perform web scraping, w
Overall, the situation is complicated with regard to both of these.
Legalities depend a lot on where you live.
-However, as general principle, if the data is public, non-personal, and factual, you're likely to be ok[^webscraping-1].
+However, as a general principle, if the data is public, non-personal, and factual, you're likely to be ok[^webscraping-1].
These three factors are important because they're connected to the site's terms and conditions, personally identifiable information, and copyright, as we'll discuss below.
[^webscraping-1]: Obviously we're not lawyers, and this is not legal advice.
@@ -55,7 +56,7 @@ If you look closely, you'll find many websites include a "terms and conditions"
These pages tend to be a legal land grab where companies make very broad claims.
It's polite to respect these terms of service where possible, but take any claims with a grain of salt.
-US courts[^webscraping-2] have generally found that simply putting the terms of service in the footer of the website isn't sufficient for you be bound by them.
+US courts[^webscraping-2] have generally found that simply putting the terms of service in the footer of the website isn't sufficient for you to be bound by them.
Generally, to be bound to the terms of service, you must have taken some explicit action like creating an account or checking a box.
This is why whether or not the data is **public** is important; if you don't need an account to access them, it is unlikely that you are bound to the terms of service.
Note, however, the situation is rather different in Europe where courts have found that terms of service are enforceable even if you don't explicitly agree to them.
@@ -64,7 +65,7 @@ Note, however, the situation is rather different in Europe where courts have fou
### Personally identifiable information
-Even if the data is public, you should be extremely careful about scraping personally identifiable information like name, email address, phone numbers, date of birth etc.
+Even if the data is public, you should be extremely careful about scraping personally identifiable information like names, email addresses, phone numbers, dates of birth, etc.
Europe has particularly strict laws about the collection and storage of such data (GDPR), and regardless of where you live you're likely to be entering an ethical quagmire.
For example, in 2016, a group of researchers scraped public profile information (e.g., usernames, age, gender, location, etc.) about 70,000 people on the dating site OkCupid and they publicly released these data without any attempt at anonymization.
While the researchers felt that there was nothing wrong with this since the data were already public, this work was widely condemned due to ethics concerns around identifiability of users whose information was released in the dataset.
@@ -75,14 +76,14 @@ If your work involves scraping personally identifiable information, we strongly
### Copyright
Finally, you also need to worry about copyright law.
-Copyright law is complicated, but it's worth taking a look at the [US law](https://www.law.cornell.edu/uscode/text/17/102) which describes exactly what's protected: "original works of authorship fixed in any tangible medium of expression".
-It then goes on to describe specific categories that it applies like literary woks, musical works, motions pictures and more.
+Copyright law is complicated, but it's worth taking a look at the [US law](https://www.law.cornell.edu/uscode/text/17/102) which describes exactly what's protected: "[...] original works of authorship fixed in any tangible medium of expression, [...]".
+It then goes on to describe specific categories that it applies to, like literary works, musical works, motion pictures, and more.
Notably absent from copyright protection are data.
This means that as long as you limit your scraping to facts, copyright protection does not apply.
(But note that Europe has a separate "[sui generis](https://en.wikipedia.org/wiki/Database_right)" right that protects databases.)
-As a brief example, in the US lists of ingredient and instructions are not copyrightable, so copyright can not be used to protect a recipe.
-But if that list of recipes is accompanied by substantial novel literary content, that is is copyrightable.
+As a brief example, in the US, lists of ingredients and instructions are not copyrightable, so copyright can not be used to protect a recipe.
+But if that list of recipes is accompanied by substantial novel literary content, that is copyrightable.
This is why when you're looking for a recipe on the internet there's always so much content beforehand.
If you do need to scrape original content (like text or images), you may still be protected under the [doctrine of fair use](https://en.wikipedia.org/wiki/Fair_use).
@@ -124,7 +125,7 @@ Web scraping is possible because most pages that contain data that you want to s
All up, there are over 100 HTML elements.
Some of the most important are:
-- Every HTML page must be must be in an `<html>` element, and it must have two children: `<head>`, which contains document metadata like the page title, and `<body>`, which contains the content you see in the browser.
+- Every HTML page must be in an `<html>` element, and it must have two children: `<head>`, which contains document metadata like the page title, and `<body>`, which contains the content you see in the browser.
- Block tags like `<h1>` (heading 1), `<section>` (section), `<p>` (paragraph), and `<ol>` (ordered list) form the overall structure of the page.
@@ -137,11 +138,12 @@ Most elements can have content in between their start and end tags.
This content can either be text or more elements.
For example, the following HTML contains a paragraph of text, with one word in bold.
```{=html}
```
<p>
Hi! My <b>name</b> is Hadley.
</p>
```
The **children** of a node refer only to elements, so the `<p>` element above has one child, the `<b>` element.
The `<b>` element has no children, but it does have contents (the text "name").
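As a quick sketch (assuming the rvest package is installed), you can check the children/contents distinction yourself with `minimal_html()`, `html_children()`, and `html_text2()`:

```r
library(rvest)

# Parse the snippet above; minimal_html() wraps a fragment in a full page
html <- minimal_html("
  <p>
    Hi! My <b>name</b> is Hadley.
  </p>
")

p <- html |> html_element("p")
p |> html_children()  # a single child: the <b> element
p |> html_text2()     # the contents as text: "Hi! My name is Hadley."
```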
@@ -269,7 +271,7 @@ We want to try and get the weight for each character
characters |> html_element(".weight")
```
-If we instead used `html_elements()` we lose the connection between names and weights:
+If we instead used `html_elements()`, we lose the connection between names and weights:
```{r}
characters |> html_elements(".weight")
@@ -347,7 +349,7 @@ html <- minimal_html("
```
rvest provides a function that knows how to read this sort of data: `html_table()`.
-It returns a list containing with one tibble for each table found on the page.
+It returns a list containing one tibble for each table found on the page.
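To make that concrete, here's a small sketch (assuming rvest; the inline table and its `x`/`y` column names are invented for illustration) contrasting the list you get from the whole document with the single tibble you get after `html_element()`:

```r
library(rvest)

html <- minimal_html("
  <table>
    <tr><th>x</th><th>y</th></tr>
    <tr><td>1</td><td>a</td></tr>
    <tr><td>2</td><td>b</td></tr>
  </table>
")

html |> html_table()                           # a list with one tibble
html |> html_element("table") |> html_table()  # just the tibble itself
```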
Use `html_element()` to identify the table you want to extract:
```{r}
@@ -364,7 +366,7 @@ This automatic conversion doesn't always work, so in more complex scenarios you
Figuring out the selector you need for your data is typically the hardest part of the problem.
You'll often need to do some experimenting to find a selector that is both specific (i.e. it doesn't select things you don't care about) and sensitive (i.e. it does select everything you care about).
Lots of trial and error is a normal part of the process!
-There are two main tools that are available to help you with this process: SelectorGagdget and your browser's developer tools.
+There are two main tools that are available to help you with this process: SelectorGadget and your browser's developer tools.
[SelectorGadget](https://rvest.tidyverse.org/articles/selectorgadget.html) is a javascript bookmarklet that automatically generates CSS selectors based on the positive and negative examples that you provide.
It doesn't always work, but when it does, it's magic!
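You can also do that trial and error directly in code. Here's a sketch (assuming rvest; the snippet, class, and id are invented for illustration) comparing what each candidate selector matches:

```r
library(rvest)

html <- minimal_html("
  <p id='intro' class='note'>Welcome</p>
  <p class='note'>A second note</p>
  <p>A plain paragraph</p>
")

html |> html_elements("p")       # sensitive but not specific: all three
html |> html_elements(".note")   # the two elements with class 'note'
html |> html_elements("#intro")  # specific: only the element with that id
```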
@@ -377,7 +379,6 @@ You can use this to explore the page and get a sense of what selectors might wor
Pay particular attention to the class and id attributes, since these are often used to form the visual structure of the page, and hence make for good tools to extract the data that you're looking for.
Inside the Elements view, you can also right click on an element and choose `Copy as Selector` to generate a selector that will uniquely identify the element of interest.
-You'll need
If either SelectorGadget or Chrome DevTools have generated a CSS selector that you don't understand, try [Selectors Explained](https://kittygiraudel.github.io/selectors-explained/){.uri} which translates CSS selectors into plain English.
If you find yourself doing this a lot, you might want to learn more about CSS selectors generally.
@@ -482,7 +483,7 @@ table
This includes a few empty columns, but overall does a good job of capturing the information from the table.
However, we need to do some more processing to make it easier to use.
-First, we'll rename the columns to be easier work with, and remove the extraneous whitespace in rank and title.
+First, we'll rename the columns to be easier to work with, and remove the extraneous whitespace in rank and title.
We will do this with `select()` (instead of `rename()`) to do the renaming and selecting of just these two columns in one step.
Then, we'll apply `separate_wider_regex()` (from @sec-extract-variables) to pull out the title, year, and rank into their own variables.
@@ -545,7 +546,7 @@ This doesn't currently work with rvest, because rvest downloads the raw HTML and
It's still possible to scrape these types of sites, but rvest needs to use a more expensive process: fully simulating the web browser including running all javascript.
This functionality is not available at the time of writing, but it's something we're actively working on and should be available by the time you read this.
-It uses the [chromote package](https://rstudio.github.io/chromote/index.html) which actually runs chrome browser in the background, and gives you additional tools to interact with the site, like a human typing text and clicking buttons.
+It uses the [chromote package](https://rstudio.github.io/chromote/index.html) which actually runs the Chrome browser in the background, and gives you additional tools to interact with the site, like a human typing text and clicking buttons.
Check out the rvest website for more details.
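For a flavour of what that involves, here is a hypothetical sketch driving chromote directly (the URL is a placeholder, and the exact workflow may differ from what rvest eventually exposes; this requires a local Chrome installation):

```r
library(chromote)
library(rvest)

b <- ChromoteSession$new()              # starts a headless Chrome in the background
b$Page$navigate("https://example.com")  # placeholder URL
b$Page$loadEventFired()                 # block until the page has finished loading

# Grab the javascript-rendered DOM and hand it to rvest as usual
rendered <- b$Runtime$evaluate("document.documentElement.outerHTML")$result$value
page <- read_html(rendered)
b$close()
```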
## Summary