# Data transformation {#data-transform}

```{r, results = "asis", echo = FALSE}
status("restructuring")
```

## Introduction
Visualisation is an important tool for insight generation, but it is rare that you get the data in exactly the right form you need.
Often you'll need to create some new variables or summaries, or maybe you just want to rename the variables or reorder the observations in order to make the data a little easier to work with.
You'll learn how to do all that (and more!) in this chapter, which will introduce you to data transformation using the dplyr package and a new dataset on flights departing New York City in 2013.

The goal of this chapter is to give you an overview of all the key tools for transforming a data frame.
We'll come back to these functions in more detail in later chapters, as we start to dig into specific types of data (e.g. numbers, strings, dates).

### Prerequisites

In this chapter we're going to focus on how to use the dplyr package, another core member of the tidyverse.
We'll illustrate the key ideas using data from the nycflights13 package, and use ggplot2 to help us understand the data.

```{r setup}
library(nycflights13)
library(tidyverse)
```

Take careful note of the conflicts message that's printed when you load the tidyverse.
It tells you that dplyr overwrites some functions in base R.
If you want to use the base version of these functions after loading dplyr, you'll need to use their full names: `stats::filter()` and `stats::lag()`.

### nycflights13
To explore the basic dplyr verbs, we're going to look at `nycflights13::flights`.
This data frame contains all `r format(nrow(nycflights13::flights), big.mark = ",")` flights that departed from New York City in 2013.
The data comes from the US [Bureau of Transportation Statistics](http://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&Link=0), and is documented in `?flights`.

```{r}
flights
```
If you've used R before, you might notice that this data frame prints a little differently to data frames that you might've worked with in the past.
That's because it's a **tibble**, a special type of data frame designed by the tidyverse team to avoid some common data.frame gotchas.
The most important difference is the way it prints: tibbles are designed for large datasets, so they only show the first few rows and only the columns that fit on one screen.
If you want to see everything you can use `View(flights)` to open the dataset in the RStudio viewer.
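Another option is `glimpse()`, which the tidyverse also loads for you: it shows a transposed view of the data, with every column name, its type, and the first few values.

```{r}
glimpse(flights)
```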
We'll come back to other important differences in Chapter \@ref(tibbles).

You might also have noticed the row of short abbreviations following each column name.
These describe the type of each variable: `<int>` is short for integer, `<dbl>` is short for double (aka real numbers), `<chr>` for characters (aka strings), and `<dttm>` for date-times.
These are important because the operations you can perform on a column depend so much on its type, and the types are used to organize the chapters in the Transform section of this book.

### dplyr basics
In this chapter you are going to learn the primary dplyr verbs, which will allow you to solve the vast majority of your data manipulation challenges.
All dplyr verbs work the same way:

1. The first argument is a data frame.
2. The subsequent arguments describe what to do with the data frame, using the variable names (without quotes).
3. The result is a new data frame.

This means that dplyr code typically looks something like this:

```{r, eval = FALSE}
data |>
  filter(x == 1) |>
  mutate(
    y = x + 1
  )
```
`|>` is a special operator called a pipe.
It takes the thing on its left and passes it along to the function on its right.
The easiest way to pronounce the pipe is "then", so you can read the code above as: take `data`, then filter it, then mutate it.
In RStudio, you can insert the pipe by pressing Ctrl/Cmd + Shift + M.
We'll come back to the pipe and its alternatives in more detail in Chapter \@ref(workflow-pipes).

Behind the scenes, `x |> f(y)` turns into `f(x, y)`, and `x |> f(y) |> g(z)` turns into `g(f(x, y), z)`, and so on.
You can use the pipe to rewrite multiple operations in a way that you can read left-to-right, top-to-bottom.
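For example, the short pipeline above is equivalent to this nested call, which you have to read from the inside out:

```{r, eval = FALSE}
# Same result as data |> filter(x == 1) |> mutate(y = x + 1)
mutate(filter(data, x == 1), y = x + 1)
```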
We'll use piping frequently from now on because it considerably improves the readability of code.
Together these properties make it easy to chain together multiple simple steps to achieve a complex result.

The verbs are organised into four groups based on what they operate on: **rows**, **columns**, **groups**, or **tables**.
In the following sections you'll learn the most important verbs for rows, columns, and groups.
We'll come back to operations that work on multiple tables in Chapter \@ref(relational-data).
Let's dive in!

## Rows
The most important verbs that affect the rows are `filter()`, which changes which rows are present without changing their order, and `arrange()`, which changes the order of the rows without changing which are present.
Both functions only affect the rows, so the columns are left unchanged.

### `filter()`

`filter()` allows you to pick rows based on the values of the columns[^data-transform-1].
The first argument is the data frame.
The second and subsequent arguments are the conditions that must be true to keep the row.
For example, we could find all flights that arrived more than 120 minutes (two hours) late:

[^data-transform-1]: Later, you'll learn about the `slice_*()` family, which allows you to choose rows based on their positions.

```{r}
flights |>
  filter(arr_delay > 120)
```
As well as `>` (greater than), R provides `>=` (greater than or equal to), `<` (less than), `<=` (less than or equal to), `==` (equal to), and `!=` (not equal to).
You can use `&` (and) or `|` (or) to combine multiple conditions:

```{r}
# Flights that departed on January 1
flights |>
  filter(month == 1 & day == 1)

# Flights that departed in January or February
flights |>
  filter(month == 1 | month == 2)
```
There's a useful shortcut when you're combining `|` and `==`: `%in%`.
It returns true if the value on the left hand side is any of the values on the right hand side:

```{r}
flights |>
  filter(month %in% c(1, 2))
```

We'll come back to these comparisons and logical operators in more detail in Chapter \@ref(logicals-numbers).

When you run `filter()` dplyr executes the filtering operation, creating a new data frame, and then prints it.
It doesn't modify the existing `flights` dataset because dplyr functions never modify their inputs.
To save the result, you need to use the assignment operator, `<-`:

```{r}
jan1 <- flights |>
  filter(month == 1 & day == 1)
```

### `arrange()`
`arrange()` changes the order of the rows based on the values of the columns.
Again, it takes a data frame and a set of column names (or more complicated expressions) to order by.
If you provide more than one column name, each additional column will be used to break ties in the values of the preceding columns.
For example, the following code sorts by the departure time, which is spread over four columns:

```{r}
flights |>
  arrange(year, month, day, dep_time)
```

You can use `desc()` to re-order by a column in descending order.
For example, this is useful if you want to see the most delayed flights:

```{r}
flights |>
  arrange(desc(dep_delay))
```

You can of course combine `arrange()` and `filter()` to solve more complex problems.
For example, we could look for the flights that were most delayed on arrival but left roughly on time:

```{r}
flights |>
  filter(dep_delay <= 10 & dep_delay >= -10) |>
  arrange(desc(arr_delay))
```

### Common mistakes
When you're starting out with R, the easiest mistake to make is to use `=` instead of `==` when testing for equality.
`filter()` will let you know when this happens:

```{r, error = TRUE}
flights |>
  filter(month = 1)
```

Another mistake is writing "or" statements like you would in English:

```{r, eval = FALSE}
flights |>
  filter(month == 1 | 2)
```

This works, in the sense that it doesn't throw an error, but it doesn't do what you want.
We'll come back to what it does and why in Section \@ref(boolean-operations).

### Exercises
1.  Find all flights that

    a.  Had an arrival delay of two or more hours
    b.  Flew to Houston (`IAH` or `HOU`)
    c.  Were operated by United, American, or Delta
    d.  Departed in summer (July, August, and September)
    e.  Arrived more than two hours late, but didn't leave late
    f.  Were delayed by at least an hour, but made up over 30 minutes in flight

2.  Sort `flights` to find the flights with the longest departure delays.
    Find the flights that left earliest in the morning.

3.  Sort `flights` to find the fastest flights (Hint: try sorting by a calculation).

4.  Which flights traveled the farthest?
    Which traveled the shortest?

5.  Does it matter what order you used `filter()` and `arrange()` in if you're using both?
    Why/why not?
    Think about the results and how much work the functions would have to do.

## Columns
There are four important verbs that affect the columns without changing the rows: `mutate()`, `select()`, `rename()`, and `relocate()`.
`mutate()` creates new columns that are functions of the existing columns; `select()`, `rename()`, and `relocate()` change which columns are present, their names, and their positions.

### `mutate()`
The job of `mutate()` is to add new columns that are calculated from the existing columns.
In the transform chapters, you'll learn a large set of functions that you can use to manipulate different types of variables.
For now, we'll stick with basic algebra, which allows us to compute the `gain`, how much time a delayed flight made up in the air, and the `speed` in miles per hour:

```{r}
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    speed = distance / air_time * 60
  )
```

By default, `mutate()` adds new columns on the right hand side of your dataset, which makes it hard to see what's happening here.
We can use the `.before` argument to instead add the variables to the left hand side[^data-transform-2]:

[^data-transform-2]: Remember that when you're in RStudio, the easiest way to see all the columns is `View()`.

```{r}
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    speed = distance / air_time * 60,
    .before = 1
  )
```
The `.` is a sign that `.before` is an argument to the function, not the name of a new variable.
You can also use `.after` to add after a variable, and in both `.before` and `.after` you can use the name of a variable instead of a position.
For example, we could add the new variables after `day`:

```{r}
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    speed = distance / air_time * 60,
    .after = day
  )
```

Alternatively, you can control which variables are kept with the `.keep` argument.
A particularly useful value is `"used"`, which allows you to see the inputs and outputs of your calculations:

```{r}
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    hours = air_time / 60,
    gain_per_hour = gain / hours,
    .keep = "used"
  )
```

### `select()` {#select}
It's not uncommon to get datasets with hundreds or even thousands of variables.
In this case, the first challenge is often focusing on just the variables you're interested in.
`select()` allows you to rapidly zoom in on a useful subset using operations based on the names of the variables.

`select()` is not terribly useful with the flights data because we only have 19 variables, but you can still get the general idea of how it works:

```{r}
# Select columns by name
flights |>
  select(year, month, day)

# Select all columns between year and day (inclusive)
flights |>
  select(year:day)

# Select all columns except those from year to day (inclusive)
flights |>
  select(-(year:day))

# Select all columns that are characters
flights |>
  select(where(is.character))
```

There are a number of helper functions you can use within `select()`:

- `starts_with("abc")`: matches names that begin with "abc".
- `ends_with("xyz")`: matches names that end with "xyz".
- `contains("ijk")`: matches names that contain "ijk".
- `num_range("x", 1:3)`: matches `x1`, `x2` and `x3`.

See `?select` for more details.
Once you know regular expressions (the topic of Chapter \@ref(regular-expressions)) you'll also be able to use `matches()` to select variables that match a pattern.
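For example, here are a couple of the helpers in action: the first call keeps the columns whose names start with "dep", the second keeps the columns whose names end with "delay":

```{r}
flights |>
  select(starts_with("dep"))

flights |>
  select(ends_with("delay"))
```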
You can rename variables as you `select()` them by using `=`.
The new name appears on the left hand side of the `=`, and the old variable appears on the right hand side:

```{r}
flights |> select(tail_num = tailnum)
```

### `rename()`
If you want to keep all the existing variables and just rename a few, you can use `rename()` instead of `select()`:

```{r}
flights |>
  rename(tail_num = tailnum)
```

It works exactly the same way as `select()`, but keeps all the variables that aren't explicitly selected.

### `relocate()`
You can move variables around with `relocate()`.
By default it moves variables to the front:

```{r}
flights |>
  relocate(time_hour, air_time)
```

But you can use the same `.before` and `.after` arguments as `mutate()` to choose where to put them:

```{r}
flights |>
  relocate(year:dep_time, .after = time_hour)

flights |>
  relocate(starts_with("arr"), .before = dep_time)
```

### Exercises
```{r, eval = FALSE, echo = FALSE}
# For data checking, not used in results shown in book
flights <- flights %>% mutate(
  dep_time = hour * 60 + minute,
  arr_time = (arr_time %/% 100) * 60 + (arr_time %% 100),
  airtime2 = arr_time - dep_time,
  dep_sched = dep_time + dep_delay
)

ggplot(flights, aes(dep_sched)) + geom_histogram(binwidth = 60)
ggplot(flights, aes(dep_sched %% 60)) + geom_histogram(binwidth = 1)
ggplot(flights, aes(air_time - airtime2)) + geom_histogram()
```

1.  Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they're not really continuous numbers.
    Convert them to a more convenient representation of number of minutes since midnight.

2.  Compare `air_time` with `arr_time - dep_time`.
    What do you expect to see?
    What do you see?
    What do you need to do to fix it?

3.  Compare `dep_time`, `sched_dep_time`, and `dep_delay`.
    How would you expect those three numbers to be related?

4.  Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from `flights`.

5.  What happens if you include the name of a variable multiple times in a `select()` call?

6.  What does the `any_of()` function do?
    Why might it be helpful in conjunction with this vector?

    ```{r}
    variables <- c("year", "month", "day", "dep_delay", "arr_delay")
    ```

7.  Does the result of running the following code surprise you?
    How do the select helpers deal with case by default?
    How can you change that default?

    ```{r, eval = FALSE}
    select(flights, contains("TIME"))
    ```

## Groups
The real power of dplyr comes when you add grouping into the mix.
The two key functions are `group_by()` and `summarise()`, but, as you'll learn, `group_by()` affects many other dplyr verbs in interesting ways.

### `group_by()`

Use `group_by()` to divide your dataset into groups that are meaningful for your analysis:

```{r}
flights |>
  group_by(month)
```

`group_by()` doesn't change the data but, if you look closely at the output, you'll notice that it's now "grouped by" month.
The reason to group your data is that it changes the operation of subsequent verbs.

### `summarise()`
The most important operation that you might apply to grouped data is a summary.
It collapses each group to a single row[^data-transform-3].
Here we compute the average departure delay by month:

[^data-transform-3]: This is a slight simplification; later on you'll learn how to use `summarise()` to produce multiple summary rows for each group.

```{r}
flights |>
  group_by(month) |>
  summarise(
    delay = mean(dep_delay)
  )
```

Uhoh!
Something has gone wrong and all of our results are `NA`, R's symbol for a missing value.
We'll come back to discuss missing values in Chapter \@ref(missing-values), but for now we'll remove them by using `na.rm = TRUE`:

```{r}
flights |>
  group_by(month) |>
  summarise(
    delay = mean(dep_delay, na.rm = TRUE)
  )
```
You can create any number of summaries in a single call to `summarise()`.
You'll learn various useful summaries in the upcoming chapters, but one very useful summary is `n()`, which returns the number of rows in each group:

```{r}
flights |>
  group_by(month) |>
  summarise(
    delay = mean(dep_delay, na.rm = TRUE),
    n = n()
  )
```
(In fact, `count()`, which we've used a bunch in previous chapters, is just shorthand for `group_by()` + `summarise(n = n())`.)
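For example, you could get the per-month counts from the summary above a little more compactly with:

```{r}
flights |>
  count(month)
```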
Means and counts can get you a surprisingly long way in data science!

### Grouping by multiple variables

You can group a data frame by multiple variables:

```{r}
daily <- flights |>
  group_by(year, month, day)

daily
```

When you group by multiple variables, each summary peels off one level of the grouping by default, and a message is printed that tells you how you can change this behaviour:

```{r}
daily |>
  summarise(
    n = n()
  )
```
If you're happy with this behaviour, you can explicitly request it in order to suppress the message:

```{r, results = FALSE}
daily |>
  summarise(
    n = n(),
    .groups = "drop_last"
  )
```

Alternatively, you can change the default behaviour by setting a different value, e.g. `"drop"` to drop all levels of grouping or `"keep"` to keep the same grouping structure as `daily`:

```{r, results = FALSE}
daily |>
  summarise(
    n = n(),
    .groups = "drop"
  )

daily |>
  summarise(
    n = n(),
    .groups = "keep"
  )
```

### Ungrouping

You might also want to remove grouping outside of `summarise()`.
You can do this and return to operations on ungrouped data using `ungroup()`:

```{r}
daily |>
  ungroup() |>
  summarise(
    delay = mean(dep_delay, na.rm = TRUE),
    flights = n()
  )
```

For the purposes of summarising, ungrouped data is treated as if all your data was in a single group, so you get one row back.

### Other verbs
`group_by()` is usually paired with `summarise()`, but it's good to know how it affects other verbs:

- `select()`, `rename()`, `relocate()`: grouping has no effect.
- `mutate()`: computation happens per group.
  This doesn't affect the functions you currently know, but it is very useful once you learn about window functions in Section \@ref(window-functions).
- `arrange()` and `filter()` are mostly unaffected by grouping, unless you are doing computation (e.g. `filter(flights, dep_delay == min(dep_delay))`), in which case the `mutate()` caveat applies.
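Here's a minimal sketch of that last point: because the data are grouped by month, `min()` is computed separately within each month, so this keeps the rows for each month's earliest-departing (most ahead of schedule) flights:

```{r}
flights |>
  group_by(month) |>
  # min() is evaluated once per month because of the grouping
  filter(dep_delay == min(dep_delay, na.rm = TRUE))
```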
### Exercises

1.  Which carrier has the worst delays?
    Challenge: can you disentangle the effects of bad airports vs. bad carriers?
    Why/why not?
    (Hint: think about `flights |> group_by(carrier, dest) |> summarise(n())`.)

2.  What does the `sort` argument to `count()` do?
    Can you explain it in terms of the dplyr verbs you've learned so far?

## Case study: aggregates and sample size
Whenever you do any aggregation, it's always a good idea to include a count (`n()`).
That way you can check that you're not drawing conclusions based on very small amounts of data.
For example, let's look at the planes (identified by their tail number) that have the highest average delays:

```{r}
delays <- flights |>
  filter(!is.na(arr_delay)) |>
  group_by(tailnum) |>
  summarise(
    delay = mean(arr_delay),
    n = n()
  )

ggplot(data = delays, mapping = aes(x = delay)) +
  geom_freqpoly(binwidth = 10)
```
Wow, there are some planes that have an *average* delay of 5 hours (300 minutes)!

The story is actually a little more nuanced.
We can get more insight if we draw a scatterplot of the number of flights vs. the average delay:

```{r}
ggplot(data = delays, mapping = aes(x = n, y = delay)) +
  geom_point(alpha = 1/10)
```

Not surprisingly, there is much greater variation in the average delay when there are few flights.
The shape of this plot is very characteristic: whenever you plot a mean (or other summary) vs. group size, you'll see that the variation decreases as the sample size increases.

When looking at this sort of plot, it's often useful to filter out the groups with the smallest numbers of observations, so you can see more of the pattern and less of the extreme variation in the smallest groups.
This is what the following code does, as well as showing you a handy pattern for integrating ggplot2 into dplyr flows.
It's a bit painful that you have to switch from `|>` to `+`, but once you get the hang of it, it's quite convenient.

```{r}
delays |>
  filter(n > 25) |>
  ggplot(mapping = aes(x = n, y = delay)) +
  geom_point(alpha = 1/10) +
  geom_smooth(se = FALSE)
```
There's another common variation of this type of pattern.
Let's look at how the average performance of batters in baseball is related to the number of times they're at bat.
Here I use data from the **Lahman** package to compute the batting average (number of hits / number of attempts) of every major league baseball player:

```{r}
batters <- Lahman::Batting |>
  group_by(playerID) |>
  summarise(
    ba = sum(H, na.rm = TRUE) / sum(AB, na.rm = TRUE),
    ab = sum(AB, na.rm = TRUE)
  )

batters
```

When I plot the skill of the batter (measured by the batting average, `ba`) against the number of opportunities to hit the ball (measured by times at bat, `ab`), you see two patterns:

1.  As above, the variation in our aggregate decreases as we get more data points.

2.  There's a positive correlation between skill (`ba`) and opportunities to hit the ball (`ab`).
    This is because teams control who gets to play, and obviously they'll pick their best players.

```{r}
batters |>
  filter(ab > 100) |>
  ggplot(mapping = aes(x = ab, y = ba)) +
  geom_point(alpha = 1 / 10) +
  geom_smooth(se = FALSE)
```
This also has important implications for ranking.
If you naively sort on `desc(ba)`, the people with the best batting averages are clearly lucky, not skilled:

```{r}
batters |>
  arrange(desc(ba))
```

You can find a good explanation of this problem at <http://varianceexplained.org/r/empirical_bayes_baseball/> and <http://www.evanmiller.org/how-not-to-sort-by-average-rating.html>.