r4ds/data-transform.qmd


# Data transformation {#sec-data-transform}

```{r}
#| results: "asis"
#| echo: false
source("_common.R")
status("complete")
```

## Introduction

Visualization is an important tool for generating insight, but it's rare that you get the data in exactly the right form you need for it.
Often you'll need to create some new variables or summaries to see the most important patterns, or maybe you just want to rename the variables or reorder the observations to make the data a little easier to work with.
You'll learn how to do all that (and more!) in this chapter, which will introduce you to data transformation using the **dplyr** package and a new dataset on flights that departed New York City in 2013.

The goal of this chapter is to give you an overview of all the key tools for transforming a data frame.
We'll start with functions that operate on rows and then columns of a data frame, then circle back to talk more about the pipe, an important tool that you use to combine verbs.
We will then introduce the ability to work with groups.
We will end the chapter with a case study that showcases these functions in action, and we'll come back to the functions in more detail in later chapters, as we start to dig into specific types of data (e.g. numbers, strings, dates).

### Prerequisites

In this chapter we'll focus on the dplyr package, another core member of the tidyverse.
We'll illustrate the key ideas using data from the nycflights13 package, and use ggplot2 to help us understand the data.

```{r}
#| label: setup

library(nycflights13)
library(tidyverse)
```

Take careful note of the conflicts message that's printed when you load the tidyverse.
It tells you that dplyr overwrites some functions in base R.
If you want to use the base version of these functions after loading dplyr, you'll need to use their full names: `stats::filter()` and `stats::lag()`.

So far we've mostly ignored which package a function comes from because most of the time it doesn't matter.
However, knowing the package can help you find help and find related functions, so when we need to be precise about which package a function comes from, we'll use the same syntax as R: `packagename::functionname()`.
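
For example, this calls dplyr's `filter()` explicitly, regardless of which other packages are loaded:

```{r}
#| eval: false
dplyr::filter(flights, month == 1)
```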

### nycflights13

To explore the basic dplyr verbs, we're going to use `nycflights13::flights`.
This dataset contains all `r format(nrow(nycflights13::flights), big.mark = ",")` flights that departed from New York City in 2013.
The data comes from the US [Bureau of Transportation Statistics](http://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&Link=0), and is documented in `?flights`.

```{r}
flights
```

If you've used R before, you might notice that this data frame prints a little differently to other data frames you've seen.
That's because it's a **tibble**, a special type of data frame used by the tidyverse to avoid some common gotchas.
The most important difference is the way it prints: tibbles are designed for large datasets, so they only show the first few rows and only the columns that fit on one screen.

There are a few options to see everything.
If you're using RStudio, the most convenient is probably `View(flights)`, which will open an interactive scrollable and filterable view.
Otherwise you can use `print(flights, width = Inf)` to show all columns, or call `glimpse()`:

```{r}
glimpse(flights)
```

In both views, the variable names are followed by abbreviations that tell you the type of each variable: `<int>` is short for integer, `<dbl>` is short for double (aka real numbers), `<chr>` for character (aka strings), and `<dttm>` for date-time.
These are important because the operations you can perform on a column depend so much on its "type", and these types are used to organize the chapters in the next section of the book.

### dplyr basics

You're about to learn the primary dplyr verbs which will allow you to solve the vast majority of your data manipulation challenges.
But before we discuss their individual differences, it's worth stating what they have in common:

1.  The first argument is always a data frame.

2.  The subsequent arguments describe what to do with the data frame, using the variable names (without quotes).

3.  The result is always a new data frame.

Since each verb is quite simple, solving complex problems will usually require combining multiple verbs, and we'll do so with the pipe, `|>`.
We'll discuss the pipe more in @the-pipe, but in brief, the pipe takes the thing on its left and passes it along to the function on its right so that `x |> f(y)` is equivalent to `f(x, y)`, and `x |> f(y) |> g(z)` is equivalent to `g(f(x, y), z)`.
The easiest way to pronounce the pipe is "then".
That makes it possible to get a sense of the following code even though you haven't yet learned the details:

```{r}
#| eval: false
flights |>
  filter(dest == "IAH") |>
  group_by(year, month, day) |>
  summarize(
    arr_delay = mean(arr_delay, na.rm = TRUE)
  )
```

dplyr's verbs are organized into four groups based on what they operate on: **rows**, **columns**, **groups**, or **tables**.
In the following sections you'll learn the most important verbs for rows, columns, and groups, then we'll come back to the join verbs that work on tables in @sec-joins.
Let's dive in!

## Rows

The most important verbs that operate on rows are `filter()`, which changes which rows are present without changing their order, and `arrange()`, which changes the order of the rows without changing which are present.
Both functions only affect the rows, and the columns are left unchanged.
We'll also discuss `distinct()`, which finds rows with unique values, but unlike `arrange()` and `filter()` it can also optionally modify the columns.

### `filter()`

`filter()` allows you to keep rows based on the values of the columns[^data-transform-1].
The first argument is the data frame.
The second and subsequent arguments are the conditions that must be true to keep the row.
For example, we could find all flights that departed more than 120 minutes (two hours) late:

[^data-transform-1]: Later, you'll learn about the `slice_*()` family which allows you to choose rows based on their positions.

```{r}
flights |>
  filter(dep_delay > 120)
```

As well as `>` (greater than), you can use `>=` (greater than or equal to), `<` (less than), `<=` (less than or equal to), `==` (equal to), and `!=` (not equal to).
You can also use `&` (and) or `|` (or) to combine multiple conditions:

```{r}
# Flights that departed on January 1
flights |>
  filter(month == 1 & day == 1)

# Flights that departed in January or February
flights |>
  filter(month == 1 | month == 2)
```

There's a useful shortcut when you're combining `|` and `==`: `%in%`.
It keeps rows where the variable equals one of the values on the right:

```{r}
# A shorter way to select flights that departed in January or February
flights |>
  filter(month %in% c(1, 2))
```

We'll come back to these comparisons and logical operators in more detail in @sec-logicals.

When you run `filter()`, dplyr executes the filtering operation, creating a new data frame, and then prints it.
It doesn't modify the existing `flights` dataset because dplyr functions never modify their inputs.
To save the result, you need to use the assignment operator, `<-`:

```{r}
jan1 <- flights |>
  filter(month == 1 & day == 1)
```

### Common mistakes

When you're starting out with R, the easiest mistake to make is to use `=` instead of `==` when testing for equality.
`filter()` will let you know when this happens:

```{r}
#| error: true
flights |>
  filter(month = 1)
```

Another mistake is writing "or" statements like you would in English:

```{r}
#| eval: false
flights |>
  filter(month == 1 | 2)
```

This works, in the sense that it doesn't throw an error, but it doesn't do what you want.
We'll come back to what it does and why in @sec-boolean-operations.
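
As a quick preview of why, `month == 1 | 2` is parsed as `(month == 1) | 2`, and a non-zero number on the right of `|` counts as `TRUE`, so the condition holds for every row. You can check that no months are dropped:

```{r}
#| eval: false
# Every month survives the filter, because the condition is always TRUE
flights |>
  filter(month == 1 | 2) |>
  distinct(month)
```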

### `arrange()`

`arrange()` changes the order of the rows based on the value of the columns.
It takes a data frame and a set of column names (or more complicated expressions) to order by.
If you provide more than one column name, each additional column will be used to break ties in the values of preceding columns.
For example, the following code sorts by the departure time, which is spread over four columns:

```{r}
flights |>
  arrange(year, month, day, dep_time)
```

You can use `desc()` to re-order by a column in descending order.
For example, this code shows the most delayed flights:

```{r}
flights |>
  arrange(desc(dep_delay))
```
2022-11-21 22:42:55 +08:00
### `distinct()`
`distinct()` finds all the unique rows in a dataset, so in a technical sense, it primarily operates on the rows.
Most of the time, however, you'll want the distinct combination of some variables, so you can also optionally supply column names:
2022-11-21 22:42:55 +08:00
```{r}
# This would remove any duplicate rows if there were any
flights |>
distinct()
# This finds all unique origin and destination pairs
2022-11-21 22:42:55 +08:00
flights |>
distinct(origin, dest)
```
Note that if you want to find the number of duplicates, or rows that weren't duplicated, you're better off swapping `distinct()` for `count()`, which will give the number of observations per unique level, and then filtering as needed.
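
For example, a quick sketch of that approach, using `sort = TRUE` to put the most common pairs first:

```{r}
#| results: false
flights |>
  count(origin, dest, sort = TRUE)
```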

### Exercises

1.  Find all flights that

    a.  Had an arrival delay of two or more hours
    b.  Flew to Houston (`IAH` or `HOU`)
    c.  Were operated by United, American, or Delta
    d.  Departed in summer (July, August, and September)
    e.  Arrived more than two hours late, but didn't leave late
    f.  Were delayed by at least an hour, but made up over 30 minutes in flight

2.  Sort `flights` to find the flights with the longest departure delays.
    Find the flights that left earliest in the morning.

3.  Sort `flights` to find the fastest flights.
    (Hint: Try sorting by a calculation.)

4.  Was there a flight on every day of 2013?

5.  Which flights traveled the farthest distance?
    Which traveled the least distance?

6.  Does it matter what order you used `filter()` and `arrange()` if you're using both?
    Why/why not?
    Think about the results and how much work the functions would have to do.

## Columns

There are four important verbs that affect the columns without changing the rows: `mutate()`, `select()`, `rename()`, and `relocate()`.
`mutate()` creates new columns that are derived from the existing columns; `select()`, `rename()`, and `relocate()` change which columns are present, their names, or their positions.
We'll also discuss `pull()`, since it allows you to get a column out of a data frame.

### `mutate()` {#sec-mutate}

The job of `mutate()` is to add new columns that are calculated from the existing columns.
In the transform chapters, you'll learn a large set of functions that you can use to manipulate different types of variables.
For now, we'll stick with basic algebra, which allows us to compute the `gain`, how much time a delayed flight made up in the air, and the `speed` in miles per hour:

```{r}
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    speed = distance / air_time * 60
  )
```

By default, `mutate()` adds new columns on the right hand side of your dataset, which makes it difficult to see what's happening here.
We can use the `.before` argument to instead add the variables to the left hand side[^data-transform-2]:

[^data-transform-2]: Remember that in RStudio, the easiest way to see a dataset with many columns is `View()`.

```{r}
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    speed = distance / air_time * 60,
    .before = 1
  )
```

The `.` is a sign that `.before` is an argument to the function, not the name of a new variable.
You can also use `.after` to add after a variable, and in both `.before` and `.after` you can use the variable name instead of a position.
For example, we could add the new variables after `day`:

```{r}
#| results: false
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    speed = distance / air_time * 60,
    .after = day
  )
```

Alternatively, you can control which variables are kept with the `.keep` argument.
A particularly useful value is `"used"`, which allows you to see the inputs and outputs of your calculations.
For example, the following output will contain only the variables `dep_delay`, `arr_delay`, `air_time`, `gain`, `hours`, and `gain_per_hour`:

```{r}
#| results: false
flights |>
  mutate(
    gain = dep_delay - arr_delay,
    hours = air_time / 60,
    gain_per_hour = gain / hours,
    .keep = "used"
  )
```

### `select()` {#sec-select}

It's not uncommon to get datasets with hundreds or even thousands of variables.
In this situation, the first challenge is often just focusing on the variables you're interested in.
`select()` allows you to rapidly zoom in on a useful subset using operations based on the names of the variables.
`select()` is not terribly useful with the `flights` data because we only have 19 variables, but you can still get the general idea of how it works:

-   Select columns by name:

    ```{r}
    #| results: false
    flights |>
      select(year, month, day)
    ```

-   Select all columns between year and day (inclusive):

    ```{r}
    #| results: false
    flights |>
      select(year:day)
    ```

-   Select all columns except those from year to day (inclusive):

    ```{r}
    #| results: false
    flights |>
      select(!year:day)
    ```

-   Select all columns that are characters:

    ```{r}
    #| results: false
    flights |>
      select(where(is.character))
    ```

There are a number of helper functions you can use within `select()`:

- `starts_with("abc")`: matches names that begin with "abc".
- `ends_with("xyz")`: matches names that end with "xyz".
- `contains("ijk")`: matches names that contain "ijk".
- `num_range("x", 1:3)`: matches `x1`, `x2` and `x3`.

See `?select` for more details.
Once you know regular expressions (the topic of @sec-regular-expressions) you'll also be able to use `matches()` to select variables that match a pattern.
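
As a small preview (assuming just a little regular expression syntax), this selects every column whose name starts with `dep_` or `arr_`:

```{r}
#| results: false
flights |>
  select(matches("^(dep|arr)_"))
```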

You can rename variables as you `select()` them by using `=`.
The new name appears on the left hand side of the `=`, and the old variable appears on the right hand side:

```{r}
flights |>
  select(tail_num = tailnum)
```

### `rename()`

If you want to keep all the existing variables and just rename a few, you can use `rename()` instead of `select()`:

```{r}
flights |>
  rename(tail_num = tailnum)
```

It works exactly the same way as `select()`, but keeps all the variables that aren't explicitly selected.

If you have a bunch of inconsistently named columns and it would be painful to fix them all by hand, check out `janitor::clean_names()` which provides some useful automated cleaning.

### `relocate()`

Use `relocate()` to move variables around.
You might want to collect related variables together or move important variables to the front.
By default `relocate()` moves variables to the front:

```{r}
flights |>
  relocate(time_hour, air_time)
```

But you can use the same `.before` and `.after` arguments as `mutate()` to choose where to put them:

```{r}
#| results: false
flights |>
  relocate(year:dep_time, .after = time_hour)
flights |>
  relocate(starts_with("arr"), .before = dep_time)
```

### Exercises

```{r}
#| eval: false
#| echo: false

# For data checking, not used in results shown in book
flights <- flights |> mutate(
  dep_time = hour * 60 + minute,
  arr_time = (arr_time %/% 100) * 60 + (arr_time %% 100),
  airtime2 = arr_time - dep_time,
  dep_sched = dep_time + dep_delay
)

ggplot(flights, aes(x = dep_sched)) + geom_histogram(binwidth = 60)
ggplot(flights, aes(x = dep_sched %% 60)) + geom_histogram(binwidth = 1)
ggplot(flights, aes(x = air_time - airtime2)) + geom_histogram()
```

1.  Compare `dep_time`, `sched_dep_time`, and `dep_delay`.
    How would you expect those three numbers to be related?

2.  Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from `flights`.

3.  What happens if you include the name of a variable multiple times in a `select()` call?

4.  What does the `any_of()` function do?
    Why might it be helpful in conjunction with this vector?

    ```{r}
    variables <- c("year", "month", "day", "dep_delay", "arr_delay")
    ```

5.  Does the result of running the following code surprise you?
    How do the select helpers deal with case by default?
    How can you change that default?

    ```{r}
    #| eval: false
    select(flights, contains("TIME"))
    ```

6.  Rename `air_time` to `air_time_min` to indicate units of measurement and move it to the beginning of the data frame.

## The pipe {#the-pipe}

We've shown you simple examples of the pipe above, but its real power arises when you start to combine multiple verbs.
For example, imagine that you wanted to find the fastest flights to Houston's IAH airport: you need to combine `filter()`, `mutate()`, `select()`, and `arrange()`:

```{r}
flights |>
  filter(dest == "IAH") |>
  mutate(speed = distance / air_time * 60) |>
  select(year:day, dep_time, carrier, flight, speed) |>
  arrange(desc(speed))
```

Even though this pipe has four steps, it's easy to skim because the verbs come at the start of each line: start with the `flights` data, then filter, then mutate, then select, then arrange.

What would happen if we didn't have the pipe?
We could nest each function call inside the previous call:

```{r}
#| results: false
arrange(
  select(
    mutate(
      filter(
        flights,
        dest == "IAH"
      ),
      speed = distance / air_time * 60
    ),
    year:day, dep_time, carrier, flight, speed
  ),
  desc(speed)
)
```

Or we could use a bunch of intermediate variables:

```{r}
#| results: false
flights1 <- filter(flights, dest == "IAH")
flights2 <- mutate(flights1, speed = distance / air_time * 60)
flights3 <- select(flights2, year:day, dep_time, carrier, flight, speed)
arrange(flights3, desc(speed))
```

While both forms have their time and place, the pipe generally produces data analysis code that is easier to write and read.

To add the pipe to your code, we recommend using the built-in keyboard shortcut Ctrl/Cmd + Shift + M.
You'll need to make one change to your RStudio options to use `|>` instead of `%>%`, as shown in @fig-pipe-options; more on `%>%` shortly.

```{r}
#| label: fig-pipe-options
#| echo: false
#| fig-cap: >
#|   To insert `|>`, make sure the "Use native pipe operator" option is checked.
#| fig-alt: >
#|   Screenshot showing the "Use native pipe operator" option which can
#|   be found on the "Editing" panel of the "Code" options.

knitr::include_graphics("screenshots/rstudio-pipe-options.png")
```

::: callout-note
## magrittr

If you've been using the tidyverse for a while, you might be familiar with the `%>%` pipe provided by the **magrittr** package.
The magrittr package is included in the core tidyverse, so you can use `%>%` whenever you load the tidyverse:

```{r}
#| eval: false
library(tidyverse)

mtcars %>%
  group_by(cyl) %>%
  summarize(n = n())
```

For simple cases, `|>` and `%>%` behave identically.
So why do we recommend the base pipe?
Firstly, because it's part of base R, it's always available for you to use, even when you're not using the tidyverse.
Secondly, `|>` is quite a bit simpler than `%>%`: in the time between the invention of `%>%` in 2014 and the inclusion of `|>` in R 4.1.0 in 2021, we gained a better understanding of the pipe.
This allowed the base implementation to jettison infrequently used and less important features.
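
One concrete difference worth knowing about is the placeholder for the left-hand side: magrittr uses `.`, while the base pipe (since R 4.2.0) uses `_`, and `_` must be supplied to a named argument. A quick sketch:

```{r}
#| eval: false
# magrittr: `.` refers to the left-hand side
mtcars %>% lm(mpg ~ disp, data = .)

# base pipe: `_` plays the same role, but must go to a named argument
mtcars |> lm(mpg ~ disp, data = _)
```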
:::

## Groups

So far you've learned about functions that work with rows and columns.
dplyr gets even more powerful when you add in the ability to work with groups.
In this section, we'll focus on the most important functions: `group_by()`, `summarize()`, and the slice family of functions.

### `group_by()`

Use `group_by()` to divide your dataset into groups meaningful for your analysis:

```{r}
flights |>
  group_by(month)
```

`group_by()` doesn't change the data but, if you look closely at the output, you'll notice that it's now "grouped by" month.
This means subsequent operations will now work "by month".
`group_by()` doesn't do anything by itself; instead it changes the behavior of the subsequent verbs.

### `summarize()` {#sec-summarize}

The most important grouped operation is a summary, which collapses each group to a single row.
In dplyr, this operation is performed by `summarize()`[^data-transform-3], as shown by the following example, which computes the average departure delay by month:

[^data-transform-3]: Or `summarise()`, if you prefer British English.

```{r}
flights |>
  group_by(month) |>
  summarize(
    delay = mean(dep_delay)
  )
```

Uhoh!
Something has gone wrong and all of our results are `NA` (pronounced "N-A"), R's symbol for a missing value.
We'll come back to discuss missing values in detail in @sec-missing-values, but for now we'll remove them by using `na.rm = TRUE`:

```{r}
flights |>
  group_by(month) |>
  summarize(
    delay = mean(dep_delay, na.rm = TRUE)
  )
```

You can create any number of summaries in a single call to `summarize()`.
You'll learn various useful summaries in the upcoming chapters, but one very useful summary is `n()`, which returns the number of rows in each group:

```{r}
flights |>
  group_by(month) |>
  summarize(
    delay = mean(dep_delay, na.rm = TRUE),
    n = n()
  )
```

Means and counts can get you a surprisingly long way in data science!

### The `slice_` functions

There are five handy functions that allow you to pick off specific rows within each group:

- `df |> slice_head(n = 1)` takes the first row from each group.
- `df |> slice_tail(n = 1)` takes the last row in each group.
- `df |> slice_min(x, n = 1)` takes the row with the smallest value of `x`.
- `df |> slice_max(x, n = 1)` takes the row with the largest value of `x`.
- `df |> slice_sample(n = 1)` takes one random row.

You can vary `n` to select more than one row, or instead of `n =`, you can use `prop = 0.1` to select (e.g.) 10% of the rows in each group.
For example, the following code finds the most delayed flight to each destination:

```{r}
flights |>
  group_by(dest) |>
  slice_max(arr_delay, n = 1)
```

This is similar to computing the max delay with `summarize()`, but you get the whole row instead of the single summary.
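
For comparison, a sketch of the `summarize()` version, which returns only each destination and its maximum delay:

```{r}
#| results: false
flights |>
  group_by(dest) |>
  summarize(max_delay = max(arr_delay, na.rm = TRUE))
```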

### Grouping by multiple variables

You can create groups using more than one variable.
For example, we could make a group for each day:

```{r}
daily <- flights |>
  group_by(year, month, day)
daily
```

When you summarize a tibble grouped by more than one variable, each summary peels off the last group.
In hindsight, this wasn't a great way to make this function work, but it's difficult to change without breaking existing code.
To make it obvious what's happening, dplyr displays a message that tells you how you can change this behavior:

```{r}
daily_flights <- daily |>
  summarize(
    n = n()
  )
```

If you're happy with this behavior, you can explicitly request it in order to suppress the message:

```{r}
#| results: false
daily_flights <- daily |>
  summarize(
    n = n(),
    .groups = "drop_last"
  )
```

Alternatively, change the default behavior by setting a different value, e.g. `"drop"` to drop all grouping or `"keep"` to preserve the same groups.

### Ungrouping

You might also want to remove grouping from a data frame without using `summarize()`.
You can do this with `ungroup()`:

```{r}
daily |>
  ungroup() |>
  summarize(
    delay = mean(dep_delay, na.rm = TRUE),
    flights = n()
  )
```

As you can see, when you summarize an ungrouped data frame, you get a single row back because dplyr treats all the rows in an ungrouped data frame as belonging to one group.

### Exercises

1.  Which carrier has the worst delays?
    Challenge: can you disentangle the effects of bad airports vs. bad carriers?
    Why/why not?
    (Hint: think about `flights |> group_by(carrier, dest) |> summarize(n())`.)

2.  Find the most delayed flight to each destination.

3.  How do delays vary over the course of the day?
    Illustrate your answer with a plot.

4.  What happens if you supply a negative `n` to `slice_min()` and friends?

5.  Explain what `count()` does in terms of the dplyr verbs you just learned.
    What does the `sort` argument to `count()` do?

6.  Suppose we have the following tiny data frame:

    ```{r}
    df <- tibble(
      x = 1:5,
      y = c("a", "b", "a", "a", "b"),
      z = c("K", "K", "L", "L", "K")
    )
    ```

    a.  What does the following code do?
        Run it, analyze the result, and describe what `group_by()` does.

        ```{r}
        #| eval: false
        df |>
          group_by(y)
        ```

    b.  What does the following code do?
        Run it, analyze the result, and describe what `arrange()` does.
        Also comment on how it's different from the `group_by()` in part (a).

        ```{r}
        #| eval: false
        df |>
          arrange(y)
        ```

    c.  What does the following code do?
        Run it, analyze the result, and describe what the pipeline does.

        ```{r}
        #| eval: false
        df |>
          group_by(y) |>
          summarize(mean_x = mean(x))
        ```

    d.  What does the following code do?
        Run it, analyze the result, and describe what the pipeline does.
        Then, comment on what the message says.

        ```{r}
        #| eval: false
        df |>
          group_by(y, z) |>
          summarize(mean_x = mean(x))
        ```

    e.  What does the following code do?
        Run it, analyze the result, and describe what the pipeline does.
        How is the output different from the one in part (d)?

        ```{r}
        #| eval: false
        df |>
          group_by(y, z) |>
          summarize(mean_x = mean(x), .groups = "drop")
        ```

    f.  What do the following pipelines do?
        Run both, analyze the results, and describe what each pipeline does.
        How are the outputs of the two pipelines different?

        ```{r}
        #| eval: false
        df |>
          group_by(y, z) |>
          summarize(mean_x = mean(x))

        df |>
          group_by(y, z) |>
          mutate(mean_x = mean(x))
        ```

## Case study: aggregates and sample size {#sec-sample-size}

Whenever you do any aggregation, it's always a good idea to include a count (`n()`).
That way, you can ensure that you're not drawing conclusions based on very small amounts of data.
For example, let's look at the planes (identified by their tail number) that have the highest average delays:

```{r}
#| fig-alt: >
#|   A frequency histogram showing the distribution of flight delays.
#|   The distribution is unimodal, with a large spike around 0, and
#|   asymmetric: very few flights leave more than 30 minutes early,
#|   but flights are delayed up to 5 hours.

delays <- flights |>
  filter(!is.na(arr_delay), !is.na(tailnum)) |>
  group_by(tailnum) |>
  summarize(
    delay = mean(arr_delay, na.rm = TRUE),
    n = n()
  )

ggplot(delays, aes(x = delay)) +
  geom_freqpoly(binwidth = 10)
```
Wow, there are some planes that have an *average* delay of 5 hours (300 minutes)!
That seems pretty surprising, so let's draw a scatterplot of number of flights vs. average delay:
```{r}
#| fig-alt: >
#|   A scatterplot showing number of flights versus average arrival delay.
#|   Delays for planes with a very small number of flights have very high
#|   variability (from -50 to ~300), but the variability rapidly decreases
#|   as the number of flights increases.

ggplot(delays, aes(x = delay, y = n)) +
  geom_point(alpha = 1/10)
```
Not surprisingly, there is much greater variation in the average delay when there are few flights for a given plane.
The shape of this plot is very characteristic: whenever you plot a mean (or other summary statistic) vs. group size, you'll see that the variation decreases as the sample size increases[^data-transform-4].
[^data-transform-4]: \*cough\* the law of large numbers \*cough\*.
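This pattern isn't specific to flights. A quick simulation (purely illustrative, using made-up normal draws rather than any real data) shows the same thing: means computed from the same distribution spread out much more when the groups are small.

```{r}
# Illustrative simulation: 100 group means each for groups of size 5, 50,
# and 500, all drawn from the same normal distribution. The spread of the
# means shrinks as the group size grows.
set.seed(123)

sim <- tibble(size = rep(c(5, 50, 500), each = 100)) |>
  mutate(mean_x = map_dbl(size, \(size) mean(rnorm(size, mean = 10, sd = 20))))

sim |>
  group_by(size) |>
  summarize(sd_of_means = sd(mean_x))
```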
When looking at this sort of plot, it's often useful to filter out the groups with the smallest numbers of observations, so you can see more of the pattern and less of the extreme variation in the smallest groups:
```{r}
#| warning: false
#| fig-alt: >
#|   Scatterplot of number of flights of a given plane vs. the average delay
#|   for those flights, for planes with more than 25 flights. As average delay
#|   increases from -20 to 10, the number of flights also increases. For
#|   larger average delays, the number of flights decreases.

delays |>
  filter(n > 25) |>
  ggplot(aes(x = delay, y = n)) +
  geom_point(alpha = 1/10) +
  geom_smooth(se = FALSE)
```
Note the handy pattern for combining ggplot2 and dplyr.
It's a bit annoying that you have to switch from `|>` to `+`, but it's not too much of a hassle once you get the hang of it.
There's another common variation on this pattern that we can see in some data about baseball players.
The following code uses data from the **Lahman** package to compare what proportion of times a player gets a hit vs. the number of times they try to put the ball in play:
```{r}
batters <- Lahman::Batting |>
  group_by(playerID) |>
  summarize(
    perf = sum(H, na.rm = TRUE) / sum(AB, na.rm = TRUE),
    n = sum(AB, na.rm = TRUE)
  )
batters
```
When we plot the skill of the batter (measured by the batting average, `perf`) against the number of opportunities to hit the ball (measured by times at bat, `n`), you see two patterns:
1. As above, the variation in our aggregate decreases as we get more data points.
2. There's a positive correlation between skill (`perf`) and opportunities to hit the ball (`n`) because obviously teams want to give their best batters the most opportunities to hit the ball.
```{r}
#| warning: false
#| fig-alt: >
#|   A scatterplot of number of batting opportunities vs. batting performance
#|   overlaid with a smoothed line. Average performance increases sharply
#|   from 0.2 when n is 1 to 0.25 when n is ~1000. Average performance
#|   continues to increase linearly at a much shallower slope reaching
#|   ~0.3 when n is ~15,000.

batters |>
  filter(n > 100) |>
  ggplot(aes(x = n, y = perf)) +
  geom_point(alpha = 1 / 10) +
  geom_smooth(se = FALSE)
```
This also has important implications for ranking.
If you naively sort on `desc(perf)`, the people with the best batting averages are clearly lucky, not skilled:
```{r}
batters |>
  arrange(desc(perf))
```
You can find a good explanation of this problem and how to overcome it at <http://varianceexplained.org/r/empirical_bayes_baseball/> and <https://www.evanmiller.org/how-not-to-sort-by-average-rating.html>.
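If you do want to rank anyway, one common fix is to pull each player's average toward the overall average, more strongly for players with few at-bats. The following is a deliberately simplified sketch of the shrinkage idea those posts develop, not their exact method; the `prior_n` weight of 300 is an arbitrary choice for illustration.

```{r}
# Crude shrinkage estimate: treat every player as if they started with
# `prior_n` extra at-bats at the league-average rate, so players with few
# real at-bats are pulled strongly toward the average.
prior_n <- 300
league_avg <- sum(Lahman::Batting$H, na.rm = TRUE) /
  sum(Lahman::Batting$AB, na.rm = TRUE)

batters |>
  mutate(perf_shrunk = (perf * n + league_avg * prior_n) / (n + prior_n)) |>
  arrange(desc(perf_shrunk))
```

With this adjustment, a player who bats 1.000 in a handful of at-bats no longer tops the ranking; a player needs both a high average and many at-bats to rank highly.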
## Summary
In this chapter, you've learned the tools that dplyr provides for working with data frames.
The tools are roughly grouped into three categories: those that manipulate the rows (like `filter()` and `arrange()`), those that manipulate the columns (like `select()` and `mutate()`), and those that manipulate groups (like `group_by()` and `summarize()`).
In this chapter, we've focused on these "whole data frame" tools, but you haven't yet learned much about what you can do with individual variables.
We'll come back to that in the Transform part of the book, where each chapter will give you tools for a specific type of variable.
In the next chapter, we'll pivot back to workflow to discuss the importance of code style, keeping your code well organized in order to make it easy for you and others to read and understand your code.