# Data transformation {#transform}

## Introduction

Visualisation is an important tool for insight generation, but it is rare that you get the data in exactly the right form you need. Often you'll need to create some new variables or summaries, or maybe you just want to rename the variables or reorder the observations to make the data a little easier to work with. You'll learn how to do all that (and more!) in this chapter, which will teach you how to transform your data using the dplyr package and a new dataset on flights departing New York City in 2013.

### Prerequisites

In this chapter we're going to focus on how to use the dplyr package. We'll illustrate the key ideas using data from the nycflights13 package, and use ggplot2 to help us understand the data.

```{r setup}
library(dplyr)
library(nycflights13)
library(ggplot2)
```

Take careful note of the message that's printed when you load dplyr: it tells you that dplyr overwrites some functions in base R. If you want to use the base version of these functions after loading dplyr, you'll need to use their full names: `stats::filter()`, `base::intersect()`, etc.
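
If you're ever unsure which version you're calling, the full name makes it unambiguous. Here's a minimal sketch (using the built-in `presidents` time series purely for illustration):

```{r, eval = FALSE}
# dplyr's filter() masks stats::filter(), but the full name still works.
# stats::filter() applies a linear filter to a time series; this computes
# a 3-term moving average.
stats::filter(presidents, rep(1/3, 3))
```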

### nycflights13

To explore the basic data manipulation verbs of dplyr, we'll use `nycflights13::flights`. This data frame contains all `r format(nrow(nycflights13::flights), big.mark = ",")` flights that departed from New York City in 2013. The data comes from the US [Bureau of Transportation Statistics](http://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&Link=0), and is documented in `?flights`.

```{r}
flights
```

You might notice that this data frame prints a little differently from other data frames you might have used in the past: it only shows the first few rows and all the columns that fit on one screen. (To see the whole dataset, you can run `View(flights)`, which will open the dataset in the RStudio viewer.) It prints differently because it's a __tibble__. Tibbles are data frames, but slightly tweaked to work better in the tidyverse. For now, you don't need to worry about the differences; we'll come back to tibbles in more detail in [wrangle](#wrangle-intro).

You might also have noticed the row of three-letter abbreviations under the column names. These describe the type of each variable:

* `lgl` stands for logical, vectors that contain only `TRUE` or `FALSE`.
* `int` stands for integers.
* `dbl` stands for doubles, or real numbers.
* `chr` stands for character vectors, or strings.

### Dplyr basics

In this chapter you are going to learn the five key dplyr functions that allow you to solve the vast majority of your data manipulation challenges:

* Pick observations by their values (`filter()`).
* Reorder the rows (`arrange()`).
* Pick variables by their names (`select()`).
* Create new variables with functions of existing variables (`mutate()`).
* Collapse many values down to a single summary (`summarise()`).

These can all be used in conjunction with `group_by()`, which changes the scope of each function from operating on the entire dataset to operating on it group-by-group. These six functions provide the verbs for a language of data manipulation.

All verbs work similarly:

1.  The first argument is a data frame.

1.  The subsequent arguments describe what to do with the data frame.
    You can refer to columns in the data frame directly without using `$`.

1.  The result is a new data frame.

Together these properties make it easy to chain together multiple simple steps to achieve a complex result. Let's dive in and see how these verbs work.

## Filter rows with `filter()`

`filter()` allows you to subset observations based on their values. The first argument is the name of the data frame. The second and subsequent arguments are the expressions that filter the data frame. For example, we can select all flights on January 1st with:

```{r}
filter(flights, month == 1, day == 1)
```

When you run that line of code, dplyr executes the filtering operation and returns a new data frame. dplyr functions never modify their inputs, so if you want to save the result, you'll need to use the assignment operator, `<-`:

```{r}
jan1 <- filter(flights, month == 1, day == 1)
```

R either prints out the results, or saves them to a variable. If you want to do both, you can wrap the assignment in parentheses:

```{r}
(dec25 <- filter(flights, month == 12, day == 25))
```

### Comparisons

To use filtering effectively, you have to know how to select the observations that you want using the comparison operators. R provides the standard suite: `>`, `>=`, `<`, `<=`, `!=` (not equal), and `==` (equal).

When you're starting out with R, the easiest mistake to make is to use `=` instead of `==` when testing for equality. When this happens you'll get an informative error:

```{r, error = TRUE}
filter(flights, month = 1)
```

There's another common problem you might encounter when using `==`: floating point numbers. These results might surprise you!

```{r}
sqrt(2) ^ 2 == 2
1/49 * 49 == 1
```

Computers use finite precision arithmetic (they obviously can't store an infinite number of digits!) so remember that every number you see is an approximation. Instead of relying on `==`, use `dplyr::near()`:

```{r}
near(sqrt(2) ^ 2, 2)
near(1 / 49 * 49, 1)
```

(Remember that we use `::` to be explicit about where a function lives. If dplyr is installed, `dplyr::near()` will always work. If you want to use the shorter `near()`, you need to make sure you have loaded dplyr with `library(dplyr)`.)

### Logical operators

Multiple arguments to `filter()` are combined with "and": every expression must be true in order for a row to be included in the output. For other types of combinations, you'll need to use Boolean operators yourself: `&` is "and", `|` is "or", and `!` is "not". Figure \@ref(fig:bool-ops) shows the complete set of Boolean operations.

```{r bool-ops, echo = FALSE, fig.cap = "Complete set of Boolean operations. `x` is the left-hand circle, `y` is the right-hand circle, and the shaded regions show which parts each operator selects."}
knitr::include_graphics("diagrams/transform-logical.png")
```

The following code finds all flights that departed in November or December:

```{r, eval = FALSE}
filter(flights, month == 11 | month == 12)
```

The order of operations doesn't work like English. You can't write `filter(flights, month == 11 | 12)`, which you might literally translate into "finds all flights that departed in November or December". Because `==` binds more tightly than `|`, R reads this as `(month == 11) | 12`; a non-zero number like `12` counts as `TRUE` in a logical context, so the condition is true for every row, and you get all flights back rather than just those in November or December. This is quite confusing!
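
An easy way to sidestep this problem is base R's `%in%` operator: `x %in% y` selects every row where `x` is one of the values in `y`. A sketch of how we could use it to rewrite the code above:

```{r, eval = FALSE}
# %in% tests membership, so this keeps flights from November or December
nov_dec <- filter(flights, month %in% c(11, 12))
```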

Sometimes you can simplify complicated subsetting by remembering De Morgan's law: `!(x & y)` is the same as `!x | !y`, and `!(x | y)` is the same as `!x & !y`. For example, if you wanted to find flights that weren't delayed (on arrival or departure) by more than two hours, you could use either of the following two filters:

```{r, eval = FALSE}
filter(flights, !(arr_delay > 120 | dep_delay > 120))
filter(flights, arr_delay <= 120, dep_delay <= 120)
```

As well as `&` and `|`, R also has `&&` and `||`. Don't use them here! You'll learn when you should use them in [conditional execution].
Sometimes you want to find all rows after the first `TRUE`, or all rows until the first `FALSE`. The window functions `cumany()` and `cumall()` allow you to find these values:

```{r}
df <- tibble(
  x = c(FALSE, TRUE, FALSE),
  y = c(TRUE, FALSE, TRUE)
)
filter(df, cumany(x)) # all rows after first TRUE
filter(df, cumall(y)) # all rows until first FALSE
```

(`tibble()` creates a dataset "by hand". You'll learn more about it in [tibbles].)

Whenever you start using complicated, multipart expressions in `filter()`, consider making them explicit variables instead. That makes it much easier to check your work. You'll learn how to create new variables shortly.
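
Here's a minimal sketch of the idea, using `mutate()` (covered later in this chapter) and the De Morgan example above: give each piece of the condition a name, then filter on the named pieces so each one can be checked on its own. (`delays2` and the column names are just illustrative.)

```{r, eval = FALSE}
# Name the intermediate logical steps so you can inspect them individually
delays2 <- mutate(flights,
  big_arr_delay = arr_delay > 120,
  big_dep_delay = dep_delay > 120
)
filter(delays2, !(big_arr_delay | big_dep_delay))
```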

### Missing values

One important feature of R that can make comparison tricky is missing values, or `NA`s ("not available"). `NA` represents an unknown value, so missing values are "contagious": almost any operation involving an unknown value will also be unknown.

```{r}
NA > 5
10 == NA
NA + 10
NA / 2
```

The most confusing result is this one:

```{r}
NA == NA
```

It's easiest to understand why this is true with a bit more context:

```{r}
# Let x be Mary's age. We don't know how old she is.
x <- NA

# Let y be John's age. We don't know how old he is.
y <- NA

# Are John and Mary the same age?
x == y
# We don't know!
```

If you want to determine if a value is missing, use `is.na()`:

```{r}
is.na(x)
```

`filter()` only includes rows where the condition is `TRUE`; it excludes both `FALSE` and `NA` values. If you want to preserve missing values, ask for them explicitly:

```{r}
df <- tibble(x = c(1, NA, 3))
filter(df, x > 1)
filter(df, is.na(x) | x > 1)
```

### Exercises

1.  Find all flights that

    1. Were delayed by more than two hours
    1. Flew to Houston (`IAH` or `HOU`)
    1. Were operated by United, American, or Delta
    1. Departed in summer (July, August, and September)
    1. Arrived more than two hours late, but didn't leave late
    1. Were delayed by at least an hour, but made up over 30 minutes in flight
    1. Departed between midnight and 6am (inclusive)

1.  Another useful dplyr filtering helper is `between()`. What does it do?
    Can you use it to simplify the code needed to answer the previous
    challenges?

1.  How many flights have a missing `dep_time`? What other variables are
    missing? What might these rows represent?

1.  Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing?
    Why is `FALSE & NA` not missing? Can you figure out the general
    rule? (`NA * 0` is a tricky counterexample!)

## Arrange rows with `arrange()`

`arrange()` works similarly to `filter()` except that instead of selecting rows, it changes their order. It takes a data frame and a set of column names (or more complicated expressions) to order by. If you provide more than one column name, each additional column will be used to break ties in the values of preceding columns:

```{r}
arrange(flights, year, month, day)
```

Use `desc()` to re-order by a column in descending order:

```{r}
arrange(flights, desc(arr_delay))
```

Missing values are always sorted at the end:

```{r}
df <- tibble(x = c(5, 2, NA))
arrange(df, x)
arrange(df, desc(x))
```

### Exercises

1.  How could you use `arrange()` to sort all missing values to the start?
    (Hint: use `is.na()`).

1.  Sort `flights` to find the most delayed flights. Find the flights that
    left earliest.

1.  Sort `flights` to find the fastest flights.

1.  Which flights travelled the longest? Which travelled the shortest?

## Select columns with `select()`

It's not uncommon to get datasets with hundreds or even thousands of variables. In this case, the first challenge is often narrowing in on the variables you're actually interested in. `select()` allows you to rapidly zoom in on a useful subset using operations based on the names of the variables.

`select()` is not terribly useful with the flights data because we only have 19 variables, but you can still get the general idea:

```{r}
# Select columns by name
select(flights, year, month, day)

# Select all columns between year and day (inclusive)
select(flights, year:day)

# Select all columns except those from year to day (inclusive)
select(flights, -(year:day))
```

There are a number of helper functions you can use within `select()`:

* `starts_with("abc")`: matches names that begin with "abc".

* `ends_with("xyz")`: matches names that end with "xyz".

* `contains("ijk")`: matches names that contain "ijk".

* `matches("(.)\\1")`: selects variables that match a regular expression.
  This one matches any variables that contain repeated characters. You'll
  learn more about regular expressions in [strings].

* `num_range("x", 1:3)`: matches `x1`, `x2` and `x3`.

See `?select` for more details.
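
For example, here's a quick sketch of a couple of these helpers in action on `flights`:

```{r, eval = FALSE}
# All the columns whose names begin with "dep": dep_time, dep_delay
select(flights, starts_with("dep"))

# Every column whose name mentions a delay: dep_delay, arr_delay
select(flights, contains("delay"))
```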

It's possible to use `select()` to rename variables:

```{r}
select(flights, tail_num = tailnum)
```

But because `select()` drops all the variables not explicitly mentioned, it's not that useful. Instead, use `rename()`, which is a variant of `select()` that keeps all the variables that aren't explicitly mentioned:

```{r}
rename(flights, tail_num = tailnum)
```

Another option is to use `select()` in conjunction with the `everything()` helper. This is useful if you have a handful of variables you'd like to move to the start of the data frame.

```{r}
select(flights, time_hour, air_time, everything())
```

### Exercises

1.  Brainstorm as many ways as possible to select `dep_time`, `dep_delay`,
    `arr_time`, and `arr_delay` from `flights`.

1.  What happens if you include the name of a variable multiple times in
    a `select()` call?

1.  What does the `one_of()` function do? Why might it be helpful in conjunction
    with this vector?

    ```{r}
    vars <- c("year", "month", "day", "dep_delay", "arr_delay")
    ```

1.  Does the result of running the following code surprise you? How do the
    select helpers deal with case by default? How can you change that default?

    ```{r, eval = FALSE}
    select(flights, contains("TIME"))
    ```

## Add new variables with `mutate()`

Besides selecting sets of existing columns, it's often useful to add new columns that are functions of existing columns. That's the job of `mutate()`.

`mutate()` always adds new columns at the end of your dataset so we'll start by creating a narrower dataset so we can see the new variables. Remember that when you're in RStudio, the easiest way to see all the columns is `View()`.

```{r}
flights_sml <- select(flights,
  year:day,
  ends_with("delay"),
  distance,
  air_time
)
mutate(flights_sml,
  gain = arr_delay - dep_delay,
  speed = distance / air_time * 60
)
```

Note that you can refer to columns that you've just created:

```{r}
mutate(flights_sml,
  gain = arr_delay - dep_delay,
  hours = air_time / 60,
  gain_per_hour = gain / hours
)
```

If you only want to keep the new variables, use `transmute()`:

```{r}
transmute(flights,
  gain = arr_delay - dep_delay,
  hours = air_time / 60,
  gain_per_hour = gain / hours
)
```

### Useful functions

There are many functions for creating new variables that you can use with `mutate()`. The key property is that the function must be vectorised: it must take a vector of values as input, and return a vector with the same number of values as output. There's no way to list every possible function that you might use, but here's a selection of functions that are frequently useful:

*   Arithmetic operators: `+`, `-`, `*`, `/`, `^`. These are all vectorised,
    using the so-called "recycling rules". If one parameter is shorter than
    the other, it will be automatically extended to be the same length. This
    is most useful when one of the arguments is a single number: `air_time / 60`,
    `hours * 60 + minute`, etc.

    Arithmetic operators are also useful in conjunction with the aggregate
    functions you'll learn about later. For example, `x / sum(x)` calculates
    the proportion of a total, and `y - mean(y)` computes the difference from
    the mean.

*   Modular arithmetic: `%/%` (integer division) and `%%` (remainder), where
    `x == y * (x %/% y) + (x %% y)`. Modular arithmetic is a handy tool because
    it allows you to break integers up into pieces. For example, in the
    flights dataset, you can compute `hour` and `minute` from `dep_time` with:

    ```{r}
    transmute(flights,
      dep_time,
      hour = dep_time %/% 100,
      minute = dep_time %% 100
    )
    ```

*   Logs: `log()`, `log2()`, `log10()`. Logarithms are an incredibly useful
    transformation for dealing with data that ranges across multiple orders of
    magnitude. They also convert multiplicative relationships to additive, a
    feature we'll come back to in modelling.

    All else being equal, I recommend using `log2()` because it's easy to
    interpret: a difference of 1 on the log scale corresponds to doubling on
    the original scale and a difference of -1 corresponds to halving. (There's
    a short example of this just after this list.)

*   Offsets: `lead()` and `lag()` allow you to refer to leading or lagging
    values. This allows you to compute running differences (e.g. `x - lag(x)`)
    or find when values change (`x != lag(x)`). They are most useful in
    conjunction with `group_by()`, which you'll learn about shortly.

    ```{r}
    x <- 1:10
    x
    lag(x)
    lead(x)
    ```

*   Cumulative and rolling aggregates: R provides functions for running sums,
    products, mins and maxes: `cumsum()`, `cumprod()`, `cummin()`, `cummax()`;
    and dplyr provides `cummean()` for cumulative means. If you need rolling
    aggregates (i.e. a sum computed over a rolling window), try the RcppRoll
    package.

    ```{r}
    x
    cumsum(x)
    cummean(x)
    ```

*   Logical comparisons, `<`, `<=`, `>`, `>=`, `!=`, which you learned about
    earlier. If you're doing a complex sequence of logical operations it's
    often a good idea to store the interim values in new variables so you can
    check that each step is working as expected.
2015-12-29 23:59:14 +08:00
* Ranking: there are a number of ranking functions, but you should
start with `min_rank()`. It does the most usual type of ranking
2015-12-15 23:26:47 +08:00
(e.g. 1st, 2nd, 2nd, 4th). The default gives smallest values the small
ranks; use `desc(x)` to give the largest values the smallest ranks.
2016-07-22 22:15:55 +08:00
If `min_rank()` doesn't do what you need, look at the variants
`row_number()`, `dense_rank()`, `cume_dist()`, `percent_rank()`,
`ntile()`.
2015-12-15 23:26:47 +08:00
2015-12-29 23:59:14 +08:00
```{r}
y <- c(1, 2, 2, NA, 3, 4)
2016-07-14 23:57:54 +08:00
tibble(
row_number(y),
min_rank(y),
dense_rank(y),
percent_rank(y),
cume_dist(y)
2015-12-29 23:59:14 +08:00
) %>% knitr::kable()
```
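
As promised in the logs bullet above, here's a small sketch of why `log2()` is so easy to interpret: each doubling of the input adds exactly 1 on the log scale.

```{r}
# Doubling on the original scale is +1 on the log2 scale
log2(c(1, 2, 4, 8))

# And halving is -1
log2(8) - log2(16)
```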

### Exercises

```{r, eval = FALSE, echo = FALSE}
flights <- flights %>% mutate(
  dep_time = hour * 60 + minute,
  arr_time = (arr_time %/% 100) * 60 + (arr_time %% 100),
  airtime2 = arr_time - dep_time,
  dep_sched = dep_time + dep_delay
)

ggplot(flights, aes(dep_sched)) + geom_histogram(binwidth = 60)
ggplot(flights, aes(dep_sched %% 60)) + geom_histogram(binwidth = 1)
ggplot(flights, aes(air_time - airtime2)) + geom_histogram()
```

1.  Currently `dep_time` and `sched_dep_time` are convenient to look at, but
    hard to compute with because they're not really continuous numbers.
    Convert them to a more convenient representation of number of minutes
    since midnight.

1.  Compare `air_time` with `arr_time - dep_time`. What do you expect to see?
    What do you see? What do you need to do to fix it?

1.  Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you
    expect those three numbers to be related?

1.  Find the 10 most delayed flights using a ranking function. How do you want
    to handle ties? Carefully read the documentation for `min_rank()`.

1.  What does `1:3 + 1:10` return? Why?

1.  What trigonometric functions does R provide?

## Grouped summaries with `summarise()`

The last key verb is `summarise()`. It collapses a data frame to a single row:

```{r}
summarise(flights, delay = mean(dep_delay, na.rm = TRUE))
```

(We'll come back to what that `na.rm = TRUE` means very shortly.)

`summarise()` is not terribly useful unless we pair it with `group_by()`. This changes the unit of analysis from the complete dataset to individual groups. Then, when you use the dplyr verbs on a grouped data frame they'll be automatically applied "by group". For example, if we apply exactly the same code to a data frame grouped by date, we get the average delay per date:

```{r}
by_day <- group_by(flights, year, month, day)
summarise(by_day, delay = mean(dep_delay, na.rm = TRUE))
```

Together `group_by()` and `summarise()` provide one of the tools that you'll use most commonly when working with dplyr: grouped summaries. But before we go any further with this, we need to introduce a powerful new idea: the pipe.

### Combining multiple operations with the pipe

Imagine that we want to explore the relationship between the distance and average delay for each location. Using what you know about dplyr, you might write code like this:

```{r, fig.width = 6}
by_dest <- group_by(flights, dest)
delay <- summarise(by_dest,
  count = n(),
  dist = mean(distance, na.rm = TRUE),
  delay = mean(arr_delay, na.rm = TRUE)
)
delay <- filter(delay, count > 20, dest != "HNL")

# It looks like delays increase with distance up to ~750 miles
# and then decrease. Maybe as flights get longer there's more
# ability to make up delays in the air?
ggplot(data = delay, mapping = aes(x = dist, y = delay)) +
  geom_point(aes(size = count), alpha = 1/3) +
  geom_smooth(se = FALSE)
```

There are three steps to prepare this data:

1.  Group flights by destination.

1.  Summarise to compute distance, average delay, and number of flights.

1.  Filter to remove noisy points and Honolulu airport, which is almost
    twice as far away as the next closest airport.

This code is a little frustrating to write because we have to give each intermediate data frame a name, even though we don't care about it. Naming things is hard, so this slows down our analysis.

There's another way to tackle the same problem with the pipe, `%>%`:

```{r}
delays <- flights %>%
  group_by(dest) %>%
  summarise(
    count = n(),
    dist = mean(distance, na.rm = TRUE),
    delay = mean(arr_delay, na.rm = TRUE)
  ) %>%
  filter(count > 20, dest != "HNL")
```

This focuses on the transformations, not what's being transformed, which makes the code easier to read. You can read it as a series of imperative statements: group, then summarise, then filter. As suggested by this reading, a good way to pronounce `%>%` when reading code is "then".

Behind the scenes, `x %>% f(y)` turns into `f(x, y)`, and `x %>% f(y) %>% g(z)` turns into `g(f(x, y), z)` and so on. You can use the pipe to rewrite multiple operations in a way that you can read left-to-right, top-to-bottom. We'll use piping frequently from now on because it considerably improves the readability of code, and we'll come back to it in more detail in [pipes].
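
To make that rewriting concrete, here's the pipeline above expressed as a single nested call (a sketch to show what the pipe does behind the scenes, not code you'd actually want to write):

```{r, eval = FALSE}
# Equivalent to the piped version: the innermost call runs first
delays <- filter(
  summarise(
    group_by(flights, dest),
    count = n(),
    dist = mean(distance, na.rm = TRUE),
    delay = mean(arr_delay, na.rm = TRUE)
  ),
  count > 20, dest != "HNL"
)
```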

Working with the pipe is one of the key criteria for belonging to the tidyverse. The only exception is ggplot2: it was written before the pipe was discovered. Unfortunately, the next iteration of ggplot2, ggvis, which does use the pipe, isn't quite ready for prime time yet.

### Missing values

You may have wondered about the `na.rm` argument we used above. What happens if we don't set it?

```{r}
flights %>%
  group_by(year, month, day) %>%
  summarise(mean = mean(dep_delay))
```

We get a lot of missing values! That's because aggregation functions obey the usual rule of missing values: if there's any missing value in the input, the output will be a missing value. Fortunately, all aggregation functions have an `na.rm` argument which removes the missing values prior to computation:

```{r}
flights %>%
  group_by(year, month, day) %>%
  summarise(mean = mean(dep_delay, na.rm = TRUE))
```

In this case, where missing values represent cancelled flights, we could also tackle the problem by first removing the cancelled flights. We'll save this dataset so we can reuse it in the next few examples.

```{r}
not_cancelled <- filter(flights, !is.na(dep_delay), !is.na(arr_delay))

not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(mean = mean(dep_delay))
```

### Counts

Whenever you do any aggregation, it's always a good idea to include either a count (`n()`), or a count of non-missing values (`sum(!is.na(x))`). That way you can check that you're not drawing conclusions based on very small amounts of data. For example, let's look at the planes (identified by their tail number) that have the highest average delays:

```{r}
delays <- not_cancelled %>%
  group_by(tailnum) %>%
  summarise(
    delay = mean(arr_delay)
  )

ggplot(data = delays, mapping = aes(x = delay)) +
  geom_freqpoly(binwidth = 10)
```

Wow, there are some planes that have an _average_ delay of 5 hours (300 minutes)!

The story is actually a little more nuanced. We can get more insight if we draw a scatterplot of number of flights vs. average delay:

```{r}
delays <- not_cancelled %>%
  group_by(tailnum) %>%
  summarise(
    delay = mean(arr_delay, na.rm = TRUE),
    n = n()
  )

ggplot(data = delays, mapping = aes(x = n, y = delay)) +
  geom_point()
```

Not surprisingly, there is much greater variation in the average delay when there are few flights. The shape of this plot is very characteristic: whenever you plot a mean (or other summary) vs. group size, you'll see that the variation decreases as the sample size increases.

When looking at this sort of plot, it's often useful to filter out the groups with the smallest numbers of observations, so you can see more of the pattern and less of the extreme variation in the smallest groups. This is what the following code does, as well as showing you a handy pattern for integrating ggplot2 into dplyr flows. It's a bit painful that you have to switch from `%>%` to `+`, but once you get the hang of it, it's quite convenient.

```{r}
delays %>%
  filter(n > 25) %>%
  ggplot(mapping = aes(x = n, y = delay)) +
    geom_point()
```

--------------------------------------------------------------------------------

RStudio tip: a useful keyboard shortcut is Cmd/Ctrl + Shift + P. This resends the previously sent chunk from the editor to the console. This is very convenient when you're (e.g.) exploring the value of `n` in the example above. You send the whole block once with Cmd/Ctrl + Enter, then you modify the value of `n` and press Cmd/Ctrl + Shift + P to resend the complete block.

--------------------------------------------------------------------------------

There's another common variation of this type of pattern. Let's look at how the average performance of batters in baseball is related to the number of times they're at bat. Here I use data from the __Lahman__ package to compute the batting average (number of hits / number of attempts) of every major league baseball player. When I plot the skill of the batter against the number of times batted, you see two patterns:

1.  As above, the variation in our aggregate decreases as we get more
    data points.

2.  There's a positive correlation between skill (batting average, `ba`) and
    number of opportunities to hit the ball (at bat, `ab`). This is because
    teams control who gets to play, and obviously they'll pick their best
    players.

```{r}
# Convert to a tibble so it prints nicely
batting <- tibble::as_tibble(Lahman::Batting)

batters <- batting %>%
  group_by(playerID) %>%
  summarise(
    ba = sum(H, na.rm = TRUE) / sum(AB, na.rm = TRUE),
    ab = sum(AB, na.rm = TRUE)
  )

batters %>%
  filter(ab > 100) %>%
  ggplot(mapping = aes(x = ab, y = ba)) +
    geom_point() +
    geom_smooth(se = FALSE)
```

This also has important implications for ranking. If you naively sort on `desc(ba)`, the people with the best batting averages are clearly lucky, not skilled:

```{r}
batters %>% arrange(desc(ba))
```

You can find a good explanation of this problem at <http://varianceexplained.org/r/empirical_bayes_baseball/> and <http://www.evanmiller.org/how-not-to-sort-by-average-rating.html>.

### Other summary functions

Just using means, counts, and sums can get you a long way, but R provides many other useful summary functions:

*   Measures of location: we've used `mean(x)`, but `median(x)` is also
    useful. The mean is the sum divided by the length; the median is a value
    where 50% of `x` is above it, and 50% is below it.

    It's sometimes useful to combine aggregation with logical subsetting.
    We haven't talked about this sort of subsetting yet, but you'll learn more
    about it in [subsetting].

    ```{r}
    not_cancelled %>%
      group_by(year, month, day) %>%
      summarise(
        avg_delay1 = mean(arr_delay),
        avg_delay2 = mean(arr_delay[arr_delay > 0]) # the average positive delay
      )
    ```

*   Measures of spread: `sd(x)`, `IQR(x)`, `mad(x)`. The root mean squared
    deviation, or standard deviation or sd for short, is the standard measure
    of spread. The interquartile range `IQR()` and median absolute deviation
    `mad(x)` are robust equivalents that may be more useful if you have
    outliers.

    ```{r}
    # Why is distance to some destinations more variable than to others?
    not_cancelled %>%
      group_by(dest) %>%
      summarise(distance_sd = sd(distance)) %>%
      arrange(desc(distance_sd))
    ```

*   Measures of rank: `min(x)`, `quantile(x, 0.25)`, `max(x)`. Quantiles
    are a generalisation of the median. For example, `quantile(x, 0.25)`
    will find a value of `x` that is greater than 25% of the values,
    and less than the remaining 75%.

    ```{r}
    # When do the first and last flights leave each day?
    not_cancelled %>%
      group_by(year, month, day) %>%
      summarise(
        first = min(dep_time),
        last = max(dep_time)
      )
    ```

*   Measures of position: `first(x)`, `nth(x, 2)`, `last(x)`. These work
    similarly to `x[1]`, `x[2]`, and `x[length(x)]` but let you set a default
    value if that position does not exist (e.g. you're trying to get the 3rd
    element from a group that only has two elements).

    These functions are complementary to filtering on ranks. Filtering gives
    you all variables, with each observation in a separate row. Summarising
    gives you one row per group, with multiple variables:

    ```{r}
    not_cancelled %>%
      group_by(year, month, day) %>%
      mutate(r = min_rank(desc(dep_time))) %>%
      filter(r %in% range(r))

    not_cancelled %>%
      group_by(year, month, day) %>%
      summarise(
        first_dep = first(dep_time),
        last_dep = last(dep_time)
      )
    ```

*   Counts: You've seen `n()`, which takes no arguments, and returns the
    size of the current group. To count the number of non-missing values, use
    `sum(!is.na(x))`. To count the number of distinct (unique) values, use
    `n_distinct(x)`.

    ```{r}
    # Which destinations have the most carriers?
    not_cancelled %>%
      group_by(dest) %>%
      summarise(carriers = n_distinct(carrier)) %>%
      arrange(desc(carriers))
    ```

    Counts are so useful that dplyr provides a simple helper if all you want is
    a count:

    ```{r}
    not_cancelled %>% count(dest)
    ```

    You can optionally provide a weight variable. For example, you could use
    this to "count" (sum) the total number of miles a plane flew:

    ```{r}
    not_cancelled %>%
      count(tailnum, wt = distance)
    ```

*   Counts and proportions of logical values: `sum(x > 10)`, `mean(y == 0)`.
    When used with numeric functions, `TRUE` is converted to 1 and `FALSE` to 0.
    This makes `sum()` and `mean()` very useful: `sum(x)` gives the number of
    `TRUE`s in `x`, and `mean(x)` gives the proportion.

    ```{r}
    # How many flights left before 5am? (these usually indicate delayed
    # flights from the previous day)
    not_cancelled %>%
      group_by(year, month, day) %>%
      summarise(n_early = sum(dep_time < 500))

    # What proportion of flights are delayed by more than an hour?
    not_cancelled %>%
      group_by(year, month, day) %>%
      summarise(hour_perc = mean(arr_delay > 60, na.rm = TRUE))
    ```

### Grouping by multiple variables

When you group by multiple variables, each summary peels off one level of the grouping. That makes it easy to progressively roll-up a dataset:

```{r}
daily <- group_by(flights, year, month, day)
(per_day   <- summarise(daily, flights = n()))
(per_month <- summarise(per_day, flights = sum(flights)))
(per_year  <- summarise(per_month, flights = sum(flights)))
```

Be careful when progressively rolling up summaries: it's OK for sums and counts, but you need to think about weighting means and variances, and it's not possible to do it exactly for rank-based statistics like the median. In other words, the sum of groupwise sums is the overall sum, but the median of groupwise medians is not the overall median.
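
Here's a small sketch of the issue for means: the unweighted mean of the daily means is not the overall mean, but weighting each day by its number of flights recovers it. (`ungroup()`, used here to drop the grouping before the final summary, is explained just below.)

```{r}
daily_delay <- not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(n = n(), delay = mean(dep_delay))

daily_delay %>%
  ungroup() %>%
  summarise(
    mean_of_means = mean(delay),             # close, but not the overall mean
    weighted_mean = sum(delay * n) / sum(n)  # equals mean(not_cancelled$dep_delay)
  )
```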

### Ungrouping

If you need to remove grouping, and return to operations on ungrouped data, use `ungroup()`.

```{r}
daily %>%
  ungroup() %>%             # no longer grouped by date
  summarise(flights = n())  # all flights
```

### Exercises

1.  Brainstorm at least 5 different ways to assess the typical delay
    characteristics of a group of flights. Consider the following scenarios:

    * A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of
      the time.

    * A flight is always 10 minutes late.

    * A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of
      the time.

    * 99% of the time a flight is on time. 1% of the time it's 2 hours late.

    Which is more important: arrival delay or departure delay?

1.  Our definition of cancelled flights (`!is.na(dep_delay) & !is.na(arr_delay)`)
    is slightly sub-optimal. Why? Which is the most important column?

1.  Look at the number of cancelled flights per day. Is there a pattern?
    Is the proportion of cancelled flights related to the average delay?

1.  Which carrier has the worst delays? Challenge: can you disentangle the
    effects of bad airports vs. bad carriers? Why/why not? (Hint: think about
    `flights %>% group_by(carrier, dest) %>% summarise(n())`)

1.  For each plane, count the number of flights before the first delay
    of greater than 1 hour.

1.  What does the `sort` argument to `count()` do? When might you use it?

## Grouped mutates (and filters)

Grouping is most useful in conjunction with `summarise()`, but you can also do convenient operations with `mutate()` and `filter()`:

*   Find the worst members of each group:

    ```{r}
    flights_sml %>%
      group_by(year, month, day) %>%
      filter(rank(desc(arr_delay)) < 10)
    ```

*   Find all groups bigger than a threshold:

    ```{r}
    popular_dests <- flights %>%
      group_by(dest) %>%
      filter(n() > 365)
    ```

*   Standardise to compute per group metrics:

    ```{r}
    popular_dests %>%
      filter(arr_delay > 0) %>%
      mutate(prop_delay = arr_delay / sum(arr_delay)) %>%
      select(year:day, dest, arr_delay, prop_delay)
    ```

A grouped filter is a grouped mutate followed by an ungrouped filter. I generally avoid them except for quick and dirty manipulations: otherwise it's hard to check that you've done the manipulation correctly.
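
A minimal sketch of that equivalence, reusing the "worst members" example above (the intermediate rank column `r` is just an illustrative name):

```{r, eval = FALSE}
# Grouped filter...
flights_sml %>%
  group_by(year, month, day) %>%
  filter(rank(desc(arr_delay)) < 10)

# ...selects the same rows as a grouped mutate + ungrouped filter
flights_sml %>%
  group_by(year, month, day) %>%
  mutate(r = rank(desc(arr_delay))) %>%
  ungroup() %>%
  filter(r < 10)
```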

Functions that work most naturally in grouped mutates and filters are known as window functions (vs. the summary functions used for summaries). You can learn more about useful window functions in the corresponding vignette: `vignette("window-functions")`.

### Exercises

1.  Refer back to the table of useful mutate and filtering functions.
    Describe how each operation changes when you combine it with grouping.

1.  Which plane (`tailnum`) has the worst on-time record?

1.  What time of day should you fly if you want to avoid delays as much
    as possible?

1.  Delays are typically temporally correlated: even once the problem that
    caused the initial delay has been resolved, later flights are delayed
    to allow earlier flights to leave. Using `lag()` explore how the delay
    of a flight is related to the delay of the immediately preceding flight.

1.  Look at each destination. Can you find flights that are suspiciously
    fast? (i.e. flights that represent a potential data entry error). Compute
    the air time of a flight relative to the shortest flight to that
    destination. Which flights were most delayed in the air?

1.  Find all destinations that are flown by at least two carriers. Use that
    information to rank the carriers.