Polishing transform chapter

hadley 2016-07-14 10:46:37 -05:00
parent 3e177dd093
commit 40613ad2d7
1 changed file with 119 additions and 88 deletions


# Data transformation {#transform}
## Introduction
Visualisation is an important tool for insight generation, but it is rare that you get the data in exactly the right form you need for visualisation. Often you'll need to create some new variables or summaries, or maybe you just want to rename the variables or reorder the observations to make the data a little easier to work with. You'll learn how to do all that (and more!) in this chapter, which will teach you how to transform your data using the dplyr package.
When working with data you must:
1. Figure out what you want to do.

1. Describe those tasks in the form of a computer program.

1. Execute the program.
dplyr makes these steps fast and easy:
* By constraining your options, it simplifies how you can think about
common data manipulation tasks.
### Prerequisites
In this chapter we're going to focus on how to use the dplyr package. We'll illustrate the key ideas using data from the nycflights13 package, and use ggplot2 to help us understand the data.
```{r setup}
library(dplyr)
library(nycflights13)
library(ggplot2)
```
### nycflights13
To explore the basic data manipulation verbs of dplyr, we'll use the `flights` data frame from the nycflights13 package. This data frame contains all `r format(nrow(nycflights13::flights), big.mark = ",")` flights that departed from New York City in 2013. The data comes from the US [Bureau of Transportation Statistics](http://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&Link=0), and is documented in `?nycflights13`.
```{r}
flights
```
You might notice that this data frame prints a little differently to other data frames you might have used in the past: it only shows the first few rows and all the columns that fit on one screen. (To see the whole dataset, you can run `View(flights)`, which will open the dataset in the RStudio viewer.) It prints differently because it's a __tibble__. Tibbles are data frames, but slightly tweaked to work better in the tidyverse. For now, you don't need to worry about the differences; we'll come back to tibbles in more detail in [wrangle].
You might also have noticed the row of three letter abbreviations under the column names. These describe the type of each variable:
* `lgl` stands for logical, vectors that contain only `TRUE` or `FALSE`.
* `int` stands for integers.
* `dbl` stands for doubles, or real numbers.
* `chr` stands for character vectors, or strings.
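If you're curious, here's a quick sketch (independent of the flights data) showing how you can inspect the underlying type of a vector with `typeof()`:

```{r}
typeof(TRUE)  # "logical"   -> lgl
typeof(1L)    # "integer"   -> int
typeof(1.5)   # "double"    -> dbl
typeof("a")   # "character" -> chr
```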
### Dplyr basics
In this chapter you are going to learn the five key dplyr functions that allow you to solve the vast majority of your data manipulation challenges:
* pick observations by their values (`filter()`),
* reorder the rows (`arrange()`),
* pick variables by their names (`select()`),
* create new variables with functions of existing variables (`mutate()`), or
* collapse many values down to a single summary (`summarise()`).
All verbs work similarly:

1. The first argument is a data frame.

1. The subsequent arguments describe what to do with the data frame,
   using the variable names (without quotes).

1. The result is a new data frame.
Together these properties make it easy to chain together multiple simple steps to achieve a complex result. Let's dive in and see how these verbs work.
## Filter rows with `filter()`
Or with the base `subset()` function:

```{r, eval = FALSE}
subset(flights, month == 1 & day == 1)
```
`filter()` works similarly to `subset()` except that you can give it any number of filtering conditions, which are applied simultaneously: a row must meet all the criteria in order to be included in the result.
--------------------------------------------------------------------------------
When you run that line of code, dplyr executes the filtering operation and returns a new data frame. dplyr functions never modify their inputs, so if you want to save the result, you'll need to use the assignment operator, `<-`:
```{r}
jan1 <- filter(flights, month == 1, day == 1)
```

R either prints out the results, or saves them to a variable.
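If you want to do both, one option (a quick sketch) is to wrap the assignment in parentheses, which saves the result and prints it:

```{r}
(dec25 <- filter(flights, month == 12, day == 25))
```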
### Comparisons
To use filtering effectively, you have to know how to select the observations that you want using the comparison operators. R provides the standard suite: `>`, `>=`, `<`, `<=`, `!=` (not equal), and `==` (equal).
When you're starting out with R, the easiest mistake to make is to use `=` instead of `==` when testing for equality. When this happens you'll get an informative error:
```{r, error = TRUE}
filter(flights, month = 1)
```
Whenever you see this message, check for `=` instead of `==`.
Beware using `==` with floating point numbers:
```{r}
sqrt(2) ^ 2 == 2
1/49 * 49 == 1
```
Computers use finite precision arithmetic (they obviously can't store an infinite number of digits!) so remember that every number you see is an approximation. Instead of relying on `==`, use `dplyr::near()`:
```{r}
near(sqrt(2) ^ 2, 2)
near(1 / 49 * 49, 1)
```
### Logical operators
Multiple arguments to `filter()` are combined with "and". To get more complicated expressions, you can use Boolean operators yourself:
Multiple arguments to `filter()` are combined with "and": every expression must be true in order for a row to be included in the output. For other types of combinations, you'll need to use Boolean operators yourself:
```{r, eval = FALSE}
filter(flights, month == 11 | month == 12)
```
Note that the order of operations isn't like English. The following expression doesn't find months that equal 11 or 12. Instead, it finds all months that equal `11 | 12`, an expression that evaluates to `TRUE`. In a numeric context (like here), `TRUE` becomes one, so this finds all flights in January, not November or December (it is equivalent to `filter(flights, month == 1)`). This is quite confusing!
```{r, eval = FALSE}
filter(flights, month == 11 | 12)
```
Instead you can use the helpful `%in%` shortcut:
```{r}
filter(flights, month %in% c(11, 12))
```
The following figure shows the complete set of Boolean operations:
```{r bool-ops, echo = FALSE, fig.cap = "Complete set of boolean operations. `x` is the left-hand circle, `y` is the right-hand circle, and the shaded regions show which parts each operator selects."}
knitr::include_graphics("diagrams/transform-logical.png")
```
For example, if you wanted to find flights that weren't delayed (on arrival or departure) by more than two hours, you could use either of the following two filters:

```{r, eval = FALSE}
filter(flights, !(arr_delay > 120 | dep_delay > 120))
filter(flights, arr_delay <= 120, dep_delay <= 120)
```
Note that R has both `&` and `|` and `&&` and `||`. `&` and `|` are vectorised: you give them two vectors of logical values and they return a vector of logical values. `&&` and `||` are scalar operators: you give them individual `TRUE`s or `FALSE`s. They're used in `if` statements when programming. You'll learn about that later on in [Conditional execution].
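A minimal sketch of the difference:

```{r}
c(TRUE, TRUE, FALSE) & c(TRUE, FALSE, FALSE)  # vectorised: one result per pair
TRUE && FALSE                                 # scalar: a single result
```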
Sometimes you want to find all rows after the first `TRUE`, or all rows until the first `FALSE`. The window functions `cumany()` and `cumall()` allow you to find these values:
```{r}
df <- data_frame(
  x = c(FALSE, TRUE, FALSE),  # example values chosen to illustrate
  y = c(TRUE, FALSE, TRUE)
)
filter(df, cumany(x)) # all rows after first TRUE
filter(df, cumall(y)) # all rows until first FALSE
```
Whenever you start using complicated, multipart expressions in `filter()`, consider making them explicit variables instead. That makes it much easier to check your work. You'll learn how to create new variables shortly.
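For instance, here's a sketch of that workflow (it uses `mutate()`, which is introduced below; the variable names are just illustrative):

```{r, eval = FALSE}
delayed <- mutate(flights,
  big_arr_delay = arr_delay > 120,
  big_dep_delay = dep_delay > 120
)
# now you can inspect the new columns before filtering on them
filter(delayed, !(big_arr_delay | big_dep_delay))
```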
### Missing values
One important feature of R that can make comparison tricky is missing values, or `NA`s ("not availables"). `NA` represents an unknown value, so missing values are "contagious": almost any operation involving an unknown value will also be unknown.
```{r}
NA > 5

# x and y both represent unknown values
x <- NA
y <- NA
x == y
# We don't know!
```
If you want to determine if a value is missing, use `is.na()`. (This is such a common mistake RStudio will remind you whenever you write `x == NA` in your script)
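For example:

```{r}
x <- NA
is.na(x)
```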
`filter()` only includes rows where the condition is `TRUE`; it excludes both `FALSE` and `NA` values. If you want to preserve missing values, ask for them explicitly:
```{r}
df <- data_frame(x = c(1, NA, 3))
filter(df, x > 1)
filter(df, is.na(x) | x > 1)
```

### Exercises
1. Find all the flights that:
    1. Were delayed by more than two hours.
    1. Flew to Houston (`IAH` or `HOU`).
    1. Were operated by United, American, or Delta.
    1. Departed in summer.
    1. Arrived more than two hours late, but didn't leave late.
    1. Were delayed by at least an hour, but made up over 30 minutes in flight.
    1. Departed between midnight and 6am.
1. How many flights have a missing `dep_time`? What other variables are
missing? What might these rows represent?
1. Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing?
Why is `FALSE & NA` not missing? Can you figure out the general
rule? (`NA * 0` is a tricky counterexample!)
## Arrange rows with `arrange()`
`arrange()` works similarly to `filter()` except that instead of selecting rows, it changes their order. It takes a data frame and a set of column names (or more complicated expressions) to order by. If you provide more than one column name, each additional column will be used to break ties in the values of preceding columns:
```{r}
arrange(flights, year, month, day)
```

Use `desc()` to re-order by a column in descending order:

```{r}
arrange(flights, desc(arr_delay))
```
Missing values are always sorted at the end:
```{r}
df <- data_frame(x = c(5, 2, NA))
arrange(df, x)
arrange(df, desc(x))
```

### Exercises
1. Sort `flights` to find the most delayed flights. Find the flights that
left earliest.
1. Sort `flights` to find the fastest flights.
1. Which flights travelled the longest? Which travelled the shortest?
## Select columns with `select()`
It's not uncommon to get datasets with hundreds or even thousands of variables. In this case, the first challenge is often narrowing in on the variables you're actually interested in. `select()` allows you to rapidly zoom in on a useful subset using operations based on the names of the variables.
`select()` is not terribly useful with the flights data because we only have 19 variables, but you can still get the general idea:
```{r}
# Select columns by name
select(flights, year, month, day)
# Select all columns between year and day (inclusive)
select(flights, year:day)
# Select all columns except those from year to day (inclusive)
select(flights, -(year:day))
```

There are a number of helper functions you can use within `select()`:

* `starts_with("abc")`: matches names that begin with "abc".

* `ends_with("xyz")`: matches names that end with "xyz".

* `contains("ijk")`: matches names that contain "ijk".

* `matches("(.)\\1")`: selects variables that match a regular expression.

* `num_range("x", 1:3)`: matches `x1`, `x2` and `x3`.

See `?select` for more details.

It's possible to use `select()` to rename variables:

```{r}
select(flights, tail_num = tailnum)
```
But because `select()` drops all the variables not explicitly mentioned, it's not that useful. Instead, use `rename()`, which is a variant of `select()` that keeps all the variables that aren't explicitly mentioned:
```{r}
rename(flights, tail_num = tailnum)
```
Another option is to use `select()` in conjunction with the `everything()` helper. This is useful if you have a handful of variables you'd like to move to the start of the data frame.
```{r}
select(flights, time_hour, air_time, everything())
```
--------------------------------------------------------------------------------
The `select()` function works similarly to the `select` argument in `base::subset()`. `select()` is its own function in dplyr because the dplyr philosophy is to have small functions that each do one thing well.
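For comparison, here's a rough sketch of the base equivalent (the output details differ: `subset()` returns a plain data frame, not a tibble):

```{r, eval = FALSE}
# dplyr
select(flights, year, month, day)
# base R
subset(flights, select = c(year, month, day))
```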
--------------------------------------------------------------------------------

### Exercises
1. Brainstorm as many ways as possible to select `dep_time`, `dep_delay`,
`arr_time`, and `arr_delay` from `flights`.
1. What happens if you include the name of a variable multiple times in
a `select()` call?
1. What does the `one_of()` function do? Why might it be helpful in conjunction
with this vector?
```{r}
vars <- c("year", "month", "day", "dep_delay", "arr_delay")
```
1. Does the result of running the following code surprise you? How do the
select helpers deal with case by default? How can you change that default?
```{r, eval = FALSE}
select(flights, contains("TIME"))
```
## Add new variables with `mutate()`
Besides selecting sets of existing columns, it's often useful to add new columns that are functions of existing columns. That's the job of `mutate()`.
`mutate()` always adds new columns at the end of your dataset so we'll start by creating a narrower dataset so we can see the new variables. Remember that when you're in RStudio, the easiest way to see all the columns is `View()`.
```{r}
flights_sml <- select(flights,
  year:day,
  ends_with("delay"),
  distance,
  air_time
)
mutate(flights_sml,
  gain = arr_delay - dep_delay,
  speed = distance / air_time * 60
)
```
Note that you can refer to columns that you've just created:
```{r}
mutate(flights_sml,
  gain = arr_delay - dep_delay,
  hours = air_time / 60,
  gain_per_hour = gain / hours
)
```

### Useful creation functions

There are many functions for creating new variables that you can use with `mutate()`. The key property is that the function must be vectorised: it must take a vector of values as input, and return a vector with the same number of values as output. There's no way to list every possible function that you might use, but here's a selection that is frequently useful:
* Arithmetic operators: `+`, `-`, `*`, `/`, `^`. These are all vectorised.
  Arithmetic operators are also useful in conjunction with the aggregate
  functions you'll learn about later. For example, `x / sum(x)` calculates
  the proportion of a total, and `y - mean(y)` computes the difference from
  the mean.
* Modular arithmetic: `%/%` (integer division) and `%%` (remainder), where
`x == y * (x %/% y) + (x %% y)`. Modular arithmetic is a handy tool because
  it allows you to break integers up into pieces. For example, in the
  flights dataset, you can compute `hour` and `minute` from `dep_time` with:

```{r}
transmute(flights,
  dep_time,
  hour = dep_time %/% 100,
  minute = dep_time %% 100
)
```
* Logs: `log()`, `log2()`, `log10()`. Logarithms are an incredibly useful
transformation for dealing with data that ranges across multiple orders of
magnitude. They also convert multiplicative relationships to additive, a
feature we'll come back to in modelling.
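  For instance, a quick sketch of that additive property:

```{r}
# each doubling becomes a step of +1 on the log2 scale
log2(c(1, 2, 4, 8))
```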
* Offsets: `lead()` and `lag()` allow you to refer to leading or lagging
  values. This allows you to compute running differences
  (e.g. `x - lag(x)`) or find when values change (`x != lag(x)`).
  They are most useful in conjunction with `group_by()`, which you'll
  learn about shortly.

```{r}
(x <- 1:10)
lag(x)
lead(x)
```
* Cumulative and rolling aggregates: R provides functions for running sums,
products, mins and maxes: `cumsum()`, `cumprod()`, `cummin()`, `cummax()`;
and dplyr provides `cummean()` for cumulative means. If you need rolling
aggregates (i.e. a sum computed over a rolling window), try the RcppRoll
package.
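  A small sketch:

```{r}
x <- c(1, 3, 5)
cumsum(x)   # running sums:  1 4 9
cummean(x)  # running means: 1 2 3
```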
* Logical comparisons, `<`, `<=`, `>`, `>=`, `!=`, which you learned about
earlier. If you're doing a complex sequence of logical operations it's
often a good idea to store the interim values in new variables so you can
check that each step is working as expected.
* Ranking: there are a number of ranking functions, but you should
  start with `min_rank()`. It does the most usual type of ranking
  (e.g. 1st, 2nd, 2nd, 4th). The default gives the smallest values the
  smallest ranks; use `desc(x)` to give the largest values the smallest
  ranks.
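  For example:

```{r}
y <- c(1, 2, 2, NA, 3, 4)
min_rank(y)
min_rank(desc(y))
```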
### Exercises

1. Currently `dep_time` and `sched_dep_time` are convenient to look at, but
   hard to compute with because they're not really continuous numbers.
   Convert them to a more convenient representation of number of minutes
   since midnight.
1. Compare `air_time` with `arr_time - dep_time`. What do you expect to see?
What do you see? Why?
## Grouped summaries with `summarise()`

The last verb is `summarise()`. It collapses a data frame to a single row:

```{r}
summarise(flights, delay = mean(dep_delay, na.rm = TRUE))
```
(We'll come back to what that `na.rm = TRUE` means very shortly.)
That's not terribly useful unless we pair it with `group_by()`. This changes the unit of analysis from the complete dataset to individual groups. Then, when you use the dplyr verbs on a grouped data frame they'll be automatically applied "by group". For example, if we applied exactly the same code to a data frame grouped by date, we get the average delay per date:
```{r}
by_day <- group_by(flights, year, month, day)
summarise(by_day, delay = mean(dep_delay, na.rm = TRUE))
```

### Combining multiple operations with the pipe

Imagine that we want to explore the relationship between the distance and average delay for each location. Using what you know about dplyr, you might write code like this:

```{r}
by_dest <- group_by(flights, dest)
delay <- summarise(by_dest,
  count = n(),
  dist = mean(distance, na.rm = TRUE),
  delay = mean(arr_delay, na.rm = TRUE)
)
delay <- filter(delay, count > 20, dest != "HNL")

ggplot(delay, aes(dist, delay)) +
  geom_point(aes(size = count), alpha = 1/3) +
  geom_smooth(se = FALSE)
```
There are three steps to prepare this data:
1. Group flights by destination.

1. Summarise to compute distance, average delay, and number of flights.

1. Filter to remove noisy points and Honolulu airport, which is almost
   twice as far away as the next closest airport.
This code is a little frustrating to write because we have to give each intermediate data frame a name, even though we don't care about it. Naming things well is hard, so this slows us down.
Another way to tackle the same problem is with the pipe, `%>%`:

```{r}
delays <- flights %>%
  group_by(dest) %>%
  summarise(
    count = n(),
    dist = mean(distance, na.rm = TRUE),
    delay = mean(arr_delay, na.rm = TRUE)
  ) %>%
  filter(count > 20, dest != "HNL")
```
This focuses on the transformations, not what's being transformed, which makes the code easier to read. You can read it as a series of imperative statements: group, then summarise, then filter. As suggested by this reading, a good way to pronounce `%>%` when reading code is "then".
Behind the scenes, `x %>% f(y)` turns into `f(x, y)`, and `x %>% f(y) %>% g(z)` turns into `g(f(x, y), z)` and so on. You can use the pipe to rewrite multiple operations in a way that you can read left-to-right, top-to-bottom. We'll use piping frequently from now on because it considerably improves the readability of code, and we'll come back to it in more detail in [pipes].
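For instance, a small sketch of the same operation written both ways:

```{r, eval = FALSE}
# nested form
summarise(group_by(flights, dest), n = n())
# piped form, read left-to-right
flights %>% group_by(dest) %>% summarise(n = n())
```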
Working with the pipe is one of the key criteria for belonging to the tidyverse. The only exception is ggplot2: it was written before the pipe was discovered. Unfortunately, the next iteration of ggplot2, ggvis, which does use the pipe, isn't yet ready for prime time.
### Missing values
You may have wondered about the `na.rm` argument we used above. What happens if we don't set it?

```{r}
flights %>%
  group_by(year, month, day) %>%
  summarise(mean = mean(dep_delay))
```
We get a lot of missing values! That's because aggregation functions obey the usual rule of missing values: if there's any missing value in the input, the output will be a missing value. Fortunately, all aggregation functions have an `na.rm` argument which removes the missing values prior to computation:
```{r}
flights %>%
  group_by(year, month, day) %>%
summarise(mean = mean(dep_delay, na.rm = TRUE))
```
In this case, where missing values represent cancelled flights, we could also tackle the problem by first removing the cancelled flights. We'll save this dataset so we can reuse it in the next few examples.
```{r}
not_cancelled <- filter(flights, !is.na(dep_delay), !is.na(arr_delay))
not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(mean = mean(dep_delay))
```
### Counts
Whenever you do any aggregation, it's always a good idea to include either a count (`n()`), or a count of non-missing values (`sum(!is.na(x))`). That way you can check that you're not drawing conclusions based on very small amounts of data. For example, let's look at the planes (identified by their tail number) that have the highest average delays:
```{r}
delays <- not_cancelled %>%
  group_by(tailnum) %>%
  summarise(
    delay = mean(arr_delay)
  )

ggplot(delays, aes(delay)) +
  geom_histogram(binwidth = 10)
```
Wow, there are some planes that have an _average_ delay of 5 hours (300 minutes)!
The story is actually a little more nuanced. We can get more insight if we draw a scatterplot of number of flights vs. average delay:
```{r}
delays <- not_cancelled %>%
  group_by(tailnum) %>%
  summarise(
    delay = mean(arr_delay, na.rm = TRUE),
    n = n()
  )

ggplot(delays, aes(n, delay)) +
  geom_point()
```
Not surprisingly, there is much greater variation in the average delay when there are few flights. The shape of this plot is very characteristic: whenever you plot a mean (or most other summaries) vs. number of observations, you'll see that the variation decreases as the sample size increases.
When looking at this sort of plot, it's often useful to filter out the groups with the smallest numbers of observations, so you can see more of the pattern and less of the extreme variation in the smallest groups. This is what the following code does, as well as showing you a handy pattern for integrating ggplot2 into dplyr flows. It's a bit painful that you have to switch from `%>%` to `+`, but once you get the hang of it, it's quite convenient.
```{r}
delays %>%
  filter(n > 25) %>%
  ggplot(aes(n, delay)) +
    geom_point()
```
--------------------------------------------------------------------------------
RStudio tip: a useful keyboard shortcut is Cmd/Ctrl + Shift + P. This resends the previously sent chunk from the editor to the console. This is very convenient when you're (e.g.) exploring the value of `n` in the example above. You send the whole block once with Cmd/Ctrl + Enter, then you modify the value of `n` and press Cmd/Ctrl + Shift + P to resend the complete block.
--------------------------------------------------------------------------------
### Useful summary functions
Just using means, counts, and sums can get you a long way, but R provides many other useful summary functions:
* Measures of location: we've used `mean(x)`, but `median(x)` is also
useful. The mean is the sum divided by the length; the median is a value
where 50% of `x` is above, and 50% is below.
It's sometimes useful to combine aggregation with logical subsetting.
We haven't talked about this sort of subsetting yet, but you'll learn more
about it in [subsetting].
```{r}
not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(
    # average delay:
    avg_delay1 = mean(arr_delay),
    # average positive delay:
    avg_delay2 = mean(arr_delay[arr_delay > 0])
  )
```
* Measures of spread: `sd(x)`, `IQR(x)`, `mad(x)`. The root mean squared
  deviation, or standard deviation `sd(x)`, is the standard measure of
  spread. The interquartile range `IQR(x)` and median absolute deviation
  `mad(x)` are robust equivalents that may be more useful if you have
  outliers.
* Measures of rank: `min(x)`, `quantile(x, 0.25)`, `max(x)`. Quantiles are
  a generalisation of the median. For example, `quantile(x, 0.25)` will
  find a value of `x` that is greater than 25% of the values, and less
  than the remaining 75%.

```{r}
# When do the first and last flights leave each day?
not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(
    first = min(dep_time),
    last = max(dep_time)
  )
```
* Measures of position: `first(x)`, `nth(x, 2)`, `last(x)`. These work
similarly to `x[1]`, `x[2]`, and `x[length(x)]` but let you set a default
value if that position does not exist (i.e. you're trying to get the 3rd
element from a group that only has two elements).
These functions are complementary to filtering on ranks. Filtering gives
you all variables, with each observation in a separate row. Summarising
gives you one row per group:

```{r}
not_cancelled %>%
group_by(year, month, day) %>%
summarise(
first_dep = first(dep_time),
last_dep = last(dep_time)
)
```
* Counts: You've seen `n()`, which takes no arguments, and returns the
  size of the current group. To count the number of non-missing values,
  use `sum(!is.na(x))`. To count the number of distinct (unique) values,
  use `n_distinct(x)`.

```{r}
# Which destinations have the most carriers?
not_cancelled %>%
  group_by(dest) %>%
  summarise(carriers = n_distinct(carrier)) %>%
  arrange(desc(carriers))
```
Counts are so useful that dplyr provides a simple helper if all you want is
a count:
```{r}
not_cancelled %>% count(dest)
```
* Counts and proportions of logical values: `sum(x > 10)`, `mean(y == 0)`.
When used with numeric functions, `TRUE` is converted to 1 and `FALSE` to 0.
This makes `sum()` and `mean()` very useful: `sum(x)` gives the number of
`TRUE`s in `x`, and `mean(x)` gives the proportion.
```{r}
# How many flights left before 5am? (these usually indicate delayed
# flights from the previous day)
not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(n_early = sum(dep_time < 500))

# What proportion of flights are delayed by more than an hour?
not_cancelled %>%
  group_by(year, month, day) %>%
  summarise(hour_perc = mean(arr_delay > 60))
```

### Grouping by multiple variables

When you group by multiple variables, each summary peels off one level of the grouping. That makes it easy to progressively roll up a dataset:

```{r}
daily <- group_by(flights, year, month, day)
(per_day   <- summarise(daily, flights = n()))
(per_month <- summarise(per_day, flights = sum(flights)))
(per_year  <- summarise(per_month, flights = sum(flights)))
```
Be careful when progressively rolling up summaries: it's OK for sums and counts, but you need to think about weighting for means and variances, and it's not possible to do it exactly for rank-based statistics like the median. In other words, the sum of groupwise sums is the overall sum, but the median of groupwise medians is not the overall median.
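Here's a tiny sketch of why, with made-up numbers:

```{r}
x <- c(1, 2, 3, 10, 20)
# the sum of groupwise sums is the overall sum:
sum(c(sum(x[1:3]), sum(x[4:5]))) == sum(x)
# but the median of groupwise medians is not the overall median:
median(c(median(x[1:3]), median(x[4:5])))
median(x)
```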
### Ungrouping
If you need to remove grouping, and return to operations on ungrouped data, use `ungroup()`:

```{r}
daily %>%
  ungroup() %>%             # no longer grouped by date
  summarise(flights = n())  # all flights
```

### Exercises

1. Which carrier has the worst delays? Challenge: can you disentangle the
effects of bad airports vs. bad carriers? Why/why not? (Hint: think about
`flights %>% group_by(carrier, dest) %>% summarise(n())`)
1. For each plane, count the number of flights before the first delay
of greater than 1 hour.
## Grouped mutates (and filters)
Grouping is most useful in conjunction with `summarise()`, but you can also do convenient operations with `mutate()` and `filter()`:
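For example, here's a sketch of a grouped filter that picks out the worst members of each group, reusing the `flights_sml` data frame created earlier:

```{r}
flights_sml %>%
  group_by(year, month, day) %>%
  filter(rank(desc(arr_delay)) < 10)
```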
A grouped filter is a grouped mutate followed by an ungrouped filter. I generally avoid them except for quick and dirty manipulations: otherwise it's hard to check that you've done the manipulation correctly.
Functions that work most naturally in grouped mutates and filters are known as window functions (vs. the summary functions used for summaries). You can learn more about useful window functions in the corresponding vignette: `vignette("window-functions")`.
### Exercises