# Numeric vectors {#numbers}
```{r, results = "asis", echo = FALSE}
status("polishing")
```

## Introduction

In this chapter, you'll learn useful tools for creating and manipulating numeric vectors.
We'll start by going into a little more detail of `count()` before diving into various numeric transformations.
You'll then learn about more general transformations that can be applied to other types of vector, but are often used with numeric vectors.
Then you'll learn about a few more useful summaries and how they can also be used with `mutate()`.

### Prerequisites

This chapter mostly uses functions from base R, which are available without loading any packages.
But we still need the tidyverse because we'll use these base R functions inside of tidyverse functions like `mutate()` and `filter()`.
Like in the last chapter, we'll use real examples from nycflights13, as well as toy examples made with `c()` and `tribble()`.

```{r setup, message = FALSE}
library(tidyverse)
library(nycflights13)
```

### Counts

It's surprising how much data science you can do with just counts and a little basic arithmetic, so dplyr strives to make counting as easy as possible with `count()`.
This function is great for quick exploration and checks during analysis:

```{r}
flights |> count(dest)
```
(Despite the advice in Chapter \@ref(code-style), I usually put `count()` on a single line because I'm usually using it at the console for a quick check that my calculation is working as expected.)

If you want to see the most common values, add `sort = TRUE`:

```{r}
flights |> count(dest, sort = TRUE)
```
And remember that if you want to see all the values, you can use `|> View()` or `|> print(n = Inf)`.

You can perform the same computation "by hand" with `group_by()`, `summarise()` and `n()`.
This is useful because it allows you to compute other summaries at the same time:

```{r}
flights |>
  group_by(dest) |>
  summarise(
    n = n(),
    delay = mean(arr_delay, na.rm = TRUE)
  )
```
`n()` is a special summary function that doesn't take any arguments and instead accesses information about the "current" group.
This means that it only works inside dplyr verbs:

```{r, error = TRUE}
n()
```
There are a couple of variants of `n()` that you might find useful:

- `n_distinct(x)` counts the number of distinct (unique) values of one or more variables.
  For example, we could figure out which destinations are served by the most carriers:

  ```{r}
  flights |>
    group_by(dest) |>
    summarise(
      carriers = n_distinct(carrier)
    ) |>
    arrange(desc(carriers))
  ```

- A weighted count is a sum.
  For example, you could "count" the number of miles each plane flew:

  ```{r}
  flights |>
    group_by(tailnum) |>
    summarise(miles = sum(distance))
  ```

  Weighted counts are a common problem, so `count()` has a `wt` argument that does the same thing:

  ```{r}
  flights |> count(tailnum, wt = distance)
  ```

- You can count missing values by combining `sum()` and `is.na()`.
  In the flights dataset this represents flights that are cancelled:

  ```{r}
  flights |>
    group_by(dest) |>
    summarise(n_cancelled = sum(is.na(dep_time)))
  ```

### Exercises

1. How can you use `count()` to count the number of rows with a missing value for a given variable?
2. Expand the following calls to `count()` to instead use `group_by()`, `summarise()`, and `arrange()`:

    1. `flights |> count(dest, sort = TRUE)`
    2. `flights |> count(tailnum, wt = distance)`

## Numeric transformations

Transformation functions work well with `mutate()` because their output is the same length as the input.
The vast majority of transformation functions are already built into base R.
It's impractical to list them all, so this section will show the most useful.
As an example, while R provides all the trigonometric functions that you might dream of, I don't list them here because they're rarely needed for data science.

### Arithmetic and recycling rules

We introduced the basics of arithmetic (`+`, `-`, `*`, `/`, `^`) in Chapter \@ref(workflow-basics) and have used them a bunch since.
These functions don't need a huge amount of explanation because they do what you learned in grade school.
But we need to briefly talk about the **recycling rules** which determine what happens when the left and right hand sides have different lengths.
This is important for operations like `flights |> mutate(air_time = air_time / 60)` because there are 336,776 numbers on the left of `/` but only one on the right.

R handles mismatched lengths by **recycling**, or repeating, the short vector.
We can see this in operation more easily if we create some vectors outside of a data frame:

```{r}
x <- c(1, 2, 10, 20)
x / 5
# is shorthand for
x / c(5, 5, 5, 5)
```
Generally, you only want to recycle single numbers (i.e. vectors of length 1), but R will recycle any shorter length vector.
It will usually (but not always) warn you if the longer vector isn't a multiple of the shorter:

```{r}
x * c(1, 2)
x * c(1, 2, 3)
```
These recycling rules are also applied to logical comparisons (`==`, `<`, `<=`, `>`, `>=`, `!=`) and can lead to a surprising result if you accidentally use `==` instead of `%in%` and the data frame has an unfortunate number of rows.
For example, take this code which attempts to find all flights in January and February:

```{r}
flights |>
  filter(month == c(1, 2))
```
The code runs without error, but it doesn't return what you want.
2022-04-15 22:08:10 +08:00
Because of the recycling rules it finds flights in odd numbered rows that departed in January and flights in even numbered rows that departed in February.
And unfortunately there's no warning because `flights` has an even number of rows.

To protect you from this type of silent failure, most tidyverse functions use a stricter form of recycling that only recycles single values.
Unfortunately that doesn't help here, or in many other cases, because the key computation is performed by the base R function `==`, not `filter()`.
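
For example, here's a minimal sketch of what that stricter recycling looks like with `tibble()` (a toy illustration, not part of the flights analysis): a single value is recycled happily, but a length-2 vector is not.

```{r, error = TRUE}
# Recycling a single value works
tibble(x = 1:4, y = 1)

# Recycling a length-2 vector does not
tibble(x = 1:4, y = 1:2)
```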

### Minimum and maximum

2022-03-24 21:53:11 +08:00
The arithmetic functions work with pairs of variables.
Two closely related functions are `pmin()` and `pmax()`, which when given two or more variables will return the smallest or largest value in each row:

```{r}
df <- tribble(
  ~x, ~y,
  1,  3,
  5,  2,
  7,  NA,
)

df |>
  mutate(
    min = pmin(x, y, na.rm = TRUE),
    max = pmax(x, y, na.rm = TRUE)
  )
```
Note that these are different to the summary functions `min()` and `max()` which take multiple observations and return a single value.
You can tell that you've used the wrong form when all the minimums and all the maximums have the same value:

```{r}
df |>
  mutate(
    min = min(x, y, na.rm = TRUE),
    max = max(x, y, na.rm = TRUE)
  )
```

### Modular arithmetic

Modular arithmetic is the technical name for the type of math you did before you learned about real numbers, i.e. division that yields a whole number and a remainder.
In R, `%/%` does integer division and `%%` computes the remainder:

```{r}
1:10 %/% 3
1:10 %% 3
```
Modular arithmetic is handy for the flights dataset, because we can use it to unpack the `sched_dep_time` variable into `hour` and `minute`:

```{r}
flights |>
  mutate(
    hour = sched_dep_time %/% 100,
    minute = sched_dep_time %% 100,
    .keep = "used"
  )
```
We can combine that with the `mean(is.na(x))` trick from Section \@ref(logical-summaries) to see how the proportion of cancelled flights varies over the course of the day.
The results are shown in Figure \@ref(fig:prop-cancelled).

```{r prop-cancelled}
#| fig.cap: >
#|   A line plot with scheduled departure hour on the x-axis, and proportion
#|   of cancelled flights on the y-axis. Cancellations seem to accumulate
#|   over the course of the day until 8pm; very late flights are much
#|   less likely to be cancelled.
#| fig.alt: >
#|   A line plot showing how the proportion of cancelled flights changes over
#|   the course of the day. The proportion starts low at around 0.5% at
#|   6am, then steadily increases over the course of the day until peaking
#|   at 4% at 7pm. The proportion of cancelled flights then drops rapidly,
#|   getting down to around 1% by midnight.
flights |>
  group_by(hour = sched_dep_time %/% 100) |>
  summarise(prop_cancelled = mean(is.na(dep_time)), n = n()) |>
  filter(hour > 1) |>
  ggplot(aes(hour, prop_cancelled)) +
  geom_line(colour = "grey50") +
  geom_point(aes(size = n))
```

### Logarithms

Logarithms are an incredibly useful transformation for dealing with data that ranges across multiple orders of magnitude.
They also convert exponential growth to linear growth.
For example, take compounding interest --- the amount of money you have at `year + 1` is the amount of money you had at `year` multiplied by the interest rate.
That gives a formula like `money = starting * interest ^ year`:

```{r}
starting <- 100
interest <- 1.05
money <- tibble(
  year = 2000 + 1:50,
  money = starting * interest^(1:50)
)
```
If you plot this data, you'll get an exponential curve:

```{r}
ggplot(money, aes(year, money)) +
  geom_line()
```
Log transforming the y-axis gives a straight line:

```{r}
ggplot(money, aes(year, money)) +
  geom_line() +
  scale_y_log10()
```
This is a straight line because a little algebra reveals that `log(money) = log(starting) + n * log(interest)` (where `n` is the number of years of growth), which matches the pattern for a line, `y = m * x + b`.
This is a useful pattern: if you see a (roughly) straight line after log-transforming the y-axis, you know that there's underlying exponential growth.
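
We can check this numerically: on the log scale the money should grow by the same amount, `log(interest)`, every year (this quick check is just an illustration, not part of the original analysis).

```{r}
# Successive differences of log(money) are all equal to log(interest)
head(diff(log(money$money)))
log(interest)
```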
If you're log-transforming your data with dplyr you have a choice of three logarithms provided by base R: `log()` (the natural log, base e), `log2()` (base 2), and `log10()` (base 10).
I recommend using `log2()` or `log10()`.
`log2()` is easy to interpret because a difference of 1 on the log scale corresponds to doubling on the original scale and a difference of -1 corresponds to halving; whereas `log10()` is easy to back-transform because (e.g.) 3 on the log scale is 10\^3 = 1000 on the original scale.

The inverse of `log()` is `exp()`; to compute the inverse of `log2()` or `log10()` you'll need to use `2^` or `10^`.
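
For example, here's a small sketch (with made-up values) of the doubling interpretation and of back-transforming:

```{r}
# Each doubling adds 1 on the log2 scale
log2(c(1, 2, 4, 8))

# Back-transform with 2^ to return to the original scale
2^log2(8)
```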

### Rounding

Use `round(x)` to round a number to the nearest integer:

```{r}
round(123.456)
```
You can control the precision of the rounding with the second argument, `digits`.
`round(x, digits)` rounds to the nearest `10^-digits`, so `digits = 2` will round to the nearest 0.01.
This definition is useful because it implies `round(x, -3)` will round to the nearest thousand, which indeed it does:

```{r}
round(123.456, 2) # two digits
round(123.456, 1) # one digit
round(123.456, -1) # round to nearest ten
round(123.456, -2) # round to nearest hundred
```
There's one weirdness with `round()` that seems surprising at first glance:

```{r}
round(c(1.5, 2.5))
```
`round()` uses what's known as "round half to even" or Banker's rounding: if a number is halfway between two integers, it will be rounded to the **even** integer.
This is a good strategy because it keeps the rounding unbiased: half of all 0.5s are rounded up, and half are rounded down.
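
For instance, here's a small illustration (with made-up values) of how the ups and downs balance out:

```{r}
# Half of these .5s round up and half round down
round(c(0.5, 1.5, 2.5, 3.5, 4.5))
```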

`round()` is paired with `floor()` which always rounds down and `ceiling()` which always rounds up:

```{r}
x <- 123.456
floor(x)
ceiling(x)
```
These functions don't have a `digits` argument, so you can instead scale down, round, and then scale back up:

```{r}
# Round down to nearest two digits
floor(x / 0.01) * 0.01
# Round up to nearest two digits
ceiling(x / 0.01) * 0.01
```
You can use the same technique if you want to `round()` to a multiple of some other number:

```{r}
# Round to nearest multiple of 4
round(x / 4) * 4
2022-03-17 22:46:35 +08:00
2022-03-24 21:53:11 +08:00
# Round to nearest 0.25
round(x / 0.25) * 0.25
```

### Cumulative and rolling aggregates

Base R provides `cumsum()`, `cumprod()`, `cummin()`, and `cummax()` for running, or cumulative, sums, products, mins and maxes.
dplyr provides `cummean()` for cumulative means.
Cumulative sums tend to come up the most in practice:

```{r}
x <- 1:10
cumsum(x)
```
If you need more complex rolling or sliding aggregates, try the [slider](https://davisvaughan.github.io/slider/) package by Davis Vaughan.
The following example illustrates some of its features.

```{r}
library(slider)
# Same as a cumulative sum
slide_vec(x, sum, .before = Inf)
# Sum the current element and the one before it
slide_vec(x, sum, .before = 1)
# Sum the current element and the two before and after it
slide_vec(x, sum, .before = 2, .after = 2)
# Only compute if the window is complete
slide_vec(x, sum, .before = 2, .after = 2, .complete = TRUE)
```

### Exercises

1. Explain in words what each line of the code used to generate Figure \@ref(fig:prop-cancelled) does.
2. What trigonometric functions does R provide? Guess some names and look up the documentation. Do they use degrees or radians?
3. Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they're not really continuous numbers.
    You can see the basic problem in this plot: there's a gap between each hour.

    ```{r}
    flights |>
      filter(month == 1, day == 1) |>
      ggplot(aes(sched_dep_time, dep_delay)) +
      geom_point()
    ```

    Convert them to a more truthful representation of time (either fractional hours or minutes since midnight).

4.

## General transformations

The following sections describe some general transformations which are often used with numeric vectors, but can be applied to all other column types.

### Fill in missing values {#missing-values-numbers}

You can fill in missing values with dplyr's `coalesce()`:

```{r}
x <- c(1, NA, 5, NA, 10)
coalesce(x, 0)
```
`coalesce()` is vectorised, so you can find the non-missing values from a pair of vectors:

```{r}
y <- c(2, 3, 4, NA, 5)
coalesce(x, y)
```

### Ranks

dplyr provides a number of ranking functions inspired by SQL, but you should always start with `dplyr::min_rank()`.
It uses the typical method for dealing with ties, e.g. 1st, 2nd, 2nd, 4th.

```{r}
x <- c(1, 2, 2, 3, 4, NA)
min_rank(x)
```
Note that the smallest values get the lowest ranks; use `desc(x)` to give the largest values the smallest ranks:

```{r}
min_rank(desc(x))
```
If `min_rank()` doesn't do what you need, look at the variants `dplyr::row_number()`, `dplyr::dense_rank()`, `dplyr::percent_rank()`, and `dplyr::cume_dist()`.
See the documentation for details.

```{r}
df <- tibble(x = x)

df |>
  mutate(
    row_number = row_number(x),
    dense_rank = dense_rank(x),
    percent_rank = percent_rank(x),
    cume_dist = cume_dist(x)
  )
```
You can achieve many of the same results by picking the appropriate `ties.method` argument to base R's `rank()`; you'll probably also want to set `na.last = "keep"` to keep `NA`s as `NA`.
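
For example, a rough base R equivalent of `min_rank()` might look like this (just a sketch of the idea):

```{r}
# Ties get the smallest rank, and NAs stay NA
rank(x, ties.method = "min", na.last = "keep")
```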
`row_number()` can also be used without any arguments when inside a dplyr verb.
In this case, it'll give the number of the "current" row.
When combined with `%%` or `%/%` this can be a useful tool for dividing data into similarly sized groups:

```{r}
df <- tibble(x = runif(10))
df |>
  mutate(
    row0 = row_number() - 1,
    three_groups = row0 %/% (n() / 3),
    three_in_each_group = row0 %/% 3,
  )
```

### Offsets

`dplyr::lead()` and `dplyr::lag()` allow you to refer to the values just before or just after the "current" value.
They return a vector of the same length as the input, padded with `NA`s at the start or end:

```{r}
x <- c(2, 5, 11, 11, 19, 35)
lag(x)
lead(x)
```
- `x - lag(x)` gives you the difference between the current and previous value.

  ```{r}
  x - lag(x)
  ```

- `x == lag(x)` tells you when the current value is the same as the previous value; use `x != lag(x)` to find when it changes.
  This is often useful combined with the cumulative tricks described in Section \@ref(cumulative-tricks).

  ```{r}
  x == lag(x)
  ```

You can lead or lag by more than one position by using the second argument, `n`.
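
For example, a quick sketch:

```{r}
# Look two positions back instead of one
lag(x, n = 2)
```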

### Exercises

1. Find the 10 most delayed flights using a ranking function.
How do you want to handle ties?
Carefully read the documentation for `min_rank()`.
2. Which plane (`tailnum`) has the worst on-time record?
3. What time of day should you fly if you want to avoid delays as much as possible?
4. What does `flights |> group_by(dest) |> filter(row_number() < 4)` do?
   What does `flights |> group_by(dest) |> filter(row_number(dep_delay) < 4)` do?
5. For each destination, compute the total minutes of delay.
For each flight, compute the proportion of the total delay for its destination.
6. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave.
    Using `lag()`, explore how the average flight delay for an hour is related to the average delay for the previous hour.

    ```{r, results = FALSE}
    flights |>
      mutate(hour = dep_time %/% 100) |>
      group_by(year, month, day, hour) |>
      summarise(
        dep_delay = mean(dep_delay, na.rm = TRUE),
        n = n(),
        .groups = "drop"
      ) |>
      filter(n > 5)
    ```
7. Look at each destination.
Can you find flights that are suspiciously fast?
(i.e. flights that represent a potential data entry error).
Compute the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
8. Find all destinations that are flown by at least two carriers.
Use those destinations to come up with a relative ranking of the carriers based on their performance for the same destination.

## Summaries

Just using the counts, means, and sums that we've introduced already can get you a long way, but R provides many other useful summary functions.
Here is a selection that you might find useful.

### Center

So far, we've mostly used `mean()` to summarize the center of a vector of values.
Because the mean is the sum divided by the count, it is sensitive to even just a few unusually high or low values.
An alternative is to use the `median()` which finds a value where 50% of the data is above it and 50% is below it.
Depending on the shape of the distribution of the variable you're interested in, mean or median might be a better measure of center.
For example, for symmetric distributions we generally report the mean while for skewed distributions we usually report the median.
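
Here's a tiny sketch (with made-up numbers) of that sensitivity: a single large value pulls the mean up but barely moves the median.

```{r}
x <- c(1, 2, 3, 4, 100)
mean(x)
median(x)
```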
Figure \@ref(fig:mean-vs-median) compares the daily mean vs median departure delay.
You can see that the median delay is always smaller than the mean delay.
This is because there are a few very large delays, but flights never leave much earlier.
Which is "better"?
It depends on the question you're asking --- I think the `mean()` is probably a better reflection of the total suffering, but the `median()` is closer to the typical experience.

```{r mean-vs-median}
#| fig.cap: >
#|   Daily mean vs. median departure delay.
flights |>
  group_by(year, month, day) |>
  summarise(
    mean = mean(dep_delay, na.rm = TRUE),
    median = median(dep_delay, na.rm = TRUE),
    n = n(),
    .groups = "drop"
  ) |>
  ggplot(aes(mean, median)) +
  geom_abline(slope = 1, intercept = 0, colour = "white", size = 2) +
  geom_point()
```
Don't forget what you learned in Section \@ref(sample-size): whenever creating numerical summaries, it's a good idea to include the number of observations in each group.

You might also wonder about the "mode", the most common value in the dataset.
Generally, the mode works well for very simple cases (which is why you might have learned about it in school), but it doesn't work well for many real datasets: either there are multiple most common values, or (because the values are all slightly different, perhaps due to floating point issues) there's no single most common value.
If you do need a mode-like summary, you might use something like the hdrcde package: <https://pkg.robjhyndman.com/hdrcde/>.

### Minimum, maximum, and quantiles {#min-max-summary}

Quantiles are a generalization of the median.
For example, `quantile(x, 0.25)` will find a value of `x` that is greater than 25% of the values, and less than the remaining 75%.
`min()` and `max()` are like the 0% and 100% quantiles: they're the smallest and biggest numbers.
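
For example, a quick sketch with made-up values:

```{r}
x <- c(0, 25, 50, 75, 100)
quantile(x, 0.25)
quantile(x, c(0, 0.5, 1))  # same as min(x), median(x), max(x)
```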
```{r}
# When do the first and last flights leave each day?
flights |>
  group_by(year, month, day) |>
  summarise(
    first = min(dep_time, na.rm = TRUE),
    last = max(dep_time, na.rm = TRUE)
  )
```
Using the median and the 95% quantile is common in performance monitoring:
`median()` shows you what the (bare) majority of people experience, and the 95% quantile shows you the worst case, excluding 5% of outliers.

```{r}
flights |>
  group_by(year, month, day) |>
  summarise(
    median = median(dep_delay, na.rm = TRUE),
    q95 = quantile(dep_delay, 0.95, na.rm = TRUE),
    .groups = "drop"
  )
```

### Spread

The root mean squared deviation, or standard deviation `sd(x)`, is the standard measure of spread.
It's the square root of the mean squared distance to the mean.
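
In code, a rough sketch of that definition looks like this (note that `sd()` actually uses `n - 1` in the denominator, the sample standard deviation, rather than `n`):

```{r}
x <- c(1, 5, 2, 10, 8)
sd(x)
sqrt(sum((x - mean(x))^2) / (length(x) - 1))
```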

```{r}
# Why is distance to some destinations more variable than to others?
flights |>
  group_by(origin, dest) |>
  summarise(distance_sd = sd(distance), n = n()) |>
  filter(distance_sd > 0)

# Did it move?
flights |>
  filter(dest == "EGE") |>
  select(time_hour, dest, distance, origin) |>
  ggplot(aes(time_hour, distance, colour = origin)) +
  geom_point()
```
EGE is Eagle County Regional Airport (<https://en.wikipedia.org/wiki/Eagle_County_Regional_Airport>), a seasonal airport; nothing on Wikipedia suggests that it moved in 2013.

The interquartile range, `IQR(x)`, is a simple alternative that is useful for skewed data or data with outliers.
IQR is `quantile(x, 0.75) - quantile(x, 0.25)`.
It gives you the range that the middle 50% of the data lies within.
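
For example, a quick sketch of that equivalence (`unname()` just drops the "75%" label that `quantile()` attaches):

```{r}
x <- c(1, 3, 5, 7, 100)
IQR(x)
unname(quantile(x, 0.75) - quantile(x, 0.25))
```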
### Distributions
It's worth remembering that all of these summary statistics are a way of reducing the distribution down to a single number.
This means that they're fundamentally reductive, and if you pick the wrong summary, you can easily miss important differences between groups.
That's why it's always a good idea to visualize the distribution before committing to your summary statistics.
The departure delay histogram is highly skewed, suggesting that the median would be a better summary of the "middle" than the mean.

```{r}
flights |>
  ggplot(aes(dep_delay)) +
  geom_histogram(binwidth = 15)

flights |>
  filter(dep_delay < 360) |>
  ggplot(aes(dep_delay)) +
  geom_histogram(binwidth = 5)
```
It's also good to check that the individual distributions look similar to the overall.
The following plot draws a frequency polygon that suggests the distribution of departure delays looks roughly similar for each day.

```{r}
flights |>
  filter(dep_delay < 360) |>
  ggplot(aes(dep_delay, group = interaction(day, month))) +
  geom_freqpoly(binwidth = 15, alpha = 1/5)
```
Don't be afraid to explore your own custom summaries that are tailor made for the situation you're working with.
In this case, that might mean separately summarizing the flights that were delayed and the flights that left early.
Or, given that the values are so heavily skewed, you might try a log transformation to see whether it reveals clearer patterns.

### Positions

There's one final type of summary that's useful for numeric vectors, but also works with every other type of value: extracting a value at a specific position.
Base R provides a powerful tool for extracting subsets of vectors called `[`.
This book doesn't cover `[` until Section \@ref(vector-subsetting) so for now we'll introduce three specialized functions that are useful inside of `summarise()` if you want to extract values at a specified position: `first()`, `last()`, and `nth()`.
For example, we can find the first and last departure for each day:

```{r}
flights |>
  group_by(year, month, day) |>
  summarise(
    first_dep = first(dep_time),
    last_dep = last(dep_time)
  )
```
Compared to `[`, these functions allow you to set a `default` value if the requested position doesn't exist (e.g. you're trying to get the 3rd element from a group that only has two elements) and you can use the `order_by` argument if you want to base your ordering on some variable, rather than the order in which the rows appear.
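
For instance, a small sketch with a toy vector:

```{r}
y <- c(10, 20, 30)
nth(y, 5)
nth(y, 5, default = 0)
```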

Extracting values at positions is complementary to filtering on ranks.
Filtering gives you all variables, with each observation in a separate row:

```{r}
flights |>
  group_by(year, month, day) |>
  mutate(r = min_rank(desc(sched_dep_time))) |>
  filter(r %in% c(1, max(r)))
```

### With `mutate()`

As the names suggest, the summary functions are typically paired with `summarise()`.
However, because of the recycling rules we discussed in Section \@ref(scalars-and-recycling-rules), they can also be usefully paired with `mutate()`, particularly when you want to do some sort of group standardization.
For example:

- `x / sum(x)` calculates the proportion of a total.
- `(x - mean(x)) / sd(x)` computes a Z-score (standardized to mean 0 and sd 1).
- `x / first(x)` computes an index based on the first observation, as shown in the sketch below.
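
Here's a minimal sketch of these ideas in action (the grouping variable and values are made up purely for illustration):

```{r}
sales <- tibble(
  store = c("a", "a", "a", "b", "b", "b"),
  revenue = c(100, 200, 700, 50, 100, 150)
)

sales |>
  group_by(store) |>
  mutate(
    prop = revenue / sum(revenue),                # proportion of the store's total
    z = (revenue - mean(revenue)) / sd(revenue),  # z-score within the store
    index = revenue / first(revenue)              # indexed to the first observation
  )
```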

### Exercises

1. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.
    Consider the following scenarios:

    - A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.
    - A flight is always 10 minutes late.
    - A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.
    - 99% of the time a flight is on time.
      1% of the time it's 2 hours late.

    Which do you think is more important: arrival delay or departure delay?