It's surprising how much data science you can do with just counts and a little basic arithmetic, so dplyr strives to make counting as easy as possible with `count()`.
This function is great for quick exploration and checks during analysis:
(Despite the advice in Chapter \@ref(code-style), I usually put `count()` on a single line because I'm usually using it at the console for a quick check that my calculation is working as expected.)
Transformation functions work well with `mutate()` because their output is the same length as the input.
The vast majority of transformation functions are already built into base R.
It's impractical to list them all, so this section will show the most useful ones.
As an example, while R provides all the trigonometric functions that you might dream of, I don't list them here because they're rarely needed for data science.
These functions don't need a huge amount of explanation because they do what you learned in grade school.
But we need to briefly talk about the **recycling rules** which determine what happens when the left and right hand sides have different lengths.
This is important for operations like `flights |> mutate(air_time = air_time / 60)` because there are 336,776 numbers on the left of `/` but only one on the right.
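We can see recycling in action more easily outside of a data frame, with a couple of short vectors (the values here are purely illustrative):

```{r}
x <- c(1, 2, 10, 20)

# The single value 5 is recycled to the length of x
x / 5

# So is the shorter vector c(5, 50), which is repeated twice
x / c(5, 50)
```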
These recycling rules are also applied to logical comparisons (`==`, `<`, `<=`, `>`, `>=`, `!=`) and can lead to a surprising result if you accidentally use `==` instead of `%in%` and the data frame has an unfortunate number of rows.
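For example, imagine trying to keep all the flights that departed in January or February with a filter like this sketch:

```{r}
# Looks like it keeps flights from January or February, but `==` recycles
# c(1, 2) along the rows; `%in%` is the tool you actually want here.
flights |> filter(month == c(1, 2))
```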
Because of the recycling rules it finds flights in odd numbered rows that departed in January and flights in even numbered rows that departed in February.
And unfortunately there's no warning because `flights` has an even number of rows.
Modular arithmetic is the technical name for the type of math you did before you learned about real numbers, i.e. division that yields a whole number and a remainder.
In R, `%/%` does integer division and `%%` computes the remainder:
We can combine that with the `mean(is.na(x))` trick from Section \@ref(logical-summaries) to see how the proportion of cancelled flights varies over the course of the day.
The results are shown in Figure \@ref(fig:prop-cancelled).
```{r prop-cancelled}
#| fig.cap: >
#|   A line plot with scheduled departure hour on the x-axis, and proportion
#|   of cancelled flights on the y-axis. Cancellations seem to accumulate
#|   over the course of the day until 8pm; very late flights are much
#|   less likely to be cancelled.
#| fig.alt: >
#|   A line plot showing how the proportion of cancelled flights changes over
#|   the course of the day. The proportion starts low at around 0.5% at
#|   6am, then steadily increases over the course of the day until peaking
#|   at 4% at 7pm. The proportion of cancelled flights then drops rapidly
#|   for the very latest flights of the day.
# Cancelled flights have a missing dep_time, so mean(is.na(dep_time)) gives the proportion cancelled
flights |> 
  group_by(hour = sched_dep_time %/% 100) |> 
  summarise(prop_cancelled = mean(is.na(dep_time)), n = n()) |> 
  filter(hour > 1) |> 
  ggplot(aes(x = hour, y = prop_cancelled)) +
  geom_line(color = "grey50") + 
  geom_point(aes(size = n))
```
For example, take compounding interest --- the amount of money you have at `year + 1` is the amount of money you had at `year` multiplied by the interest rate.
That gives a formula like `money = starting * interest ^ year`:
This is a straight line because a little algebra reveals that `log(money) = log(starting) + year * log(interest)`, which matches the pattern for a line, `y = m * x + b`.
This is a useful pattern: if you see a (roughly) straight line after log-transforming the y-axis, you know that there's underlying exponential growth.
If you're log-transforming your data with dplyr you have a choice of three logarithms provided by base R: `log()` (the natural log, base e), `log2()` (base 2), and `log10()` (base 10).
`log2()` is easy to interpret because a difference of 1 on the log scale corresponds to doubling on the original scale and a difference of -1 corresponds to halving; whereas `log10()` is easy to back-transform because (e.g.) 3 on the log scale is 10\^3 = 1000 on the original scale.
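For example, with some made-up values:

```{r}
# A difference of 1 on the log2 scale is a doubling on the original scale
log2(c(50, 100, 200))

# log10() is easy to back-transform in your head: 2 is 100, 3 is 1000, ...
log10(c(100, 1000))

# And exponentiation with the matching base undoes the log
2^log2(100)
10^log10(1000)
```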
`round()` uses what's known as "round half to even" or Banker's rounding: if a number is half way between two integers, it will be rounded to the **even** integer.
This is a good strategy because it keeps the rounding unbiased: half of all 0.5s are rounded up, and half are rounded down.
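You can see this in action by rounding a few halves (illustrative values):

```{r}
# Halves go to the nearest even integer: 0.5 rounds down to 0,
# 1.5 and 2.5 both round to 2, and 3.5 rounds up to 4
round(c(0.5, 1.5, 2.5, 3.5))
```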
If `min_rank()` doesn't do what you need, look at the variants `dplyr::row_number()`, `dplyr::dense_rank()`, `dplyr::percent_rank()`, and `dplyr::cume_dist()`.
You can achieve many of the same results by picking the appropriate `ties.method` argument to base R's `rank()`; you'll probably also want to set `na.last = "keep"` to keep `NA`s as `NA`.
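To get a feel for the differences, it can help to compare them all on a small vector that has ties and a missing value (made up for illustration):

```{r}
x <- c(10, 20, 20, 30, NA)

min_rank(x)     # ties share the smallest rank, and a gap follows
row_number(x)   # ties are broken by position, so every rank is distinct
dense_rank(x)   # ties share a rank and no gap follows
percent_rank(x) # min_rank() rescaled to run from 0 to 1
cume_dist(x)    # proportion of values less than or equal to each value
```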
6. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave.
An alternative is to use `median()`, which finds a value that lies in the "middle" of the vector, i.e. 50% of the values are above it and 50% are below it.
For example, for symmetric distributions we generally report the mean while for skewed distributions we usually report the median.
Figure \@ref(fig:mean-vs-median) compares the mean vs. the median when looking at the hourly departure delay.
The median delay is always smaller than the mean delay because flights sometimes leave multiple hours late, but never leave multiple hours early.
You might also wonder about the **mode**, or the most common value.
This is a summary that only works well for very simple cases (which is why you might have learned about it in high school), but it doesn't work well for many real datasets.
If the data is discrete, there may be multiple most common values, and if the data is continuous, there might be no most common value because every value is ever so slightly different.
For these reasons, the mode tends not to be used by statisticians and there's no mode function included in base R[^numbers-1].
What if you're interested in locations other than the center?
`min()` and `max()` will give you the smallest and largest values.
Another powerful tool is `quantile()` which is a generalization of the median: `quantile(x, 0.25)` will find a value of `x` that is greater than 25% of the values, `quantile(x, 0.5)` is equivalent to the median, and `quantile(x, 0.95)` will find a value that's greater than 95% of the values.
For the flights data, you might want to look at the 95% quantile of delays rather than the maximum, because it will ignore the 5% of most delayed flights which can be quite extreme.
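For example, something along these lines (a sketch) contrasts the two for each day:

```{r}
flights |> 
  group_by(year, month, day) |> 
  summarise(
    max = max(dep_delay, na.rm = TRUE),
    q95 = quantile(dep_delay, 0.95, na.rm = TRUE),
    .groups = "drop"
  )
```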
Sometimes you're not so interested in where the bulk of the data lies, but in how spread out it is.
Two commonly used summaries are the standard deviation, `sd(x)`, and the inter-quartile range, `IQR()`.
I won't explain `sd()` here since you're probably already familiar with it, but `IQR()` might be new --- it's `quantile(x, 0.75) - quantile(x, 0.25)` and gives you the range that contains the middle 50% of the data.
We can use this to reveal a small oddity in the flights data.
You might expect the spread of the distance between origin and destination to be zero, since airports are always in the same place.
But the code below makes it look like one airport, [EGE](https://en.wikipedia.org/wiki/Eagle_County_Regional_Airport), might have moved.
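One way to check is to compute the spread of `distance` for every origin-destination pair and keep the pairs where it isn't zero; something like this sketch:

```{r}
flights |> 
  group_by(origin, dest) |> 
  summarise(
    distance_iqr = IQR(distance), 
    n = n(),
    .groups = "drop"
  ) |> 
  filter(distance_iqr > 0)
```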
Don't be afraid to explore your own custom summaries specifically tailored for the data that you're working with.
In this case, that might mean separately summarizing the flights that left early vs the flights that left late, or given that the values are so heavily skewed, you might try a log-transformation.
Finally, don't forget what you learned in Section \@ref(sample-size): whenever creating numerical summaries, it's a good idea to include the number of observations in each group.
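For example, here's a sketch of one such tailored summary, which treats early and late departures separately and records the group sizes (the column names are just suggestions):

```{r}
flights |> 
  group_by(dest) |> 
  summarise(
    mean_early = mean(dep_delay[dep_delay < 0], na.rm = TRUE),
    mean_late = mean(dep_delay[dep_delay > 0], na.rm = TRUE),
    n = n()
  )
```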
There's one final type of summary that's useful for numeric vectors, but also works with every other type of value: extracting a value at a specific position.
You can do this with the base R `[` function, but we won't cover it until Section \@ref(vector-subsetting), because it's a very powerful and general function.
For now we'll introduce three specialized functions that you can use to extract values at a specified position: `first(x)`, `last(x)`, and `nth(x, n)`.
(These functions currently lack an `na.rm` argument but will hopefully be fixed by the time you read this book: <https://github.com/tidyverse/dplyr/issues/6242>).
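For example, a sketch that pulls out a few departure times for each day (cancelled flights have a missing `dep_time`, so some of these values can be `NA`):

```{r}
flights |> 
  group_by(year, month, day) |> 
  summarise(
    first_dep = first(dep_time), 
    fifth_dep = nth(dep_time, 5),
    last_dep = last(dep_time),
    .groups = "drop"
  )
```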
If you're familiar with `[`, you might wonder if you ever need these functions.
I think there are two main reasons: the `default` argument and the `order_by` argument.
`default` allows you to set a default value that's used if the requested position doesn't exist, e.g. when you're trying to get the 3rd element from a two-element group.
`order_by` lets you locally override the existing ordering of the rows, so you can extract a value at a position with respect to a different variable, without first having to `arrange()` the rows.
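For example, with a couple of made-up vectors:

```{r}
x <- c("a", "b", "c")
y <- c(3, 1, 2)

last(x)                # "c", the last element in the current order
last(x, order_by = y)  # "a", the element with the largest y
```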
As the names suggest, the summary functions are typically paired with `summarise()`.
However, because of the recycling rules we discussed in Section \@ref(scalars-and-recycling-rules), they can also be usefully paired with `mutate()`, particularly when you want to do some sort of group standardization.
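For example, here's a sketch of one such pattern: comparing each flight's delay to the typical delay for its destination (the column names are just suggestions):

```{r}
flights |> 
  group_by(dest) |> 
  mutate(
    mean_delay = mean(dep_delay, na.rm = TRUE),
    delay_vs_dest = dep_delay - mean_delay
  ) |> 
  select(dest, dep_delay, mean_delay, delay_vs_dest)
```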