# Many models
In this chapter you're going to learn two powerful ideas that help you work with large numbers of models with ease:

1.  You'll use list-columns to store arbitrary data structures in a data frame.
    For example, this will allow you to have a column of your data frame that
    consists of linear models.

1.  You'll turn models into tidy data with the broom package, by David Robinson.
    This is a powerful technique for working with large numbers of models
    because once you have tidy data, you can apply many of the techniques that
    you've learned about in earlier chapters of this book.

These ideas are particularly powerful in conjunction with the ideas of functional programming, so make sure you've read [iteration] and [handling hierarchy] before starting this chapter.

We'll start by diving into a motivating example using data about life expectancy. It's a small dataset but it illustrates how important modelling can be for improving your visualisation. The following sections dive into more detail about the individual techniques: list-columns, and nesting & unnesting.

This chapter focusses on models generated from subsets of your data. This is a powerful technique to help you understand your data, and is often a key step on the way to creating a single complex model that combines the information from all subsets. In the next chapter, you'll learn about another family of techniques for generating multiple models: resampling. Resampling is a powerful tool to help you understand the inferential properties of a model.
### Prerequisites
Working with many models requires a combination of packages that you're familiar with from data exploration, wrangling, programming, and modelling.
```{r setup, message = FALSE}
# Standard data manipulation and visualisation
library(dplyr)
library(ggplot2)

# Tools for working with models
library(modelr)

# Tools for working with lots of models
library(purrr)
library(tidyr)
```
## gapminder
To motivate the power of many simple models, we're going to look into the "Gapminder" data. This data was popularised by Hans Rosling, a Swedish doctor and statistician. If you've never heard of him, I strongly recommend that you stop reading this chapter and go watch one of his videos. He is a fantastic data presenter! A good place to start is this short video filmed in conjunction with the BBC: <https://www.youtube.com/watch?v=jbkSRLYSojo>.
The gapminder data summarises the progression of countries over time, looking at statistics like life expectancy and GDP. The data is easy to access in R, thanks to the work of Jenny Bryan who created the gapminder package:
```{r}
library(gapminder)
gapminder
```
In this case study, we're going to focus on just three variables to answer the question "How does life expectancy (`lifeExp`) change over time (`year`) for each country (`country`)?". A good place to start is with a plot:
```{r}
gapminder %>%
  ggplot(aes(year, lifeExp, group = country)) +
    geom_line(alpha = 1/3)
```
This is a small dataset: it only has around 1,700 observations and three variables. But it's still hard to see what's going on in this plot! Overall, it looks like life expectancy has been steadily improving. However, if you look closely, you might notice some countries that don't follow this pattern. How can we make those countries easier to see?
One way is to use the same approach as in the last chapter: there's a strong signal (overall linear growth) that makes it hard to see the smaller pattern. We'll tease these factors apart by fitting a model with a linear trend. The model captures steady growth over time, and the residuals will show what's left.

You already know how to do that for a single country:
```{r, out.width = "33%", fig.asp = 1, fig.width = 3, fig.show = "hold"}
2016-06-14 22:02:08 +08:00
nz <- filter(gapminder, country == "New Zealand")
2016-06-15 23:16:21 +08:00
nz %>%
ggplot(aes(year, lifeExp)) +
geom_line() +
ggtitle("Full data = ")
2016-06-14 22:02:08 +08:00
2016-06-15 23:16:21 +08:00
nz_mod <- lm(lifeExp ~ year, data = nz)
2016-06-14 22:02:08 +08:00
nz %>%
add_predictions(pred = nz_mod) %>%
ggplot(aes(year, pred)) +
2016-06-15 23:16:21 +08:00
geom_line() +
ggtitle("Linear trend + ")
2016-06-14 22:02:08 +08:00
nz %>%
add_residuals(resid = nz_mod) %>%
ggplot(aes(year, resid)) +
2016-06-15 23:16:21 +08:00
geom_hline(yintercept = 0, colour = "white", size = 3) +
geom_line() +
ggtitle("Remaining pattern")
2016-06-14 22:02:08 +08:00
```
But how can we easily fit that model to every country?
### Nested data
You could imagine copying and pasting that code multiple times. But you've already learned a better way! Extract out the common code with a function and repeat it using a map function from purrr.

This problem is structured a little differently to what you've seen before. Instead of repeating an action for each variable, we want to repeat an action for each country, a subset of rows. To do that, we need a new data structure: the __nested data frame__. To create a nested data frame we start with a grouped data frame, and "nest" it:
```{r}
by_country <- gapminder %>%
  group_by(country, continent) %>%
  nest()

by_country
```
This creates a data frame that has one row per group (per country), and a rather unusual column: `data`. `data` is a list of data frames. This seems like a crazy idea: we have a data frame with a column that is a list of other data frames! I'll explain shortly why I think this is a good idea.

The `data` column is a little tricky to look at because it's a moderately complicated list (we're still working on better tools to explore these objects). But if you look at one of the elements of the `data` column, you'll see that it contains all the data for that country (Afghanistan in this case).
```{r}
by_country$data[[1]]
```
Note the difference between a standard grouped data frame and a nested data frame: in a grouped data frame, each row is an observation; in a nested data frame, each row is a group. Another way to think about this nested dataset is that an observation is now the complete time course for a country, rather than a single point in time.
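To make this concrete, here's a quick check of the row counts (using the objects created above):

```{r}
# In `gapminder`, each row is one country-year observation; in `by_country`,
# each row is one country, with its full time course stored in `data`
nrow(gapminder)
nrow(by_country)
```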
### List-columns
Now that we have our nested data frame, we're in a good position to fit some models because we can think about transforming each data frame into a model. Transforming each element of a list is the job of `purrr::map()`:

```{r}
country_model <- function(df) {
  lm(lifeExp ~ year, data = df)
}

models <- map(by_country$data, country_model)
```
However, rather than leaving the list of models as a free-floating object, I think it's better to store it as a variable in the `by_country` data frame. This is why I think list-columns are such a good idea. In the course of working with these countries, we are going to have lots of lists where we have one element per country. So why not store them all together in one data frame?

In other words, instead of creating a new object in the global environment, we're going to create a new variable in the `by_country` data frame. That's a job for `dplyr::mutate()`:
```{r}
by_country <- by_country %>%
  mutate(model = map(data, country_model))

by_country
```
This has a big advantage: because all the related objects are stored together, you don't need to manually keep them in sync when you filter or arrange. dplyr takes care of that for you:
```{r}
by_country %>% filter(continent == "Europe")
by_country %>% arrange(continent, country)
```
2016-06-15 23:16:21 +08:00
If your list of data frames and list of models where separate objects, you have to remember that whenever you re-order or subset one vector, you need to re-order or subset all the others in order to keep them in sync. If you forget, your code will continue to work, but it will give the wrong answer!
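To make the risk concrete, here's a minimal sketch of that fragile approach going out of sync (reusing the `models` list created earlier):

```{r}
# The data frames and the models live in separate, parallel objects that
# only line up by position
models <- map(by_country$data, country_model)
europe <- by_country %>% filter(continent == "Europe")

# `europe` has been subsetted but `models` hasn't, so the pairing is now
# wrong, and nothing warns you about it
length(models)
nrow(europe)
```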
### Unnesting
Previously we computed the residuals of a single model with a single dataset. Now we have 142 data frames and 142 models. To compute the residuals, we need to call `add_residuals()` with each model-data pair:
```{r}
by_country <- by_country %>%
  mutate(resids = map2(data, model, ~ add_residuals(.x, resid = .y)))

by_country
```
But how can you plot a list of data frames? Instead of struggling to answer that question, let's turn the list of data frames back into a regular data frame. Previously we used `nest()` to turn a regular data frame into a nested data frame; now we do the opposite with `unnest()`:
```{r}
resids <- unnest(by_country, resids)
resids
```
Then we can plot the residuals. Facetting by continent is particularly revealing:
```{r}
resids %>%
  ggplot(aes(year, resid)) +
    geom_line(aes(group = country), alpha = 1 / 3) +
    geom_smooth(se = FALSE)

resids %>%
  ggplot(aes(year, resid, group = country)) +
    geom_line(alpha = 1 / 3) +
    facet_wrap(~continent)
```
It looks like overall we've missed some mild quadratic pattern. There's also something interesting going on in Africa: we see some very large residuals, which suggests our model isn't fitting so well there. We'll explore that more in the next section, attacking it from a slightly different angle.
### Model quality
Instead of looking at the residuals from the model, we could look at some general measurements of model quality. You learned how to compute some specific measures in the previous chapter. Here we'll show a different approach using the broom package.
The broom package provides three general tools for turning models into tidy data frames:

1.  `broom::glance(model)` returns a row for each model. Each column gives a
    model summary: either a measure of model quality, or complexity, or a
    combination of the two.

1.  `broom::tidy(model)` returns a row for each coefficient in the model. Each
    column gives information about the estimate or its variability.

1.  `broom::augment(model, data)` returns a row for each row in `data`, adding
    extra values like residuals, and influence statistics.
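For example, here's a minimal sketch of `tidy()` and `augment()` applied to the `nz_mod` model fitted earlier (the exact columns depend on your version of broom):

```{r}
# One row per coefficient (intercept and year), with estimates and standard errors
broom::tidy(nz_mod)

# One row per observation in `nz`, with fitted values, residuals, and other
# per-observation statistics appended
broom::augment(nz_mod, nz)
```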
Here we'll use `broom::glance()` to extract some model quality metrics. If we apply it to a model, we get a data frame with a single row:
```{r}
broom::glance(nz_mod)
```
We can use `mutate()` and `unnest()` to create a data frame with a row for each country:
```{r}
by_country %>%
  mutate(glance = map(model, broom::glance)) %>%
  unnest(glance)
```
This isn't quite the output we want, because it still includes all the list-columns. This is the default behaviour when `unnest()` works on single-row data frames. To suppress these columns, we use `.drop = TRUE`:
```{r}
glance <- by_country %>%
  mutate(glance = map(model, broom::glance)) %>%
  unnest(glance, .drop = TRUE)

glance
```
With this data frame in hand, we can start to look for models that don't fit well:
```{r}
glance %>% arrange(r.squared)
```
The worst models all appear to be in Africa. Let's double check that with a plot. Here we have a relatively small number of observations and a discrete variable, so `geom_jitter()` is effective:
```{r}
glance %>%
  ggplot(aes(continent, r.squared)) +
    geom_jitter(width = 0.5)
```
We could pull out the countries with particularly bad $R^2$ and plot the data:
```{r}
bad_fit <- filter(glance, r.squared < 0.25)
gapminder %>%
  semi_join(bad_fit, by = "country") %>%
  ggplot(aes(year, lifeExp, colour = country)) +
    geom_line()
```
We see two main effects here: the tragedies of the HIV/AIDS epidemic, and the Rwandan genocide.
### Exercises
1.  A linear trend seems to be slightly too simple for the overall trend.
    Can you do better with a natural spline with two or three degrees of
    freedom?

1.  Explore other methods for visualising the distribution of $R^2$ per
    continent. You might want to try `ggbeeswarm`, which provides similar
    methods for avoiding overlaps as jitter, but with less randomness.

1.  To create the last plot (showing the data for the countries with the
    worst model fits), we needed two steps: we created a data frame with
    one row per country and then semi-joined it to the original dataset.
    It's possible to avoid this join if we use `unnest()` instead of
    `unnest(.drop = TRUE)`. How?
## List-columns
The idea of a list column is powerful. The contract of a data frame is that it's a named list of vectors, where each vector has the same length. A list is a vector, and a list can contain anything, so you can put anything in a list-column of a data frame.
Generally, you should make sure that your list columns are homogeneous: each element should contain the same type of thing. There are no checks to make sure this is true, but if you use purrr and remember what you've learned about type-stable functions you should find it happens naturally.
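For example, nothing stops you from mixing types in a list-column (a quick sketch; how each element is summarised when printed depends on your version of tibble):

```{r}
# No checks here: every element of `x` is a different kind of thing.
# This is legal, but most downstream code won't know what to do with it.
data_frame(x = list(1:3, "a", lm(mpg ~ wt, data = mtcars)))
```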
### Compared to base R
List columns are possible in base R, but conventions in `data.frame()` make creating and printing them a bit of a headache:
```{r, error = TRUE}
# Doesn't work
data.frame(x = list(1:2, 3:5))
# Works, but doesn't print particularly well
data.frame(x = I(list(1:2, 3:5)), y = c("1, 2", "3, 4, 5"))
```
The functions in tibble don't have this problem:
```{r}
data_frame(x = list(1:2, 3:5), y = c("1, 2", "3, 4, 5"))
```
### With `mutate()` and `summarise()`
You might find yourself creating list-columns with `mutate()` and `summarise()`. For example:
```{r}
data_frame(x = c("a,b,c", "d,e,f,g")) %>%
  mutate(x = stringr::str_split(x, ","))
```
`unnest()` knows how to handle these lists of vectors as well as lists of data frames.
```{r}
data_frame(x = c("a,b,c", "d,e,f,g")) %>%
  mutate(x = stringr::str_split(x, ",")) %>%
  unnest()
```
(If you find yourself using this pattern a lot, make sure to check out `separate_rows()`.)
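For comparison, here's a minimal sketch of the same split done in one step with `separate_rows()` (assuming your version of tidyr is recent enough to include it):

```{r}
# separate_rows() splits the delimited strings and unnests in a single call
data_frame(x = c("a,b,c", "d,e,f,g")) %>%
  separate_rows(x, sep = ",")
```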
The same idea is useful with summary functions like `quantile()` that return a vector of values:
```{r}
mtcars %>%
  group_by(cyl) %>%
  summarise(q = list(quantile(mpg))) %>%
  print() %>%
  unnest()
```
However, you'll probably also want to keep track of which output corresponds to which input:
```{r}
probs <- c(0.01, 0.25, 0.5, 0.75, 0.99)
mtcars %>%
  group_by(cyl) %>%
  summarise(p = list(probs), q = list(quantile(mpg, probs))) %>%
  unnest()
```
And even just `list()` can be a useful summary function. It is a summary function because it takes a vector of length n and returns a vector (a list) of length 1:
```{r}
mtcars %>% group_by(cyl) %>% summarise(mpg = list(mpg))
```
This is an effective replacement for `split()` in base R (but instead of working with vectors, it works with data frames).
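For comparison, here's roughly what the base R equivalent looks like; `split()` gives you a plain named list of vectors that lives outside of any data frame:

```{r}
# One element per value of cyl, disconnected from the other columns of mtcars
split(mtcars$mpg, mtcars$cyl)
```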
### Exercises
## Nesting and unnesting
More details about `unnest()` options.