# Model basics
## Introduction
The goal of a model is to provide a simple low-dimensional summary of a dataset. Ideally, the fitted model will capture true "signals" (i.e. patterns generated by the phenomenon of interest), and ignore "noise" (i.e. random variation that you're not interested in).
This chapter of the book is unique because it only uses simulated datasets. These datasets are very simple, and not at all interesting, but they will help you understand the essence of modelling before you apply the same techniques to real data in the next chapter.
A fitted model is a function: you give it a set of inputs (often called the predictors), and it returns a value (typically called the prediction). Fitting a model takes place in two steps:
1. You define a __family of models__ that you think will capture an
interesting pattern in your data. This class will look like a mathematical
formula that relates your predictors to the response. For a simple 1d
model, this might look like `y = a * x + b` or `y = a * x ^ b`.
1. You generate a __fitted model__ by finding the one model from the family
that is the closest to your data. This takes the generic model family
and makes it specific, like `y = 3 * x + 7` or `y = 9 * x ^ 2`.
It's important to understand that a fitted model is just the closest model from a family of models. That implies that you have the "best" model (according to some criteria); it doesn't imply that you have a good model and it certainly doesn't imply that the model is "true". George Box puts this well in his famous aphorism that you've probably heard before:
> All models are wrong, but some are useful.
It's worth reading the fuller context of the quote:
> Now it would be very remarkable if any system existing in the real world
> could be exactly represented by any simple model. However, cunningly chosen
> parsimonious models often do provide remarkably useful approximations. For
> example, the law PV = RT relating pressure P, volume V and temperature T of
> an "ideal" gas via a constant R is not exactly true for any real gas, but it
> frequently provides a useful approximation and furthermore its structure is
> informative since it springs from a physical view of the behavior of gas
> molecules.
>
> For such a model there is no need to ask the question "Is the model true?".
> If "truth" is to be the "whole truth" the answer must be "No". The only
> question of interest is "Is the model illuminating and useful?".
The goal of a model is not to uncover Truth, but to discover a simple approximation that is still useful. This tension is at the heart of modelling: you want your model to be simpler than reality (so you can understand it more easily) while still generating good predictions. There is almost always a tradeoff between understandability and quality of predictions.
### Prerequisites
We need a couple of packages specifically designed for modelling, and all the packages you've used before for EDA. I also recommend setting some options that tweak the default linear model behaviour to be a little less surprising.
```{r setup, message = FALSE}
# Modelling functions
library(modelr)
library(broom)
# Modelling requires plenty of visualisation and data manipulation
library(ggplot2)
library(dplyr)
library(tidyr)
# Options that make your life easier
options(
  contrasts = c("contr.treatment", "contr.treatment"),
  na.action = na.exclude
)
```
## A simple model
Let's take a look at the simulated dataset `sim1`. It contains two continuous variables, `x` and `y`, which are related in a fairly straightforward way:
```{r}
ggplot(sim1, aes(x, y)) +
  geom_point()
```
You can see a strong pattern in the data. We can use a model to capture that pattern and make it explicit. It's our job to supply the basic form of the model. In this case, the relationship looks linear, i.e. `y = a * x + b`. To fit the model, we need to find the single model from the family that does the best job for this data.
Let's start by getting a feel for that family of models by randomly generating a few and overlaying them on the data. For this simple case, we can use `geom_abline()`, which takes a slope and intercept as parameters. Later on we'll learn more general techniques that work with any model.
```{r}
models <- tibble(
  a = runif(250, -5, 5),
  b = runif(250, -20, 80)
)

ggplot(sim1, aes(x, y)) +
  geom_abline(aes(slope = a, intercept = b), data = models, alpha = 1/4) +
  geom_point()
```
Of the 250 random models some seem much better than others. Intuitively, a good model will be close to the data. So to fit the model, i.e. to find the best model from this family, we need to make our intuition precise: we need some way to measure the distance between the data and a model.
First, we need a way to generate the predicted y values given a model and some data. Here we represent the model as a numeric vector of length two.
```{r}
make_prediction <- function(mod, data) {
  data$x * mod[1] + mod[2]
}
make_prediction(c(2, 5), sim1)
```
Next, we need some way to compute an overall distance between the predicted and actual y for each observation. One common way to do this in statistics is to use the "root-mean-squared deviation": we compute the differences between actual and predicted values, square them, average them, and then take the square root. This distance has lots of appealing mathematical properties, which we're not going to talk about here. You'll just have to take my word for it!
```{r}
measure_distance <- function(mod, data) {
  diff <- data$y - make_prediction(mod, data)
  sqrt(mean(diff ^ 2))
}
measure_distance(c(2, 5), sim1)
```
Now we can use purrr to compute the distance for all the models defined above. We need a helper function because our distance function expects the model as a numeric vector of length two.
```{r}
sim1_dist <- function(a, b) {
  measure_distance(c(a, b), sim1)
}
models <- models %>% mutate(dist = purrr::map2_dbl(a, b, sim1_dist))
models
```
Next, let's overlay the 10 best models on to the data. I've coloured the models by `-dist`: this is an easy way to make sure that the best models (i.e. the ones with the smallest distance) get the brightest colours.
```{r}
ggplot(sim1, aes(x, y)) +
  geom_point(size = 2, colour = "grey30") +
  geom_abline(
    aes(slope = a, intercept = b, colour = -dist),
    data = filter(models, rank(dist) <= 10)
  )
```
Another way of thinking about these models is to draw a scatterplot of `a` and `b`, again coloured by `-dist`. We can no longer see how the model compares to the data, but we can see many models at once. Again, I've highlighted the 10 best models, this time by drawing red circles underneath them.
```{r}
ggplot(models, aes(a, b)) +
  geom_point(data = filter(models, rank(dist) <= 10), size = 4, colour = "red") +
  geom_point(aes(colour = -dist))
```
Instead of trying lots of random models, we could be more systematic and generate an evenly spaced grid of points, a so-called grid search. I picked the parameters of the grid roughly by looking at where the best models were in the plot above.
```{r}
grid <- expand.grid(
  a = seq(1, 3, length = 25),
  b = seq(-5, 20, length = 25)
) %>%
  mutate(dist = purrr::map2_dbl(a, b, sim1_dist))

grid %>%
  ggplot(aes(a, b)) +
  geom_point(data = filter(grid, rank(dist) <= 10), size = 4, colour = "red") +
  geom_point(aes(colour = -dist))
```
When you overlay the best 10 models back on the original data, they all look pretty good:
```{r}
ggplot(sim1, aes(x, y)) +
  geom_point(size = 2, colour = "grey30") +
  geom_abline(
    aes(slope = a, intercept = b, colour = -dist),
    data = filter(grid, rank(dist) <= 10)
  )
```
Now you could imagine iteratively making the grid finer and finer until you narrowed in on the best model. But there's a better way to tackle that problem: a numerical minimisation tool called Newton-Raphson search. The intuition of Newton-Raphson is pretty simple: you pick a starting point and look around for the steepest slope. You then ski down that slope a little way, and then repeat again and again, until you can't go any lower. In R, we can do that with `optim()`:
```{r}
best <- optim(c(0, 0), measure_distance, data = sim1)
best$par

ggplot(sim1, aes(x, y)) +
  geom_point(size = 2, colour = "grey30") +
  geom_abline(slope = best$par[1], intercept = best$par[2])
```
Don't worry too much about the details - it's the intuition that's important here. If you have a function that defines the distance between a model and a dataset you can use existing mathematical tools to find the best model. The neat thing about this approach is that it will work for any family of models that you can write an equation for.
However, this particular model is a special case of a broader family: linear models. A linear model has the general form `y = a_1 * x_1 + a_2 * x_2 + ... + a_n * x_n`. So this simple model is equivalent to a general linear model where n is 2, `a_1` is `a`, `x_1` is `x`, `a_2` is `b` and `x_2` is a constant, 1. R has a tool specifically designed for linear models called `lm()`. `lm()` has a special way to specify the model family: a formula like `y ~ x` which `lm()` translates to `y = a * x + b`. We can fit the model and look at the output:
```{r}
sim1_mod <- lm(y ~ x, data = sim1)
coef(sim1_mod)
```
These are exactly the same values we got with `optim()`! However, behind the scenes `lm()` doesn't use `optim()` but instead takes advantage of the mathematical structure of linear models. Using some connection between geometry, calculus, and linear algebra, it actually finds the closest model by (effectively) inverting a matrix.
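If you're curious what "inverting a matrix" means concretely, here's a minimal sketch that solves the normal equations for the same two coefficients. (This is not literally what `lm()` does internally; it uses a QR decomposition, which is more numerically stable, but the answer is the same.)

```{r}
# Build the model matrix (a column of 1s plus x) and solve the normal
# equations X'X beta = X'y directly. A sketch of the linear-algebra route;
# lm() uses a more stable QR decomposition but gives the same coefficients.
X <- model.matrix(~ x, data = sim1)
solve(t(X) %*% X, t(X) %*% sim1$y)
```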
### Exercises
1. One downside of the linear model is that it is sensitive to unusual values
because the distance incorporates a squared term. Fit a linear model to
the simulated data below, and visualise the results. Rerun a few times to
generate different simulated datasets. What do you notice about the model?
```{r}
sim1a <- tibble(
  x = rep(1:10, each = 3),
  y = x * 1.5 + 6 + rt(length(x), df = 2)
)
```
1. One way to make linear models more robust is to use a different distance
measure. For example, instead of root-mean-squared distance, you could use
mean-absolute distance:
```{r}
measure_distance <- function(mod, data) {
  diff <- data$y - make_prediction(mod, data)
  mean(abs(diff))
}
```
Use `optim()` to fit this model to the simulated data above and compare it
to the linear model.
## Linear models
For simple models, like the one above, you can figure out what the model says about the data by carefully studying the coefficients. And if you ever take a statistics course on modelling, you're likely to spend a lot of time doing just that. Here, however, we're going to take a different tack. In this book, we're going to focus on understanding a model by looking at its predictions. This has a big advantage: every type of model makes predictions (otherwise what use would it be?) so we can use the same set of techniques to understand simple linear models or complex random forests. We'll see that advantage later on when we explore some other families of models.
We are also going to take advantage of a powerful feature of linear models: they are additive. That means you can partition the data into patterns and residuals. This allows us to see what subtler patterns remain after we have removed the biggest trend.
### Predictions
To visualise the predictions from a model, we start by generating an evenly spaced grid of values that covers the region where our data lies. The easiest way to do that is to use `tidyr::expand()`. Its first argument is a data frame, and for each subsequent argument it finds the unique values and then generates all combinations:
```{r}
grid <- sim1 %>% expand(x)
grid
```
(This will get more interesting when we start to add more variables to our model.)
Next we add predictions. We'll use `modelr::add_predictions()` which takes a data frame and a model. It adds the predictions from the model to the data frame:
```{r}
grid <- grid %>% add_predictions(sim1_mod)
grid
```
Next, we plot the predictions. Here the choice of plot is easy because we only have two variables. In general, however, figuring out how to visualise the predictions can be quite challenging, and you'll often need to try a few alternatives before you get the most useful plot. For more complex models, you're likely to need multiple plots, not just one.
```{r}
ggplot(sim1, aes(x)) +
  geom_point(aes(y = y)) +
  geom_line(aes(y = pred), data = grid, colour = "red", size = 1)
```
Note that this plot takes advantage of ggplot2's ability to plot different datasets with different aesthetics on different layers.
### Residuals
The flip-side of predictions is __residuals__. The predictions tell you the pattern that the model has captured, and the residuals tell you what the model has missed. The residuals are just the distances between the observed and predicted values that we computed above!
We add residuals to the data with `add_residuals()`, which works much like `add_predictions()`. Note, however, that we use the original dataset, not a manufactured grid. Otherwise where would you get the value of the response?
```{r}
sim1 <- sim1 %>% add_residuals(sim1_mod)
```
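As a quick sanity check, here's a sketch (the `resid2` column is just a hypothetical name for illustration) confirming that each residual is simply the observed value minus the model's prediction:

```{r}
# resid should equal y minus the prediction, up to floating-point error
sim1 %>%
  add_predictions(sim1_mod) %>%
  mutate(resid2 = y - pred) %>%
  summarise(biggest_difference = max(abs(resid - resid2)))
```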
There are a few different ways to understand what the residuals tell us about the model. One way is to simply draw a frequency polygon to help us understand the spread of the residuals:
```{r}
ggplot(sim1, aes(resid)) +
  geom_freqpoly(binwidth = 0.5)
```
(I prefer the frequency polygon over the histogram here because it makes it easier to display other categorical variables on the same plot.)
Another way is to plot the residuals against the other variables in the dataset:
```{r}
ggplot(sim1, aes(x, resid)) +
  geom_point()
```
There doesn't seem to be any pattern remaining here.
### Exercises
1. Instead of using `lm()` to fit a straight line, you can use `loess()`
to fit a smooth curve. Repeat the process of model fitting,
grid generation, predictions, and visualisation on `sim1` using
`loess()` instead of `lm()`. How does the result compare to
`geom_smooth()`?
## Formulas and model families
You've seen formulas before when using `facet_wrap()` and `facet_grid()`. Formulas provide a general way of getting "special behaviour" in R. Rather than evaluating the values of the variables right away, they capture them so that the function can process them later.
The majority of modelling functions in R use a standard conversion from formulas to functions. You've seen one simple conversion already: `y ~ x` is translated to `y = a_0 + a_1 * x`. You can make this arbitrarily more complicated by adding more variables: `y ~ x1 + x2 + x3` is translated to `y = a_0 + a_1 * x1 + a_2 * x2 + a_3 * x3`. Note that R always adds an intercept term. If you don't want it, you have to deliberately remove it: `y ~ x - 1` is translated to `y = a_1 * x`.
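If you ever want to see exactly what equation R will fit, one option is `modelr::model_matrix()`, which shows the columns (including the intercept) that correspond to a formula. The second call below shows the effect of removing the intercept:

```{r}
model_matrix(sim1, y ~ x)
model_matrix(sim1, y ~ x - 1)
```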
When you add variables with `+`, the model will estimate each effect independently of all the others. It's possible to fit an interaction by using `*`. For example, `y ~ x1 * x2` is translated to `y = a_0 + a_1 * x1 + a_2 * x2 + a_12 * x1 * x2`. Note that whenever you use `*`, both the interaction and the individual components are included in the model.
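Here's a small illustration with a made-up data frame (`df_int` is purely hypothetical): the model matrix for `y ~ x1 * x2` contains columns for `x1`, `x2`, and their product:

```{r}
# df_int is a tiny made-up data frame, just to show the columns R generates
df_int <- tibble(y = c(1, 2, 3), x1 = c(1, 2, 3), x2 = c(2, 1, 1))
model_matrix(df_int, y ~ x1 * x2)
```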
You can also perform transformations inside the model formula. For example, `log(y) ~ sqrt(x1) + x2` is translated to `log(y) = a_0 + a_1 * sqrt(x1) + a_2 * x2`. If your transformation involves `+` or `*` you'll need to wrap it in `I()` so R doesn't treat it as part of the model specification. For example, `y ~ x + I(x ^ 2)` is translated to `y = a_0 + a_1 * x + a_2 * x ^ 2`. If you forget the `I()` and specify `y ~ x ^ 2 + x`, R considers this to be the same as `y ~ x * x + x`. `x * x` means the interaction of `x` with itself, which is the same as `x`. R automatically drops redundant variables, so `x + x` becomes `x`, meaning that `y ~ x ^ 2 + x` specifies the function `y = a_0 + a_1 * x`.
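You can check the difference with the same trick: wrapped in `I()`, the squared term survives; without it, the redundant terms collapse down to plain `x`:

```{r}
model_matrix(sim1, y ~ x + I(x ^ 2))
model_matrix(sim1, y ~ x ^ 2 + x)
```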
This formula notation is known as "Wilkinson-Rogers notation", and was initially described in _Symbolic Description of Factorial Models for Analysis of Variance_, G. N. Wilkinson and C. E. Rogers. Journal of the Royal Statistical Society. Series C (Applied Statistics). Vol. 22, No. 3 (1973), pp. 392-399. <https://www.jstor.org/stable/2346786>. It's worth digging up and reading the original paper if you'd like to understand the full details of the underlying algebra.
### Categorical variables
Generating a function from a formula is straightforward when the predictor is continuous, but things get a bit more complicated when the predictor is categorical. Imagine you have a formula like `y ~ sex`, where sex could be either male or female. It doesn't make sense to convert that to a formula like `y = a_0 + a_1 * sex` because `sex` isn't a number - you can't multiply it! Instead what R does is convert it to `y = a_0 + a_1 * sex_male`, where `sex_male` is one if `sex` is male and zero otherwise.
You might wonder why R doesn't convert it to `y = a_0 + a_1 * sex_male + a_2 * sex_female`. Unfortunately that can't work, because we don't have enough information to figure out what the "intercept" is in this case. Understanding why this doesn't work in general requires some linear algebra ideas (like full rank), but in this simple case you can understand it with a small thought experiment. The estimate `a_0` corresponds to the global mean, `a_1` to the effect for males, and `a_2` to the effect for females. But all you can measure is the average response for males and the average response for females - two numbers - so there's no way to pin down three separate parameters.
With the simpler function, `y = a_0 + a_1 * sex_male`, `a_0` corresponds to the average for females, and `a_1` to the difference between males and females. You'll always see this if you look at the coefficients of a model with a categorical variable: there is always one fewer coefficient than you might expect. Fortunately, however, if you stick with visualising predictions from the model you don't need to worry about this.
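Here's a sketch with a tiny made-up data frame (the names `df_sex`, `sex`, and `response` are just for illustration) showing the 0-1 column that R generates behind the scenes:

```{r}
# df_sex is made up purely to illustrate the dummy coding
df_sex <- tibble(
  sex = c("male", "female", "male"),
  response = c(2, 5, 1)
)
model_matrix(df_sex, response ~ sex)
```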
The process of turning a categorical variable into a matrix of 0-1 columns goes by different names. Sometimes the individual 0-1 columns are called dummy variables. In machine learning, it's called one-hot encoding. In statistics, the process is called creating a contrast matrix. It's a general example of "feature generation": taking things that aren't continuous variables and figuring out how to represent them numerically.
Let's look at some data and models to make that concrete.
```{r}
ggplot(sim2) +
  geom_point(aes(x, y))
```
```{r}
mod2 <- lm(y ~ x, data = sim2)

grid <- sim2 %>%
  expand(x) %>%
  add_predictions(mod2)

ggplot(sim2, aes(x)) +
  geom_point(aes(y = y)) +
  geom_point(data = grid, aes(y = pred), colour = "red", size = 4)
```
Effectively, a model with a categorical `x` will predict the mean value for each category. (Why? Because the mean minimises the root-mean-squared distance.)
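You can check that claim directly with a quick summary (a sketch; compare the `mean_y` column with the `pred` column in `grid` above):

```{r}
# The per-category means should match the model's predictions
sim2 %>%
  group_by(x) %>%
  summarise(mean_y = mean(y))
```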
Note that (obviously) you can't make predictions about levels that you didn't observe. Sometimes you'll do this by accident so it's good to recognise this error message:
```{r, error = TRUE}
tibble(x = "e") %>% add_predictions(mod2)
```
### A continuous and a categorical variable
What happens when you combine a continuous and a categorical variable? `sim3` contains a categorical predictor and a continuous predictor. We can visualise it with a simple plot:
```{r}
ggplot(sim3, aes(x1, y)) +
  geom_point(aes(colour = x2))
```
There are two possible models you could fit to this data:
```{r}
mod1 <- lm(y ~ x1 + x2, data = sim3)
mod2 <- lm(y ~ x1 * x2, data = sim3)
```
To visualise these models we need two new tricks:
1. We have two predictors, so we need to give `expand()` two variables.
It finds all the unique values of `x1` and `x2` and then generates all
combinations.
1. To generate predictions from both models simultaneously, we can use
   `gather_predictions()`, which adds each prediction as a row. The
   complement of `gather_predictions()` is `spread_predictions()`, which
   adds each prediction as a new column.
Together this gives us:
```{r}
grid <- sim3 %>%
  expand(x1, x2) %>%
  gather_predictions(mod1, mod2)
grid
```
We can visualise the results for both models on one plot using facetting:
```{r}
ggplot(sim3, aes(x1, y, colour = x2)) +
  geom_point() +
  geom_line(data = grid, aes(y = pred)) +
  facet_wrap(~ model)
```
Note that the model that uses `+` has the same slope for each line, but different intercepts. The model that uses `*` has a different slope and intercept for each line.
Which model is better for this data? We can take a look at the residuals. Here I've facetted by both model and `x2` because it makes it easier to see the pattern within each group.
```{r}
sim3 <- sim3 %>%
  gather_residuals(mod1, mod2)

ggplot(sim3, aes(x1, resid, colour = x2)) +
  geom_point() +
  facet_grid(model ~ x2)
```
There is little obvious pattern in the residuals for `mod2`. The residuals for `mod1` show that the model has clearly missed some pattern in `b`, and, to a lesser extent, in `c` and `d`. You might wonder if there's a precise way to tell which of `mod1` or `mod2` is better. There is, but it requires a lot of mathematical background, and we don't really care here. We're interested in a qualitative assessment of whether or not the model has captured the pattern that we're interested in.
### Two continuous variables
Let's take a look at the equivalent model for two continuous variables. Initially things proceed almost identically to the previous example:
```{r}
mod1 <- lm(y ~ x1 + x2, data = sim4)
mod2 <- lm(y ~ x1 * x2, data = sim4)

grid <- sim4 %>%
  expand(
    x1 = seq_range(x1, 5),
    x2 = seq_range(x2, 5)
  ) %>%
  gather_predictions(mod1, mod2)
grid
```
Note my use of `seq_range()` inside `expand()`. Instead of using every unique value of `x1` and `x2`, I'm using a regularly spaced grid of five values between the minimum and maximum of each. It's probably not super important here, but it's a useful technique in general.
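If you haven't seen `seq_range()` before, here's what it does on its own (the endpoints below are made up):

```{r}
# Five evenly spaced values across the range of the input
seq_range(c(1.2, 9.8), n = 5)
```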
Next let's try and visualise that model. We have two continuous predictors, so you can imagine the model like a 3d surface. We could display that using `geom_tile()`:
```{r}
ggplot(grid, aes(x1, x2)) +
  geom_tile(aes(fill = pred)) +
  facet_wrap(~ model)
```
That doesn't suggest that the models are very different! But that's partly an illusion: our eyes and brains are not very good at accurately comparing shades of colour. Instead of looking at the surface from the top, we could look at it from either side, showing multiple slices:
```{r, asp = 1/2}
ggplot(grid, aes(x1, pred, colour = x2, group = x2)) +
  geom_line() +
  facet_wrap(~ model)
ggplot(grid, aes(x2, pred, colour = x1, group = x1)) +
  geom_line() +
  facet_wrap(~ model)
```
This shows you that an interaction between two continuous variables works basically the same way as an interaction between a categorical and a continuous variable. An interaction says that there isn't a fixed offset: to predict `y` you need to consider the values of `x1` and `x2` simultaneously.
You can see that even with just two continuous variables, coming up with good visualisations is hard. But that's reasonable: you shouldn't expect it to be easy to understand how three or more variables simultaneously interact! Again, though, we're saved a little because we're using models for exploration, and you can gradually build up your model over time. The model doesn't have to be perfect, it just has to help you reveal a little more about your data.
I spent some time looking at the residuals to see if I could figure out whether `mod2` did better than `mod1`. I think it does, but it's pretty subtle. You'll have a chance to work on it in the exercises.
### Exercises
1. Using the basic principles, convert the formulas in the following two
   models into functions. (Hint: start by converting the categorical variable
   into 0-1 variables.)
```{r, eval = FALSE}
mod1 <- lm(y ~ x1 + x2, data = sim3)
mod2 <- lm(y ~ x1 * x2, data = sim3)
```
1. For `sim4`, which of `mod1` and `mod2` is better? I think `mod2` does a
   slightly better job at removing patterns, but it's pretty subtle. Can you
   come up with a plot to support my claim?
## Missing values
Missing values obviously can not convey any information about the relationship between the variables, so modelling functions drop any rows that contain missing values:
```{r}
df <- tibble::tribble(
  ~x, ~y,
  1, 2.2,
  2, NA,
  3, 3.5,
  4, 8.3,
  NA, 10
)

mod <- lm(y ~ x, data = df)
```
Unfortunately this is one of the rare cases in R where missing values will go silently missing.
```{r}
nobs(mod)
nrow(df)
```
Note that this is not the default behaviour of `predict()`, which normally just drops the rows with missing values when generating predictions, so that the predictions no longer line up with the original data. The `na.exclude` setting below (which we also applied in the prerequisites) keeps those rows, filling their predictions with `NA`:
```{r}
options(na.action = na.exclude)
```
I don't really understand why it's called `na.exclude` when it causes missing values to be included in the predictions, but there you have it.
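A quick way to see the effect (a sketch, assuming the option above is set before the model is fitted): the fitted values get padded with `NA` so they line up with the original rows, even though only the complete rows were used:

```{r}
# With na.exclude, fitted() has one element per row of df (NAs included);
# nobs() still reports only the rows actually used to fit the model.
length(fitted(mod))
nobs(mod)
```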
## Other model families
Here we've focussed on linear models, which are a fairly limited family (but one that does include a first-order linear approximation of any more complicated model).
Some extensions of linear models are:
* Generalised linear models, e.g. `stats::glm()`. Linear models assume that
  the response is continuous and the errors have a normal distribution.
  Generalised linear models extend linear models to include non-continuous
  responses (e.g. binary data or counts). They work by defining a distance
  metric based on the statistical idea of likelihood.
* Generalised additive models, e.g. `mgcv::gam()`, extend generalised
  linear models to incorporate arbitrary smooth functions. That means you can
  write a formula like `y ~ s(x)`, which becomes an equation like
  `y = f(x)`, and let `gam()` estimate what that function is (subject to some
  smoothness constraints to make the problem tractable).
* Penalised linear models, e.g. `glmnet::glmnet()`, add a penalty term to
the distance which penalises complex models (as defined by the distance
between the parameter vector and the origin). This tends to make
models that generalise better to new datasets from the same population.
* Robust linear models, e.g. `MASS::rlm()`, tweak the distance to downweight
  points that are very far away. This makes them less sensitive to the presence
  of outliers, at the cost of being not quite as good when there are no
  outliers.
* Trees, e.g. `rpart::rpart()`, attack the problem in a completely different
  way to linear models. They fit a piece-wise constant model, splitting the
  data into progressively smaller and smaller pieces. Trees aren't terribly
  effective by themselves, but they are very powerful when used in aggregate
  by models like random forests (e.g. `randomForest::randomForest()`) or
  gradient boosting machines (e.g. `xgboost::xgboost`).
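These families all plug into the same workflow we used above. As a sketch (assuming the MASS and rpart packages are installed; they aren't loaded in the setup chunk), here's a robust linear model and a tree fitted to `sim1` and visualised through their predictions:

```{r}
# Fit two other model families to sim1 and compare their predictions.
# MASS and rpart ship with most R installations, but they're assumptions here.
mod_lm   <- lm(y ~ x, data = sim1)
mod_rlm  <- MASS::rlm(y ~ x, data = sim1)
mod_tree <- rpart::rpart(y ~ x, data = sim1)

sim1 %>%
  gather_predictions(mod_lm, mod_rlm, mod_tree) %>%
  ggplot(aes(x, y)) +
  geom_point(colour = "grey30") +
  geom_line(aes(y = pred), colour = "red") +
  facet_wrap(~ model)
```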