In the previous chapter you learned how linear models work, and learned some basic tools for understanding what a model is telling you about your data. The previous chapter focussed on simulated datasets to help you learn about how models work. This chapter will focus on real data, showing you how you can progressively build up a model to aid your understanding of the data.
We will take advantage of the fact that you can think about a model partitioning your data into pattern and residuals. We'll find patterns with visualisation, then make them concrete and precise with a model. We'll then repeat the process, replacing the old response variable with the residuals from the model. The goal is to transition from implicit knowledge in the data and your head to explicit knowledge in a quantitative model. This makes it easier to apply to new domains, and easier for others to use.
For very large and complex datasets this will be a lot of work. There are certainly alternative approaches - a more machine learning approach is simply to focus on the predictive ability of the model. These approaches tend to produce black boxes: the model does a really good job at generating predictions, but you don't know why. This is a totally reasonable approach, but it does make it hard to apply your real world knowledge to the model. That, in turn, makes it difficult to assess whether or not the model will continue to work in the long-term, as fundamentals change. For most real models, I'd expect you to use some combination of this approach and a more classic automated approach.
It's a challenge to know when to stop. You need to figure out when your model is good enough, and when additional investment is unlikely to pay off. I particularly like this quote from reddit user Broseidon241:
We'll start with the modelling and EDA tools we used in the last chapter. Then we'll add in some real datasets: `diamonds` from ggplot2, and `flights` from nycflights13. We'll also need lubridate to extract useful components of date-times.
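A setup chunk along these lines loads everything the rest of the chapter assumes is available (the `na.action` option, from modelr, just makes dropped missing values visible):

```{r setup, message = FALSE}
library(tidyverse)   # includes ggplot2 (diamonds data) and dplyr
library(modelr)      # add_predictions(), add_residuals(), seq_range(), ...
options(na.action = na.warn)

library(nycflights13)  # flights data
library(lubridate)     # date-time helpers such as wday() and make_date()
```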
In previous chapters we've seen a surprising relationship between the quality of diamonds and their price: low quality diamonds (poor cuts, bad colours, and inferior clarity) have higher prices.
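For example, boxplots of price against each quality measure make the pattern easy to see (a quick sketch, assuming the setup chunk above):

```{r}
ggplot(diamonds, aes(cut, price)) + geom_boxplot()
ggplot(diamonds, aes(color, price)) + geom_boxplot()
ggplot(diamonds, aes(clarity, price)) + geom_boxplot()
```

The models below work with a `diamonds2` data frame that carries log2-transformed price and carat. A sketch of how it might be created (the filter on very large diamonds is my assumption; it removes only a small number of unusual rows):

```{r}
diamonds2 <- diamonds %>%
  filter(carat <= 2.5) %>%
  mutate(lprice = log2(price), lcarat = log2(carat))
```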
Log-transforming is very useful here because it makes the relationship linear, and linear relationships are generally much easier to work with. We could go one step further and use a linear model to remove the strong effect of `lcarat` on `lprice`. First, let's fit a model to the log-transformed data, then back-transform the predictions so we can see what the model tells us about the data on the original scale:
```{r}
mod_diamond <- lm(lprice ~ lcarat, data = diamonds2)

grid <- diamonds2 %>%
  expand(carat = seq_range(carat, 20)) %>%
  mutate(lcarat = log2(carat)) %>%
  add_predictions(mod_diamond, "lprice") %>%
  mutate(price = 2 ^ lprice)

ggplot(diamonds2, aes(carat, price)) +
  geom_hex(bins = 50) +
  geom_line(data = grid, colour = "red", size = 1)
```
That's interesting! If we believe our model, then it suggests that the large diamonds we have are much cheaper than expected. This is probably because no diamond in this dataset costs more than $19,000.
We can also look at the residuals from this model. This verifies that we have successfully removed the strong linear pattern:
```{r}
diamonds2 <- diamonds2 %>%
  add_residuals(mod_diamond, "lresid")

ggplot(diamonds2, aes(lcarat, lresid)) +
  geom_hex(bins = 50)
```
Importantly, we can now use those residuals in plots instead of `price`.
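For example, redoing the quality plots with `lresid` on the y-axis (a sketch along the lines of the boxplots above):

```{r}
ggplot(diamonds2, aes(cut, lresid)) + geom_boxplot()
ggplot(diamonds2, aes(color, lresid)) + geom_boxplot()
ggplot(diamonds2, aes(clarity, lresid)) + geom_boxplot()
```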
Here we see the relationship we'd expect: now that we've removed the effect of size on price, better quality diamonds are relatively more expensive.
To interpret the `y` axis, we need to think about what the residuals are telling us, and what scale they are on. A residual of -1 indicates that `lprice` was 1 unit lower than expected, based on `carat` alone. $2^{-1}$ is 1/2, so that suggests diamonds with clarity I1 (the lowest clarity grade) are half the price you'd expect.
### A more complicated model
We could continue this process, making our model more complex by moving the effects we've observed into the model to make them explicit. For example, we could include `color`, `cut`, and `clarity` as predictors:
```{r}
mod_diamond2 <- lm(lprice ~ lcarat + color + cut + clarity, data = diamonds2)

# Add predictions on the log scale, then back-transform to the dollar scale
add_predictions_trans <- function(df, mod) {
  df %>%
    add_predictions(mod, "lpred") %>%
    mutate(pred = 2 ^ lpred)
}

# data_grid(cut, .model = ...) generates every level of cut and fills in
# typical values for the other predictors the model needs
diamonds2 %>%
  data_grid(cut, .model = mod_diamond2) %>%
  add_predictions_trans(mod_diamond2) %>%
  ggplot(aes(cut, pred)) +
  geom_point()
```
### Exercises
1. In the plot of `lcarat` vs. `lprice`, there are some bright vertical
strips. What do they represent?
1. If `log(price) = a_0 + a_1 * log(carat)`, what does that say about
    the relationship between `price` and `carat`?
1. Extract the diamonds that have very high and very low residuals.
    Is there anything unusual about these diamonds? Are they particularly bad
or good, or do you think these are pricing errors?
Let's explore the number of flights that leave NYC per day. We're not going to end up with a fully realised model, but as you'll see, the steps along the way will help us better understand the data. Let's get started by counting the number of flights per day and visualising it with ggplot2.
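A sketch of how that `daily` data frame (used throughout the rest of the section) might be built, assuming the setup chunk above:

```{r}
daily <- flights %>%
  mutate(date = make_date(year, month, day)) %>%
  count(date)

ggplot(daily, aes(date, n)) +
  geom_line()
```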
This is a really small dataset --- only 365 rows and 2 columns --- but as you'll see, there's a rich set of interesting variables buried in the date.
Understanding the long-term trend is challenging because there's a very strong day-of-week effect that dominates the subtler patterns. Let's summarise the number of flights per day-of-week:
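For example (assuming `daily` as created above), we can label each date with its day of week and look at the distributions:

```{r}
daily <- daily %>%
  mutate(wday = wday(date, label = TRUE))

ggplot(daily, aes(wday, n)) +
  geom_boxplot()
```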
There are fewer flights on weekends because most travel is for business. The effect is particularly pronounced on Saturday: you might sometimes have to leave on Sunday for a Monday morning meeting, but it's very rare that you'd leave on Saturday as you'd much rather be at home with your family.
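One way to remove this strong pattern is with a model. A sketch: fit a day-of-week model, then compute and plot its residuals over time (the name `mod` and the use of `geom_ref_line()` from modelr are my choices here):

```{r}
mod <- lm(n ~ wday, data = daily)

daily <- daily %>%
  add_residuals(mod)

ggplot(daily, aes(date, resid)) +
  geom_ref_line(h = 0) +
  geom_line()
```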
Note the change in the y-axis: now we are seeing the deviation from the expected number of flights, given the day of week. This plot is useful because now that we've removed much of the large day-of-week effect, we can see some of the subtler patterns that remain: for example, from June onwards the model clearly mis-predicts the number of flights on Saturdays.
Let's first tackle our failure to accurately predict the number of flights on Saturday. A good place to start is to go back to the raw numbers, focussing on Saturdays:
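For example, something along these lines (assuming the `daily` data frame above):

```{r}
daily %>%
  filter(wday == "Sat") %>%
  ggplot(aes(date, n)) +
  geom_point() +
  geom_line() +
  scale_x_date(NULL, date_breaks = "1 month", date_labels = "%b")
```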
(I've used both points and lines to make it more clear what is data and what is interpolation.)
I suspect this pattern is caused by summer holidays: many people go on holiday in the summer, and people don't mind travelling on Saturdays for vacation. Looking at this plot, we might guess that summer holidays are from early June to late August. That seems to line up fairly well with the [state's school terms](http://schools.nyc.gov/Calendar/2013-2014+School+Year+Calendars.htm): summer break in 2013 was Jun 26--Sep 9.
Why are there fewer Saturday flights in the Fall than in the Spring? I asked some American friends and they suggested that it's less common to plan family vacations during the Fall because of the big Thanksgiving and Christmas holidays. We can't tell if that's exactly the reason, but it seems like a plausible working hypothesis.
Let's create a "term" variable that roughly captures the three school terms, and check our work with a plot:
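A sketch of one way to do this; the exact cut points are my rough guesses based on the school calendar above:

```{r}
term <- function(date) {
  cut(date,
    breaks = ymd(20130101, 20130605, 20130825, 20140101),
    labels = c("spring", "summer", "fall")
  )
}

daily <- daily %>%
  mutate(term = term(date))

daily %>%
  filter(wday == "Sat") %>%
  ggplot(aes(date, n, colour = term)) +
  geom_point(alpha = 1/3) +
  geom_line() +
  scale_x_date(NULL, date_breaks = "1 month", date_labels = "%b")
```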
(I manually tweaked the dates to get nice breaks in the plot. Using a visualisation to help you understand what your function is doing is a really powerful and general technique.)
It's useful to see how this new variable affects the other days of the week:
```{r}
daily %>%
  ggplot(aes(wday, n, colour = term)) +
  geom_boxplot()
```
It looks like there is significant variation across the terms, so fitting a separate day of week effect for each term is reasonable. This improves our model, but not as much as we might hope:
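A sketch comparing models with and without the term interaction (the names `mod1` and `mod2` are my own; `mod2` is the model referred to below):

```{r}
mod1 <- lm(n ~ wday, data = daily)
mod2 <- lm(n ~ wday * term, data = daily)

daily %>%
  gather_residuals(without_term = mod1, with_term = mod2) %>%
  ggplot(aes(date, resid, colour = model)) +
  geom_line(alpha = 0.75)
```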
We can see the problem by overlaying the predictions from the model on to the raw data:
```{r}
grid <- daily %>%
  expand(wday, term) %>%
  add_predictions(mod2, "n")

ggplot(daily, aes(wday, n)) +
  geom_boxplot() +
  geom_point(data = grid, colour = "red") +
  facet_wrap(~ term)
```
Our model is finding the _mean_ effect, but we have a lot of big outliers, so they tend to drag the mean far away from the typical value. We can alleviate this problem by using a model that is robust to the effect of outliers: `MASS::rlm()`. This greatly reduces the impact of the outliers on our estimates, and gives a model that does a good job of removing the day of week pattern:
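A sketch of the robust fit (here called `mod3`):

```{r}
mod3 <- MASS::rlm(n ~ wday * term, data = daily)

daily %>%
  add_residuals(mod3, "resid") %>%
  ggplot(aes(date, resid)) +
  geom_hline(yintercept = 0, size = 2, colour = "white") +
  geom_line()
```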
In the previous section we used our domain knowledge (how the US school term affects travel) to improve the model. An alternative to making our knowledge explicit in the model is to give the data more room to speak. We could use a more flexible model and allow that to capture the pattern we're interested in. We know that a simple linear trend isn't adequate, so instead we could use a natural spline to allow a smoothly varying trend across the year:
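A sketch using a natural spline from the splines package (the choice of 5 degrees of freedom and the name `mod_spline` are mine):

```{r}
library(splines)

mod_spline <- MASS::rlm(n ~ wday * ns(date, 5), data = daily)

daily %>%
  data_grid(wday, date = seq_range(date, n = 13)) %>%
  add_predictions(mod_spline) %>%
  ggplot(aes(date, pred, colour = wday)) +
  geom_line() +
  geom_point()
```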
We see a strong pattern in the numbers of Saturday flights. This is reassuring, because we also saw that pattern in the raw data. It's a good sign when you see the same signal from multiple approaches.
How do you decide how many parameters to use for the spline? You can either pick by eye, or you could use automated techniques which you'll learn about in [model assessment]. For exploration, picking by eye to capture the most important patterns is fine.
If you're experimenting with many models and many visualisations, it's a good idea to bundle the creation of variables up into a function so there's no chance of accidentally applying a different transformation in different places. For example, we could write:
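A sketch of both options (the names `compute_vars`, `wday2`, and `mod_inline` are mine). First, bundling the transformations into a function that you apply to the data frame:

```{r}
compute_vars <- function(data) {
  data %>%
    mutate(
      term = term(date),
      wday = wday(date, label = TRUE)
    )
}
```

Alternatively, you could put the transformations directly in the model formula:

```{r}
wday2 <- function(x) wday(x, label = TRUE)
mod_inline <- lm(n ~ wday2(date) * term(date), data = daily)
```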
Either approach is reasonable. Making the transformed variables explicit is useful if you want to check your work, or use them in a visualisation. But you can't easily use transformations (like splines) that return multiple columns. Including the transformations in the model function makes life a little easier when you're working with many different datasets because the model is self contained.