In the previous chapter you learned how linear models work, and learned some basic tools for understanding what a model is telling you about your data.
The previous chapter focussed on simulated datasets.
This chapter will focus on real data, showing you how you can progressively build up a model to aid your understanding of the data.
We will take advantage of the fact that you can think about a model as partitioning your data into pattern and residuals.
We'll find patterns with visualisation, then make them concrete and precise with a model.
We'll then repeat the process, but replace the old response variable with the residuals from the model.
The goal is to transition from implicit knowledge in the data and your head to explicit knowledge in a quantitative model.
This makes it easier to apply to new domains, and easier for others to use.
For very large and complex datasets this will be a lot of work.
There are certainly alternative approaches: a more machine-learning-oriented approach is to focus simply on the predictive ability of the model.
These approaches tend to produce black boxes: the model does a really good job of generating predictions, but you don't know why.
This is a totally reasonable approach, but it does make it hard to apply your real world knowledge to the model.
That, in turn, makes it difficult to assess whether or not the model will continue to work in the long-term, as fundamentals change.
For most real models, I'd expect you to use some combination of this approach and a more classic automated approach.
It's a challenge to know when to stop.
You need to figure out when your model is good enough, and when additional investment is unlikely to pay off.
I particularly like this quote from reddit user Broseidon241:
> A long time ago in art class, my teacher told me "An artist needs to know when a piece is done. You can't tweak something into perfection - wrap it up. If you don't like it, do it over again. Otherwise begin something new".
> Later in life, I heard "A poor seamstress makes many mistakes. A good seamstress works hard to correct those mistakes. A great seamstress isn't afraid to throw out the garment and start over."
In previous chapters we've seen a surprising relationship between the quality of diamonds and their price: low quality diamonds (poor cuts, bad colours, and inferior clarity) have higher prices.
We can make it easier to see how the other attributes of a diamond affect its relative `price` by fitting a model to separate out the effect of `carat`.
But first, let's make a couple of tweaks to the diamonds dataset to make it easier to work with:
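One reasonable pair of tweaks, sketched here with the tidyverse and modelr loaded, is to drop the largest diamonds and log-transform `carat` and `price`, which makes the relationship between them roughly linear (the 2.5 carat cutoff and the variable names `lprice` and `lcarat` are illustrative choices):

```r
library(tidyverse)
library(modelr)

# Focus on diamonds smaller than 2.5 carats and log2-transform
# carat and price, which linearises their relationship
diamonds2 <- diamonds %>%
  filter(carat <= 2.5) %>%
  mutate(lprice = log2(price), lcarat = log2(carat))

# Model log-price as a function of log-carat
mod_diamond <- lm(lprice ~ lcarat, data = diamonds2)
```

With `mod_diamond` in hand, the residuals give us a version of `price` with the effect of `carat` removed.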
If we wanted to, we could continue to build up our model, moving the effects we've observed into the model to make them explicit.
For example, we could include `color`, `cut`, and `clarity` into the model so that we also make explicit the effect of these three categorical variables:
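A sketch of that bigger model, assuming a `diamonds2` data frame that contains log2-transformed `lprice` and `lcarat` columns:

```r
# Add the three categorical variables as predictors alongside
# the log-carat term
mod_diamond2 <- lm(
  lprice ~ lcarat + color + cut + clarity,
  data = diamonds2
)
```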
This plot indicates that there are some diamonds with quite large residuals - remember that because the residuals are on a log2 scale, a residual of 2 indicates that the diamond is 4x the price that we expected.
It's often useful to look at unusual values individually:
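One way to pull out the unusual diamonds, assuming a fitted model `mod_diamond2` on log2 price and a `diamonds2` data frame as described earlier (the threshold of 1 is an arbitrary choice meaning "off by more than 2x"):

```r
library(modelr)

diamonds2 %>%
  add_residuals(mod_diamond2, "lresid") %>%
  filter(abs(lresid) > 1) %>%          # price more than 2x off the prediction
  add_predictions(mod_diamond2) %>%
  mutate(pred = round(2 ^ pred)) %>%   # back-transform prediction to dollars
  select(price, pred, carat:table, x:z) %>%
  arrange(price)
```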
Nothing really jumps out at me here, but it's probably worth spending time considering if this indicates a problem with our model, or if there are errors in the data.
If there are mistakes in the data, this could be an opportunity to buy diamonds that have been priced low incorrectly.
Let's work through a similar process for a dataset that seems even simpler at first glance: the number of flights that leave NYC per day.
This is a really small dataset --- only 365 rows and 2 columns --- and we're not going to end up with a fully realised model, but as you'll see, the steps along the way will help us better understand the data.
Let's get started by counting the number of flights per day and visualising it with ggplot2.
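Assuming the tidyverse, lubridate, and the nycflights13 package are loaded, the counting and plotting might look like this:

```r
library(tidyverse)
library(lubridate)
library(nycflights13)

# One row per day, with n = number of departing flights
daily <- flights %>%
  mutate(date = make_date(year, month, day)) %>%
  group_by(date) %>%
  summarise(n = n())

ggplot(daily, aes(date, n)) +
  geom_line()
```

The strong pattern you'll see in this plot is a day-of-week effect, which we explore next.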
There are fewer flights on weekends because most travel is for business.
The effect is particularly pronounced on Saturday: you might sometimes leave on Sunday for a Monday morning meeting, but it's very rare that you'd leave on Saturday as you'd much rather be at home with your family.
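One way to make the day-of-week effect explicit, assuming the `daily` counts data frame described above: add a `wday` variable, fit a model with it, and then look at the residuals over time:

```r
library(modelr)
library(lubridate)

daily <- daily %>%
  mutate(wday = wday(date, label = TRUE))

# Model daily flight counts as a function of day of week
mod <- lm(n ~ wday, data = daily)

# The residuals show what's left after removing the weekly pattern
daily %>%
  add_residuals(mod) %>%
  ggplot(aes(date, resid)) +
  geom_ref_line(h = 0) +
  geom_line()
```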
Our model fails to accurately predict the number of flights on Saturday: during summer there are more flights than we expect, and during fall there are fewer.
We'll see how we can capture this pattern better in the next section.
I suspect this pattern is caused by summer holidays: many people go on holiday in the summer, and people don't mind travelling on Saturdays for vacation.
Looking at this plot, we might guess that summer holidays are from early June to late August.
That seems to line up fairly well with the [state's school terms](http://schools.nyc.gov/Calendar/2013-2014+School+Year+Calendars.htm): summer break in 2013 was Jun 26--Sep 9.
Why are there more Saturday flights in spring than fall?
I asked some American friends and they suggested that it's less common to plan family vacations during fall because of the big Thanksgiving and Christmas holidays.
We don't have the data to know for sure, but it seems like a plausible working hypothesis.
(I manually tweaked the dates to get nice breaks in the plot. Using a visualisation to help you understand what your function is doing is a really powerful and general technique.)
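The `term` variable can be sketched as a function that cuts the date range into three school terms; the exact break dates here are the hand-tuned ones, and the labels are illustrative:

```r
library(lubridate)

# Assign each 2013 date to a school term; breaks chosen by eye
term <- function(date) {
  cut(date,
    breaks = ymd(20130101, 20130605, 20130825, 20140101),
    labels = c("spring", "summer", "fall")
  )
}

daily <- daily %>%
  mutate(term = term(date))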
It's useful to see how this new variable affects the other days of the week:
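Assuming `daily` now has both `wday` and `term` columns, a quick sketch of that comparison, followed by a model that lets the day-of-week effect vary by term:

```r
daily %>%
  ggplot(aes(wday, n, colour = term)) +
  geom_boxplot()

# Interaction: a separate day-of-week effect within each term
mod2 <- lm(n ~ wday * term, data = daily)
```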
If you're experimenting with many models and many visualisations, it's a good idea to bundle the creation of variables up into a function so there's no chance of accidentally applying a different transformation in different places.
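For example, the variable creation above might be bundled like this (assuming a `term()` helper that maps dates to school terms):

```r
library(lubridate)

# One function, used everywhere, so every model and plot sees
# exactly the same transformed variables
compute_vars <- function(data) {
  data %>%
    mutate(
      term = term(date),
      wday = wday(date, label = TRUE)
    )
}
```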
Making the transformed variables explicit is useful if you want to check your work, or use them in a visualisation.
But you can't easily use transformations (like splines) that return multiple columns.
Including the transformations in the model function makes life a little easier when you're working with many different datasets because the model is self contained.
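For instance, a natural spline can go straight into the formula, assuming `daily` has `n`, `wday`, and `date` columns (the 5 degrees of freedom is an illustrative choice):

```r
library(splines)

# ns() returns multiple columns, but inside the formula lm()
# handles that without any extra bookkeeping
mod <- lm(n ~ wday * ns(date, 5), data = daily)
```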
3. Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e. it should have `Thurs`, `Fri`, etc. for weekdays, but `Sat-summer`, `Sat-spring`, and `Sat-fall` for Saturdays.
How does this model compare with the model with every combination of `wday` and `term`?
4. Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays.
7. We hypothesised that people leaving on Sundays are more likely to be business travellers who need to be somewhere on Monday.
Explore that hypothesis by seeing how it breaks down based on distance and time: if it's true, you'd expect to see more Sunday evening flights to places that are far away.
We have only scratched the surface of modelling, but you have hopefully gained some simple, general-purpose tools that you can use to improve your own data analyses.
It's OK to start simple!
As you've seen, even very simple models can make a dramatic difference in your ability to tease out interactions between variables.
These modelling chapters are even more opinionated than the rest of the book.
I approach modelling from a somewhat different perspective to most others, and there is relatively little space devoted to it.
Modelling really deserves a book on its own, so I'd highly recommend that you read at least one of these three books:
- *Statistical Modeling: A Fresh Approach* by Danny Kaplan, <http://project-mosaic-books.com/?page_id=13>.
This book provides a gentle introduction to modelling, where you build your intuition, mathematical tools, and R skills in parallel.
The book replaces a traditional "introduction to statistics" course, providing a curriculum that is up-to-date and relevant to data science.
- *An Introduction to Statistical Learning* by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani, <http://www-bcf.usc.edu/~gareth/ISL/> (available online for free).
This book presents a family of modern modelling techniques collectively known as statistical learning.
For an even deeper understanding of the math behind the models, read the classic *Elements of Statistical Learning* by Trevor Hastie, Robert Tibshirani, and Jerome Friedman, <https://web.stanford.edu/~hastie/Papers/ESLII.pdf> (also available online for free).
- *Applied Predictive Modeling* by Max Kuhn and Kjell Johnson, <http://appliedpredictivemodeling.com>.
This book is a companion to the **caret** package and provides practical tools for dealing with real-life predictive modelling challenges.