# (PART) Model {-}
# Introduction {#model-intro}
Now that you are equipped with powerful programming tools, we can finally return to modelling. You'll use your new tools of data wrangling and programming to fit many models and understand how they work. The focus of this book is on exploration, not confirmation or formal inference. But you'll learn a few basic tools that help you understand the variation within your models.
```{r echo = FALSE, out.width = "75%"}
knitr::include_graphics("diagrams/data-science-model.png")
```
The goal of a model is to provide a simple low-dimensional summary of a dataset. Ideally, the model will capture true "signals" (i.e. patterns generated by the phenomenon of interest), and ignore "noise" (i.e. random variation that you're not interested in). Here we only cover "predictive" models, which, as the name suggests, generate predictions. There is another type of model that we're not going to discuss: "data discovery" models. These models don't make predictions, but instead help you discover interesting relationships within your data. (These two categories of models are sometimes called supervised and unsupervised, but I don't think that terminology is particularly illuminating.)
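To make the signal/noise distinction concrete, here is a minimal sketch in base R (the true relationship and the noise level are made up for illustration): we simulate data where the signal is a straight line, add random noise, and fit a linear model. The model's low-dimensional summary, two coefficients, recovers the signal while averaging over the noise.

```{r}
set.seed(123)
x <- runif(100, 0, 10)
y <- 2 * x + 5 + rnorm(100, sd = 2)  # signal: 2x + 5; noise: rnorm()
fit <- lm(y ~ x)
coef(fit)  # estimates should land close to the true values, 5 and 2
```

The fitted intercept and slope summarise 100 observations with just two numbers, which is exactly what we mean by a simple low-dimensional summary.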
This book is not going to give you a deep understanding of the mathematical theory that underlies models. It will, however, build your intuition about how statistical models work, and give you a family of useful tools that allow you to use models to better understand your data:
* In [model basics], you'll learn how models work, focussing on the important
family of linear models. You'll learn general tools for gaining insight
into what a predictive model tells you about your data, focussing on simple
simulated datasets.
* In [model building], you'll learn how to use models to pull out known
  patterns in real data. Once you have recognised an important pattern
  it's useful to make it explicit in a model, because then you can
  more easily see the subtler signals that remain.
* In [many models], you'll learn how to use many simple models to help
understand complex datasets. This is a powerful technique, but to access
it you'll need to combine modelling and programming tools.
* In [model assessment], you'll learn a little bit about how you might
  quantitatively assess whether a model is good or not. You'll learn two
  powerful techniques, cross-validation and bootstrapping, that are built
  on the idea of generating many random datasets which you fit many
  models to.
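The idea behind bootstrapping, fitting the same model to many random resamples of the data, can be sketched in a few lines of base R (the simulated data and the number of resamples are arbitrary choices for illustration):

```{r}
set.seed(123)
df <- data.frame(x = runif(100, 0, 10))
df$y <- 2 * df$x + 5 + rnorm(100, sd = 2)

# Fit the same linear model to 100 bootstrap resamples (sampling rows
# with replacement), keeping the slope estimate from each fit
slopes <- sapply(1:100, function(i) {
  resample <- df[sample(nrow(df), replace = TRUE), ]
  coef(lm(y ~ x, data = resample))[["x"]]
})

# The spread of the estimates across resamples indicates how much the
# slope would vary if we had collected a different sample
sd(slopes)
```

Cross-validation works on the same "many random datasets" principle, but splits the data into held-out pieces instead of resampling it.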
In this book, we are going to use models as a tool for exploration, completing the trifecta of EDA tools introduced in Part 1. This is not how models are usually taught, but they make a particularly useful tool in this context. Every exploratory analysis will involve some transformation, modelling, and visualisation.
Models are more commonly taught as tools for doing inference, or for confirming that a hypothesis is true. Doing this correctly is not complicated, but it is hard. There is a pair of ideas that you must understand in order to do inference correctly:
1. Each observation can either be used for exploration or confirmation,
not both.
1. You can use an observation as many times as you like for exploration,
but you can only use it once for confirmation. As soon as you use an
observation twice, you've switched from confirmation to exploration.
This is necessary because to confirm a hypothesis you must use data that is independent of the data that you used to generate the hypothesis. Otherwise you will be over-optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading. If you are serious about doing a confirmatory analysis, before you begin the analysis you should split your data up into three pieces:
1. 60% of your data goes into a __training__ (or exploration) set. You're
allowed to do anything you like with this data: visualise it and fit tons
of models to it.
1. 20% goes into a __query__ set. You can use this data to compare models
or visualisations by hand, but you're not allowed to use it as part of
an automated process.
1. 20% is held back for a __test__ set. You can only use this data ONCE, to
test your final model.
This partitioning allows you to explore the training data, occasionally generating candidate hypotheses that you check with the query set. When you are confident you have the right model, you can check it once with the test data.
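One minimal way to sketch this 60/20/20 split in base R (using the built-in `mtcars` dataset purely as a stand-in for your own data):

```{r}
set.seed(123)
n <- nrow(mtcars)
shuffled <- sample(n)  # random permutation of the row indices

# Carve the shuffled indices into non-overlapping 60% / 20% / 20% pieces
train_ids <- shuffled[1:floor(0.6 * n)]
query_ids <- shuffled[(floor(0.6 * n) + 1):floor(0.8 * n)]
test_ids  <- shuffled[(floor(0.8 * n) + 1):n]

training <- mtcars[train_ids, ]  # explore and fit freely
query    <- mtcars[query_ids, ]  # compare candidate models by hand
test     <- mtcars[test_ids, ]   # touch exactly once, at the very end
```

Shuffling before slicing matters: it guarantees that any ordering in the original data (e.g. by date of collection) doesn't leak into the split.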
### Other references
The modelling chapters are even more opinionated than the rest of the book. I approach modelling from a somewhat different perspective to most others, and there is relatively little space devoted to it. Modelling really deserves a book on its own, so I'd highly recommend that you read at least one of these three books:
* *Statistical Modeling: A Fresh Approach* by Danny Kaplan. This book
  provides a gentle introduction to modelling, where you build your
  intuition, mathematical tools, and R skills in parallel.
* *An Introduction to Statistical Learning* by Gareth James, Daniela Witten,
  Trevor Hastie, and Robert Tibshirani (available online for free). This
  book presents a family of modern modelling techniques collectively known
  as statistical learning.
* *Applied Predictive Modeling* by Max Kuhn and Kjell Johnson. This book
  is a companion to the __caret__ package, and provides practical tools
  for dealing with real-life predictive modelling challenges.