diff --git a/model-basics.Rmd b/model-basics.Rmd
index bd955d6..848dda2 100644
--- a/model-basics.Rmd
+++ b/model-basics.Rmd
@@ -201,7 +201,7 @@ sim1_mod <- lm(y ~ x, data = sim1)
 coef(sim1_mod)
 ```
 
-These are exactly the same values we got with `optim()`! Behind the scenes `lm()` doesn't use `optim()` but instead takes advantage of the mathematical structure of linear models. Using some connections between geometry, calculus, and linear algebra, `lm()` actually finds the closest model by (effectively) inverting a matrix.
+These are exactly the same values we got with `optim()`! Behind the scenes `lm()` doesn't use `optim()` but instead takes advantage of the mathematical structure of linear models. Using some connections between geometry, calculus, and linear algebra, `lm()` actually finds the closest model by (effectively) inverting a matrix. This approach is both faster, and guarantees that there is a global minimum.
 
 ### Exercises
 
@@ -231,6 +231,16 @@ These are exactly the same values we got with `optim()`! Behind the scenes `lm()
     Use `optim()` to fit this model to the simulated data above and compare it
     to the linear model.
 
+1.  One challenge with performing numerical optimisation is that it's only
+    guaranteed to find one local optimum. What's the problem with optimising
+    a three-parameter model like this?
+    
+    ```{r}
+    model1 <- function(a, data) {
+      a[1] + data$x * a[2] + a[3]
+    }
+    ```
+
 ## Visualising models
 
 For simple models, like the one above, you can figure out what pattern the model captures by carefully studying the model family and the fitted coefficients. And if you ever take a statistics course on modelling, you're likely to spend a lot of time doing just that. Here, however, we're going to take a different tack. We're going to focus on understanding a model by looking at its predictions. This has a big advantage: every type of predictive model makes predictions (otherwise what use would it be?) so we can use the same set of techniques to understand any type of predictive model.
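
For reference, the closed-form least-squares fit that the amended paragraph alludes to can be sketched directly in R. This is a minimal illustration only, not part of the patch above; it assumes the `sim1` data frame from the modelr package used earlier in the chapter, and solves the normal equations that `lm()` (effectively) solves behind the scenes.

```{r}
# Minimal sketch of the closed-form least-squares solution, assuming the
# `sim1` data frame from the modelr package (as used earlier in the chapter).
library(modelr)

X <- cbind(1, sim1$x)                       # design matrix: intercept + x
beta <- solve(t(X) %*% X, t(X) %*% sim1$y)  # solve the normal equations
beta

coef(lm(y ~ x, data = sim1))                # should agree up to rounding
```

In practice `lm()` uses a QR decomposition rather than forming and inverting `t(X) %*% X` directly, which is more numerically stable, but the result is the same global minimum of the squared-distance criterion.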