Merge branch 'master' of github.com:hadley/r4ds

This commit is contained in:
hadley 2016-03-21 08:55:09 -05:00
commit 39e4f27146
6 changed files with 34 additions and 34 deletions

View File

@ -189,7 +189,7 @@ impute_missing()
collapse_years()
```
If your function name is composed of multiple words, I recommend using "snake\_case", where each word is lower case and separated by an underscore. camelCase is a popular alternative alternative, but be consistent: pick one or the other and stick with it. R itself is not very consistent, but there's nothing you can do about that. Make sure you don't fall into the same trap by making your code as consistent as possible.
If your function name is composed of multiple words, I recommend using "snake\_case", where each word is lower case and separated by an underscore. camelCase is a popular alternative, but be consistent: pick one or the other and stick with it. R itself is not very consistent, but there's nothing you can do about that. Make sure you don't fall into the same trap by making your code as consistent as possible.
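For example, a minimal sketch of consistent snake\_case naming, reusing the `impute_missing()` and `collapse_years()` names from above (the bodies here are placeholders for illustration only):
```{r, eval = FALSE}
# Consistent snake_case: each function is a lower-case verb phrase with underscores
impute_missing <- function(x, value = 0) {
  x[is.na(x)] <- value
  x
}

collapse_years <- function(df) {
  # placeholder body; a real implementation would aggregate across years
  df
}
```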
```{r, eval = FALSE}
# Never do this!

View File

@ -8,7 +8,7 @@ This chapter will explain how to build useful models with R.
*Section 1* will show you how to build linear models, the most commonly used type of model. Along the way, you will learn R's model syntax, a general syntax that you can reuse with most of R's modeling functions.
*Section 2* will show you the best ways to use R's model output, which is often reguires additional wrangling.
*Section 2* will show you the best ways to use R's model output, which often requires additional wrangling.
*Section 3* will teach you to build and interpret multivariate linear models, models that use more than one explanatory variable to explain the values of a response variable.
@ -33,7 +33,7 @@ library(broom)
Have you heard that a relationship exists between your height and your income? It sounds far-fetched---and maybe it is---but many people believe that taller people will be promoted faster and valued more for their work, an effect that directly inflates the income of the vertically gifted. Do you think this is true?
Luckily, it is easy to measure someone's height, as well as their income (and a swath of other variables besides), which means that we can collect data relevant to the question. In fact, the Bureau of Labor Statistics has been doing this in a controlled way for over 50 years. The BLS [National Longitudinal Surveys (NLS)](https://www.nlsinfo.org/) track the income, education, and life circumstances of a large cohort of Americans across several decades. In case you are wondering, the point of the NLS is not to study the relationhip between height and income, that's just a lucky accident.
Luckily, it is easy to measure someone's height, as well as their income (and a swath of other variables besides), which means that we can collect data relevant to the question. In fact, the Bureau of Labor Statistics has been doing this in a controlled way for over 50 years. The BLS [National Longitudinal Surveys (NLS)](https://www.nlsinfo.org/) track the income, education, and life circumstances of a large cohort of Americans across several decades. In case you are wondering, the point of the NLS is not to study the relationship between height and income, that's just a lucky accident.
You can load the latest cross-section of NLS data, collected in 2013, with the code below.
@ -43,7 +43,7 @@ heights <- readRDS("data/heights.RDS")
I've narrowed the data down to 10 variables:
* `id` - A number ot identify each subject
* `id` - A number to identify each subject
* `income` - The self-reported income of each subject
* `height` - The height of each subject in inches
* `weight` - The weight of each subject in pounds
@ -66,7 +66,7 @@ ggplot(data = heights, mapping = aes(x = height, y = income)) +
geom_point()
```
First, let's address a distraction: the data is censored in an odd way. The y variable is income, which means that there are no y values less than zero. That's not odd. However, there are also no y values above $180,331. In fact, there are a line of unusual values at exactly $180,331. This is because the Burea of Labor Statistics removed the top 2% of income values and replaced them with the mean value of the top 2% of values, an action that was not designed to enhance the usefulness of the data for data science.
First, let's address a distraction: the data is censored in an odd way. The y variable is income, which means that there are no y values less than zero. That's not odd. However, there are also no y values above $180,331. In fact, there is a line of unusual values at exactly $180,331. This is because the Bureau of Labor Statistics removed the top 2% of income values and replaced them with the mean value of the top 2% of values, an action that was not designed to enhance the usefulness of the data for data science.
Also, you can see that heights have been rounded to the nearest inch.
@ -78,9 +78,9 @@ cor(heights$height, heights$income, use = "na")
A model describes the relationship between two or more variables. There are multiple ways to describe any relationship. Which is best?
A common choice: decide form of relationship, then minimize residuals.
A common choice: decide on the form of the relationship, then minimize the residuals.
Use R's `lm()` function to fit a linear model to your data. The first argument of `lm()` should be a formula, two or more varibles separated by a `~`. You've seen forumlas before, we used them in Chapter 2 to facet graphs.
Use R's `lm()` function to fit a linear model to your data. The first argument of `lm()` should be a formula: two or more variables separated by a `~`. You've seen formulas before: we used them in Chapter 2 to facet graphs.
```{r}
income ~ height
@ -113,7 +113,7 @@ lm(income ~ 0 + height, data = heights)
## Using model output
R model output is not very tidy. It is designed to provide a data store that you can extract information from with helper functions.
R's model output is not very tidy. It is designed to provide a data store from which you can extract information with helper functions.
```{r}
coef(h)
@ -121,7 +121,7 @@ predict(h)[1:5]
resid(h)[1:5]
```
The `broom` package provides the most useful helper functions for working with R models. `broom` functions return the most useful model information as a data frames, which lets you quickly embed the information into your data science workflow.
The `broom` package provides the most useful helper functions for working with R models. `broom` functions return the most useful model information as data frames, which lets you quickly embed the information into your data science workflow.
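For example, here is a short sketch of the three main `broom` verbs applied to the model `h` fitted above (shown with `eval = FALSE` because the fitted model lives earlier in the chapter):
```{r, eval = FALSE}
library(broom)
tidy(h)      # one row per model coefficient
glance(h)    # one row of model-level summaries
augment(h)   # the original data plus fitted values and residuals
```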
### tidy()
@ -155,7 +155,7 @@ There appears to be a relationship between a person's education and how poorly t
Patterns in the residuals suggest that relationships exist between y and other variables, even when the effect of heights is accounted for.
Add variables to a model by adding variables to the righthand side of the model formula.
Add variables to a model by adding variables to the right-hand side of the model formula.
```{r}
income ~ height + education
@ -165,7 +165,7 @@ tidy(he)
### Interpretation
The coefficient of each variable displays the change of income that is associated with a one unit change in the variable _when all other variables are held constant_.
The coefficient of each variable represents the increase in income associated with a one unit increase in the variable _when all other variables are held constant_.
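As a hedged sketch (it assumes `he` is the fit of `income ~ height + education` on the `heights` data, matching the formula and `tidy(he)` call above), the `estimate` column of `tidy(he)` contains exactly these per-unit changes:
```{r, eval = FALSE}
he <- lm(income ~ height + education, data = heights)
tidy(he)
# The height estimate is the expected change in income for one additional inch,
# holding education constant; the education estimate reads the same way.
```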
### Interaction effects
@ -196,7 +196,7 @@ Each level of the factor (i.e. unique value) is encoded as an integer and displa
If you use factors outside of a model, you will notice some limiting behavior:
* You cannot add values to a factor that do not appear in its levels attribute
* You cannot add to a factor values that do not appear in its levels attribute
* Factors retain their full set of levels when you subset them. To avoid this, use `drop = TRUE`.
```{r}
fac[1]
@ -208,7 +208,7 @@ num_fac <- factor(1:3, levels = 1:3, labels = c("100", "200", "300"))
num_fac
as.numeric(num_fac)
```
To coerce these labels to a different data type, first coerce the factor to a charater string with `as.character()`
To coerce these labels to a different data type, first coerce the factor to a character vector with `as.character()`.
```{r}
as.numeric(as.character(num_fac))
```
@ -333,7 +333,7 @@ gam(y ~ s(x, z), data = df)
We've avoided two things in this chapter that are usually conflated with models: hypothesis testing and predictive analysis.
There are other types of modeling algorithms; each provides a valid description about the data.
There are other types of modeling algorithms; each provides a valid description of the data.
Which description will be best? Does the relationship have a known form? Does the data have a known structure? Are you going to attempt hypothesis testing that imposes its own constraints?

View File

@ -20,7 +20,7 @@ In the following chapters, you'll learn important programming skills:
and functional programming.
1. As you start to write more powerful functions, you'll need a solid
grouning in R's data structures. You must master the four common atomic
grounding in R's data structures. You must master the four common atomic
vectors, the three important S3 classes built on top of them, and
understand the mysteries of the list and data frame.

View File

@ -58,7 +58,7 @@ One way to show the relationships between the different tables is with a drawing
knitr::include_graphics("diagrams/relational-nycflights.png")
```
This diagram is a little overwhelming, and even so it's simple compared to some you'll see in the wild! The key to understanding diagrams like this is to remember each relation always concerns a pair of tables. You don't need to understand the whole thing; you just need the understand the chain of relations between the tables that you are interested in.
This diagram is a little overwhelming, and even so it's simple compared to some you'll see in the wild! The key to understanding diagrams like this is to remember each relation always concerns a pair of tables. You don't need to understand the whole thing; you just need to understand the chain of relations between the tables that you are interested in.
For nycflights13:

View File

@ -4,7 +4,7 @@ library(magrittr)
# Robust code
(This is an advanced topic. You shouldn't worry too much about it when you first start writing functions. Instead you should focus on getting a function that works right for the easiest 80% of the problem. Then in time, you'll learn how to get to 99% with minimal extra effort. The defaults in this book should steer you in the right direction: we avoid teaching you functions with major suprises.)
(This is an advanced topic. You shouldn't worry too much about it when you first start writing functions. Instead you should focus on getting a function that works right for the easiest 80% of the problem. Then in time, you'll learn how to get to 99% with minimal extra effort. The defaults in this book should steer you in the right direction: we avoid teaching you functions with major surprises.)
In this section you'll learn an important principle that lends itself to reliable and readable code: favour code that can be understood with a minimum of context. On one extreme, take this code:
@ -18,7 +18,7 @@ What does it do? You can glean only a little from the context: `foo()` is a func
df2 <- arrange(df, qux)
```
It's now much easier to see what's going on! Function and variable names are important because they tell you about (or at least jog your memory of) what the code does. That helps you understand code in isolation, even if you don't completely understand all the details. Unfortunately naming things is hard, and its hard to give concrete advice apart from giving objects short but evocative names. As autocomplete in RStudio has gotten better, I've tended to use longer names that are more descriptive. Short names are faster to type, but you write code relatively infrequently compared to the number of times that you read it.
It's now much easier to see what's going on! Function and variable names are important because they tell you about (or at least jog your memory of) what the code does. That helps you understand code in isolation, even if you don't completely understand all the details. Unfortunately naming things is hard, and it's hard to give concrete advice apart from giving objects short but evocative names. As autocomplete in RStudio has gotten better, I've tended to use longer names that are more descriptive. Short names are faster to type, but you write code relatively infrequently compared to the number of times that you read it.
The idea of minimising the context needed to understand your code goes beyond just good naming. You also want to favour functions with predictable behaviour and few surprises. If a function does radically different things when its inputs differ slightly, you'll need to carefully read the surrounding context in order to predict what it will do. The goal of this section is to educate you about the most common ways R functions can be surprising and to provide you with unsurprising alternatives.
@ -67,7 +67,7 @@ df$x
### Unpredictable types
One of the most frustrating for programming is they way `[` returns a vector if the result has a single column, and returns a data frame otherwise. In other words, if you see code like `df[x, ]` you can't predict what it will return without knowing the value of `x`. This can trip you up in surprising ways. For example, imagine you've written this function to return the last row of a data frame:
One of the most frustrating behaviours for programming is the way `[` returns a vector if the result has a single column, and returns a data frame otherwise. In other words, if you see code like `df[x, ]` you can't predict what it will return without knowing the value of `x`. This can trip you up in surprising ways. For example, imagine you've written this function to return the last row of a data frame:
```{r}
last_row <- function(df) {
@ -116,7 +116,7 @@ df[3:4] %>% sapply(class) %>% str()
In the next chapter, you'll learn about the purrr package, which provides a variety of alternatives. In this case, you could use `map_chr()`, which always returns a character vector: if it can't, it throws an error. Another option is the base `vapply()` function, which takes a third argument indicating what the output should look like.
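A small sketch of the difference (the toy data frame is made up for illustration):
```{r, eval = FALSE}
library(purrr)
df <- data.frame(x = 1:3, y = c("a", "b", "c"), stringsAsFactors = FALSE)

sapply(df, function(col) class(col)[[1]])                 # simplifies when it can
map_chr(df, function(col) class(col)[[1]])                # always a character vector, or an error
vapply(df, function(col) class(col)[[1]], character(1))   # the template spells out the output
```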
This doesn't make `sapply()` bad and `vapply()` and `map_chr()` good. `sapply()` is nice because you can use it interactively without having to think about what `f` will return. 95% of the time it will do the right thing, and if it doesn't you can quickly fix it. `map_chr()` is more important when your programming because a clear error message is more valuable when an operation is buried deep inside a tree of function calls. At this point its worth thinking more about
This doesn't make `sapply()` bad and `vapply()` and `map_chr()` good. `sapply()` is nice because you can use it interactively without having to think about what `f` will return. 95% of the time it will do the right thing, and if it doesn't you can quickly fix it. `map_chr()` is more important when you're programming because a clear error message is more valuable when an operation is buried deep inside a tree of function calls. At this point it's worth thinking more about which of those two situations you're in: exploring interactively, or writing code that other code will depend on.
### Non-standard evaluation
@ -184,7 +184,7 @@ big_x <- function(df, threshold) {
}
```
Because dplyr currently has no way to force a name to be interpreted as either a local or parent variable, as I've only just realised that's really you should avoid NSE. In a future version you should be able to do:
dplyr currently has no way to force a name to be interpreted as either a local or a parent variable; as I've only just realised, that's really why you should avoid NSE. In a future version you should be able to do:
```{r}
big_x <- function(df, threshold) {
@ -212,7 +212,7 @@ Functions are easiest to reason about if they have two properties:
The first property is particularly important. If a function has hidden additional inputs, it's very difficult to even know where the important context is!
The biggest breaker of this rule in base R are functions that create data frames. Most of these functions have a `stringsAsFactors` argument that defaults to `getOption("stringsAsFactors")`. This means that a global option affects the operation of a very large number of functions, and you need to be aware that depending on an external state a function might produce either a character vector or a factor. In this book, we steer you away from that problem by recommnding functions like `readr::read_csv()` and `dplyr::data_frame()` that don't rely on this option. But be aware of it! Generally if a function is affected by a global option, you should avoid setting it.
The biggest breakers of this rule in base R are functions that create data frames. Most of these functions have a `stringsAsFactors` argument that defaults to `getOption("stringsAsFactors")`. This means that a global option affects the operation of a very large number of functions, and you need to be aware that, depending on an external state, a function might produce either a character vector or a factor. In this book, we steer you away from that problem by recommending functions like `readr::read_csv()` and `dplyr::data_frame()` that don't rely on this option. But be aware of it! Generally if a function is affected by a global option, you should avoid setting it.
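A short sketch of the problem (this assumes a version of R in which data-frame functions still honour the option, as they did when this was written):
```{r, eval = FALSE}
old <- options(stringsAsFactors = TRUE)
class(data.frame(x = "a")$x)   # "factor"

options(stringsAsFactors = FALSE)
class(data.frame(x = "a")$x)   # "character"

options(old)                   # restore whatever was set before
```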
Only use `options()` to control side-effects of a function. The value of an option should never affect the return value of a function. There are only three violations of this rule in base R: `stringsAsFactors`, `encoding`, and `na.action`. For example, base R lets you control the number of digits printed in default displays with (e.g.) `options(digits = 3)`. This is a good use of an option because it's something that people frequently want control over, but it doesn't affect the computation of a result, just its display. Follow this principle with your own use of options.
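For instance, a minimal sketch showing that `digits` changes only the printed display, not the stored value:
```{r, eval = FALSE}
pi                        # prints with the default of 7 significant digits
options(digits = 3)
pi                        # now prints as 3.14
pi == 3.141592653589793   # TRUE: the underlying value is unchanged
options(digits = 7)       # restore the default
```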

View File

@ -51,7 +51,7 @@ One of the most useful tools in this quest are the values themselves, the values
The distribution of a variable reveals information about the probabilities associated with the variable. As you collect more data, the proportion of observations that occur at a value (or in an interval) will approach the probability that the variable will take that value (or take a value in that interval) in a future measurement.
In theory, it is easy to visualize the distribution of a variable; simply display how many observations occur at each value of the variable. In practice, how you do this will depend on the type of variable that you wish to visualize.
In theory, it is easy to visualize the distribution of a variable: simply display how many observations occur at each value of the variable. In practice, how you do this will depend on the type of variable that you wish to visualize.
##### Discrete distributions
@ -139,7 +139,7 @@ Several geoms exist to help you visualize continuous distributions. They almost
* `binwidth` - the width to use for the bins in the same units as the x variable
* `origin` - origin of the first bin interval
* `right` - if `TRUE`, bins will be right closed (i.e. points that fall on the border of two bins will be counted with the bin to the left)
* `breaks` - a vector of actual bin breaks to use. If you set the breaks argument, it will overide the binwidth and origin arguments.
* `breaks` - a vector of actual bin breaks to use. If you set the breaks argument, it will override the binwidth and origin arguments.
Use `geom_histogram()` to make a traditional histogram. The height of each bar reveals how many observations fall within the width of the bar.
@ -155,7 +155,7 @@ ggplot(data = diamonds) +
geom_histogram(aes(x = carat), binwidth = 1)
```
Notice how different binwidths reveal different information. The plot above shows that the availability of diamonds decreases quickly as carat size increases. The plot below shows that there are more diamonds than you would expect at whole carat sizes (and common fractions of carat sizes). Moreover, for each popular size, there are more diamonds that are slightly larger than the size than there are that are slightly smaller than the size.
Notice how different binwidths reveal different information. The plot above shows that the availability of diamonds decreases quickly as carat size increases. The plot below shows that there are more diamonds than you would expect at whole carat sizes (and common fractions of carat sizes). Moreover, for each popular size, there are more diamonds slightly larger than the size than diamonds slightly smaller than the size.
```{r}
@ -288,7 +288,7 @@ You've probably heard that "correlation (covariation) does not prove causation."
Visualization is one of the best ways to spot covariation. How you look for covariation will depend on the structural relationship between two variables. The simplest structure occurs when two continuous variables have a functional relationship, where each value of one variable corresponds to a single value of the second variable.
In this scenario, covariation will appear as a pattern in the relationship. If two variables o not covary, their functional relationship will look like a random walk.
In this scenario, covariation will appear as a pattern in the relationship. If two variables do not covary, their functional relationship will look like a random walk.
The variables `date` and `unemploy` in the `economics` data set have a functional relationship. The `economics` data set comes with `ggplot2` and contains various economic indicators for the United States between 1967 and 2007. The `unemploy` variable measures the number of unemployed individuals in the United States in thousands.
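As a sketch, you can see that functional relationship by plotting `unemploy` against `date` (a line geom is a natural choice here, though the original chapter may use a different geom):
```{r, eval = FALSE}
ggplot(data = economics) +
  geom_line(aes(x = date, y = unemploy))
```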
@ -451,7 +451,7 @@ Control the appearance of the labels with the following arguments. You can also
* `hjust` - horizontal adjustment
* `vjust` - vertical adjustment
Scatterplots do not work well with large data sets because individual points will begin to occlude each other. As a result, you cannot tell where the mass of the data lies. Does a black region contain a single layer of points? Or hundreds of points stacked on top of each other.
Scatterplots do not work well with large data sets because individual points will begin to occlude each other. As a result, you cannot tell where the mass of the data lies. Does a black region contain a single layer of points? Or hundreds of points stacked on top of each other?
You can see this type of plotting in the `diamonds` data set. The data set only contains 53,940 points, but the points overplot each other in a way that we cannot fix with jittering.
@ -462,7 +462,7 @@ ggplot(data = diamonds) +
For large data, it is more useful to plot summary information that describes the raw data than it is to plot the raw data itself. Several geoms can help you do this.
The simplest way to summarize covariance between two variables is with a model line. The model line displays the trend of the relationship between the variables.
The simplest way to summarise covariance between two variables is with a model line. The model line displays the trend of the relationship between the variables.
Use `geom_smooth()` to display a model line between any two variables. As with `geom_rug()`, `geom_smooth()` works well as a second layer for a plot (See Section 3 for details).
@ -476,7 +476,7 @@ ggplot(data = diamonds) +
`geom_smooth()` will also plot a standard error band around the model line. You can remove the standard error band by setting the `se` argument of `geom_smooth()` to `FALSE`.
Use the `model` argument of `geom_smooth()` to adda specific type of model line to your data. `model` takes the name of an R modeling function. `geom_smooth()` will use the function to calculate the model line. For example, the code below uses R's `lm()` function to fit a linear model line to the data.
Use the `method` argument of `geom_smooth()` to add a specific type of model line to your data. `method` takes the name of an R modeling function. `geom_smooth()` will use the function to calculate the model line. For example, the code below uses R's `lm()` function to fit a linear model line to the data.
```{r}
ggplot(data = diamonds) +
@ -511,7 +511,7 @@ Useful arguments for `geom_smooth()` are:
* `level` - Confidence level to use for standard error ribbon
* `method` - Smoothing function to use, a model function in R
* `n` - The number of points at which to evaluate the smoother (defaults to 80)
* `se` - If TRUE` (the default), `geom_smooth()` will include a standard error ribbon
* `se` - If `TRUE` (the default), `geom_smooth()` will include a standard error ribbon
Be careful: `geom_smooth()` will overlay a trend line on every data set, even if the underlying data is uncorrelated. You can avoid being fooled by also inspecting the raw data or by calculating the correlation between your variables, e.g. `cor(diamonds$carat, diamonds$price)`.
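A quick sketch of the trap (the uncorrelated data is simulated purely for illustration):
```{r, eval = FALSE}
noise <- data.frame(x = runif(100), y = runif(100))

ggplot(data = noise) +
  geom_point(aes(x = x, y = y)) +
  geom_smooth(aes(x = x, y = y))   # a trend line appears even though x and y are unrelated

cor(noise$x, noise$y)              # close to zero
```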
@ -540,7 +540,7 @@ Useful arguments for `geom_quantile()` are:
* `formula` - the formula to use in the smoothing function
* `quantiles` - Conditional quantiles of $y$ to display. Each quantile is displayed with a line.
`geom_smooth()` and `geom_quantile()` summarize the relationship between two variables as a function, but you can also summarize the relationship as a bivariate distribution.
`geom_smooth()` and `geom_quantile()` summarise the relationship between two variables as a function, but you can also summarise the relationship as a bivariate distribution.
`geom_bin2d()` divides the coordinate plane into a two dimensional grid and then displays the number of observations that fall into each bin in the grid. This technique lets you see where the mass of the data lies; bins with a light fill color contain more data than bins with a dark fill color. Bins with no fill contain no data at all.
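For example, a hedged sketch with the overplotted `diamonds` variables discussed above:
```{r, eval = FALSE}
ggplot(data = diamonds) +
  geom_bin2d(aes(x = carat, y = price))
```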
@ -565,7 +565,7 @@ Useful arguments for `geom_bin2d()` are:
* `binwidth` - A vector like `c(0.1, 500)` that gives the binwidths to use in the horizontal and vertical directions. Overrides `bins` when set.
* `drop` - If `TRUE` (default) `geom_bin2d()` removes the fill from all bins that contain zero observations.
`geom_hex()` works similarly to `geom_bin2d()`, but it divides the coordinate plain into hexagon shaped bins. This can reduce visual artifacts that are introduced by the aligning edges of rectangular bins.
`geom_hex()` works similarly to `geom_bin2d()`, but it divides the coordinate plane into hexagon-shaped bins. This can reduce visual artifacts that are introduced by the aligned edges of rectangular bins.
```{r}
ggplot(data = diamonds) +
@ -574,7 +574,7 @@ ggplot(data = diamonds) +
`geom_hex()` requires the `hexbin` package, which you can install with `install.packages("hexbin")`.
`geom_density2d()` uses density contours to display similar information. It is the two dimensional equivalent of `geom_density()`. Interpret a two dimensional density plot the same way you would interpret a contour map. Each line connects an area of equal density, which makes changes of slope easy to see.
`geom_density2d()` uses density contours to display similar information. It is the two dimensional equivalent of `geom_density()`. Interpret a two dimensional density plot the same way you would interpret a contour map. Each line connects points of equal density, which makes changes of slope easy to see.
As with other summary geoms, `geom_density2d()` makes a useful second layer.
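For example, a small sketch using the built-in `faithful` data (chosen here only because it is small and clearly bimodal; the original chapter may use a different data set):
```{r, eval = FALSE}
ggplot(data = faithful, aes(x = eruptions, y = waiting)) +
  geom_point() +
  geom_density2d()
```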
@ -600,7 +600,7 @@ Useful arguments for `geom_density2d()` are:
##### Visualize correlations between three variables
There are two ways to add three (or more) variables to a two dimensional plot. You can map additional variables to aesthics within the plot, or you can use a geom that is designed to visualize three variables.
There are two ways to add three (or more) variables to a two dimensional plot. You can map additional variables to aesthetics within the plot, or you can use a geom that is designed to visualize three variables.
`ggplot2` provides three geoms that are designed to display three variables: `geom_raster()`, `geom_tile()` and `geom_contour()`. These geoms generalize `geom_bin2d()` and `geom_density()` to display a third variable instead of a count or a density.
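As a hedged sketch, the `faithfuld` data set that ships with `ggplot2` already contains a third variable, `density`, computed on a grid, which is the kind of input these geoms expect:
```{r, eval = FALSE}
ggplot(data = faithfuld, aes(x = waiting, y = eruptions)) +
  geom_raster(aes(fill = density))

ggplot(data = faithfuld, aes(x = waiting, y = eruptions)) +
  geom_contour(aes(z = density))
```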