Merge branch 'master' of github.com:hadley/r4ds

hadley 2016-02-11 08:11:13 -06:00
commit b543ddc9f6
5 changed files with 173 additions and 173 deletions


@@ -45,19 +45,19 @@ There are many ways to read flat files into R. If you've been using R for a while,
sometimes need to supply a few more arguments when using them the first
time, but they'll definitely work on other people's computers. The base R
functions take a number of settings from system defaults, which means that
code that works on your computer might not work on someone else's.
Make sure you have the readr package (`install.packages("readr")`).
Most of readr's functions are concerned with turning flat files into data frames:
* `read_csv()` reads comma delimited files, `read_csv2()` reads semi-colon
  separated files (common in countries where `,` is used as the decimal place),
  `read_tsv()` reads tab delimited files, and `read_delim()` reads in files
  with a user supplied delimiter.
* `read_fwf()` reads fixed width files. You can specify fields either by their
  widths with `fwf_widths()` or their position with `fwf_positions()`.
  `read_table()` reads a common variation of fixed width files where columns
  are separated by white space.
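A hedged sketch of these functions in use (the file names are hypothetical):

```{r, eval = FALSE}
read_csv("mydata.csv")                 # comma delimited
read_csv2("mydata-eu.csv")             # semi-colon delimited
read_tsv("mydata.tsv")                 # tab delimited
read_delim("mydata.txt", delim = "|")  # user supplied delimiter
```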
@@ -73,7 +73,7 @@ readr also provides a number of functions for reading files off disk into simple
These might be useful for other programming tasks.
As well as reading data from disk, readr also provides tools for working with data frames and character vectors in R:
* `type_convert()` applies the same parsing heuristics to the character columns
  in a data frame. You can override its choices using `col_types`.
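A minimal sketch of `type_convert()`, assuming a made-up data frame:

```{r}
df <- data.frame(x = c("1", "2"), y = c("1.5", "2.3"), stringsAsFactors = FALSE)
type_convert(df)  # x and y are re-parsed as numbers
```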
@@ -94,7 +94,7 @@ The first two arguments of `read_csv()` are:
* `TRUE` (the default), which reads column names from the first row
  of the file.
* `FALSE`, which numbers columns sequentially from `X1` to `Xn`.
* A character vector, used as column names. If these don't match up
  with the columns in the data, you'll get a warning message.
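A quick sketch of each option, using inline strings as stand-ins for files:

```{r}
read_csv("x,y\n1,2", col_names = TRUE)         # first row supplies the names
read_csv("1,2\n3,4", col_names = FALSE)        # columns become X1 and X2
read_csv("1,2\n3,4", col_names = c("a", "b"))  # supply your own names
```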
@@ -109,7 +109,7 @@ EXAMPLE
Typically, you'll see a lot of warnings if readr has guessed the column type incorrectly. This most often occurs when the first 1000 rows are different to the rest of the data. Perhaps there is a lot of missing data there, or maybe your data is mostly numeric but a few rows have characters. Fortunately, it's easy to fix these problems using the `col_types` argument.
(Note that if you have a very large file, you might want to set `n_max` to 10,000 or 100,000. That will speed up iteration while you're finding common problems.)
Specifying `col_types` looks like this:
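(A hedged sketch; the path and column names are hypothetical:)

```{r, eval = FALSE}
read_csv("mypath.csv", col_types = cols(
  x = col_integer(),
  y = col_character()
))
```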
@@ -128,7 +128,7 @@ You can use the following types of columns
* `col_number()` (n) is a more flexible parser for numbers embedded in other
  strings. It will look for the first number in a string, ignoring non-numeric
  prefixes and suffixes. It will also ignore the grouping mark specified by
  the locale (see below for more details).
* `col_factor()` (f) allows you to load data directly into a factor if you know
@@ -139,7 +139,7 @@ You can use the following types of columns
* `col_date()` (D), `col_datetime()` (T) and `col_time()` (t) parse into dates,
  date times, and times as described below.
You might have noticed that each column parser has a one letter abbreviation, which you can use instead of the full function call (assuming you're happy with the default arguments):
```{r, eval = FALSE}
read_csv("mypath.csv", col_types = cols(
@@ -203,7 +203,7 @@ If these defaults don't work for your data you can supply your own date time for
* AM/PM indicator: `%p`.
* Non-digits: `%.` skips one non-digit character, `%*` skips any number of
  non-digits.
The best way to figure out the correct string is to create a few examples in a character vector, and test with one of the parsing functions. For example:
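(A hedged sketch of that workflow:)

```{r}
parse_datetime("2010-10-01 21:45", "%Y-%m-%d %H:%M")
```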
@@ -360,11 +360,11 @@ There are three key differences between tbl_dfs and data.frames:
You can control the default appearance with options:
* `options(dplyr.print_max = n, dplyr.print_min = m)`: if more than `n`
  rows, print only `m` rows. Use `options(dplyr.print_max = Inf)` to always
  show all rows.
* `options(dplyr.width = Inf)` will always print all columns, regardless
  of the width of the screen.
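A hedged sketch of those options together:

```{r, eval = FALSE}
options(dplyr.print_max = 100, dplyr.print_min = 10)  # over 100 rows? print 10
options(dplyr.width = Inf)                            # never truncate columns
```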


@@ -18,7 +18,7 @@ It's rare that a data analysis involves only a single table of data. Typically y
Relations are always defined between a pair of tables. All other relations are built up from this simple idea: the relations of three or more tables are always a property of the relations between each pair; sometimes both elements of a pair can be the same table.
To work with relational data you need verbs that work with pairs of tables. There are three families of verbs designed to work with relational data:
* __Mutating joins__, which add new variables to one data frame from matching
  rows in another.
@@ -28,11 +28,11 @@ To work with relational data you need verbs that work with pairs of tables. Ther
* __Set operations__, which treat observations like they were set elements.
The most common place to find relational data is in a _relational_ database management system, a term that encompasses almost all modern databases. If you've used a database before, you've almost certainly used SQL. If so, you should find the concepts in this chapter familiar, although their expression in dplyr is a little different. Generally, dplyr is a little easier to use than SQL because it's specialised to data analysis: it makes common data analysis operations easier, at the expense of making it difficult to do other things.
## nycflights13 {#nycflights13-relational}
You'll learn about relational data with other datasets from the nycflights13 package. As well as the `flights` table that you've worked with so far, nycflights13 contains four other related data frames:
* `airlines` lets you look up the full carrier name from its abbreviated
  code:
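  (A sketch of what that lookup table holds:)

  ```{r}
  library(nycflights13)
  airlines
  ```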
@@ -112,7 +112,7 @@ There are two types of keys:
  each plane.
* A __foreign key__ uniquely identifies an observation in another table.
  For example, `flights$tailnum` is a foreign key because it matches each
  flight to a unique plane.
A variable can be both part of a primary key _and_ a foreign key. For example, `origin` is part of the `weather` primary key, and is also a foreign key for the `airports` table.
@@ -124,16 +124,16 @@ planes %>% count(tailnum) %>% filter(n > 1)
weather %>% count(year, month, day, hour, origin) %>% filter(n > 1)
```
Sometimes a table doesn't have an explicit primary key: each row is an observation, but no combination of variables reliably identifies it. For example, what's the primary key in the `flights` table? You might think it would be the date plus the flight or tail number, but neither of those are unique:
```{r}
flights %>% count(year, month, day, flight) %>% filter(n > 1)
flights %>% count(year, month, day, tailnum) %>% filter(n > 1)
```
When starting to work with this data, I had naively assumed that each flight number would only be used once per day: that would make it much easier to communicate problems with a specific flight. Unfortunately that is not the case! If a table lacks a primary key, it's sometimes useful to add one with `row_number()`. That makes it easier to match observations if you've done some filtering and want to check back in with the original data. This is called a surrogate key.
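A minimal sketch of adding such a surrogate key (`flight_id` is a hypothetical name):

```{r, eval = FALSE}
flights %>% mutate(flight_id = row_number())
```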
A primary key and the corresponding foreign key in another table form a __relation__. Relations are typically one-to-many. For example, each flight has one plane, but each plane has many flights. In other data, you'll occasionally see a 1-to-1 relationship. You can think of this as a special case of 1-to-many. It's possible to model many-to-many relations with a many-to-1 relation plus a 1-to-many relation. For example, in this data there's a many-to-many relationship between airlines and airports: each airline flies to many airports; each airport hosts many airlines.
### Exercises
@@ -243,7 +243,7 @@ Graphically, that looks like:
knitr::include_graphics("diagrams/join-outer.png")
```
The most commonly used join is the left join: you use this whenever you look up additional data from another table, because it preserves the original observations even when there isn't a match. The left join should be your default join: use it unless you have a strong reason to prefer one of the others.
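For example, a hedged sketch of that default pattern with the nycflights13 tables:

```{r, eval = FALSE}
flights %>% left_join(airlines, by = "carrier")
```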
Another way to depict the different types of joins is with a Venn diagram:
@@ -352,7 +352,7 @@ So far, the pairs of tables have always been joined by a single variable, and th
1. What weather conditions make it more likely to see a delay?
1. What happened on June 13, 2013? Display the spatial pattern of delays,
   and then use Google to cross-reference with the weather.
```{r, eval = FALSE, include = FALSE}
worst <- filter(not_cancelled, month == 6, day == 13)
@@ -385,17 +385,17 @@ SQL is the inspiration for dplyr's conventions, so the translation is straightfo
dplyr                        | SQL
-----------------------------|-------------------------------------------
`inner_join(x, y, by = "z")` | `SELECT * FROM x INNER JOIN y USING (z)`
`left_join(x, y, by = "z")`  | `SELECT * FROM x LEFT OUTER JOIN y USING (z)`
`right_join(x, y, by = "z")` | `SELECT * FROM x RIGHT OUTER JOIN y USING (z)`
`full_join(x, y, by = "z")`  | `SELECT * FROM x FULL OUTER JOIN y USING (z)`
Note that "INNER" and "OUTER" are optional, and often omitted.
Joining different variables between the tables, e.g. `inner_join(x, y, by = c("a" = "b"))`, uses a slightly different syntax in SQL: `SELECT * FROM x INNER JOIN y ON x.a = y.b`. As this syntax suggests, SQL supports a wider range of join types than dplyr because you can connect the tables using constraints other than equality (sometimes called non-equijoins).
## Filtering joins {#filtering-joins}
Filtering joins match observations in the same way as mutating joins, but affect the observations, not the variables. There are two types:
* `semi_join(x, y)` __keeps__ all observations in `x` that have a match in `y`.
* `anti_join(x, y)` __drops__ all observations in `x` that have a match in `y`.
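For instance, a hedged sketch of a common use of `semi_join()`: keep only the flights that go to the ten most popular destinations.

```{r, eval = FALSE}
top_dest <- flights %>% count(dest, sort = TRUE) %>% head(10)
flights %>% semi_join(top_dest, by = "dest")
```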
@@ -494,7 +494,7 @@ Be aware that simply checking the number of rows before and after the join is no
## Set operations {#set-operations}
The final type of two-table verb is set operations. Generally, I use these the least frequently, but they are occasionally useful when you want to break a single complex filter into simpler pieces that you then combine.
All these operations work with a complete row, comparing the values of every variable. These expect the `x` and `y` inputs to have the same variables, and treat the observations like sets:
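A small sketch with made-up data frames:

```{r}
df1 <- data_frame(x = 1:2, y = c(1L, 1L))
df2 <- data_frame(x = c(1L, 1L), y = 1:2)
intersect(df1, df2)  # rows that appear in both
union(df1, df2)      # unique rows from either
setdiff(df1, df2)    # rows in df1 but not in df2
```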


@@ -29,7 +29,7 @@ The idea of minimising the context needed to understand your code goes beyond ju
There are three common classes of surprises in R:
1. Unstable types: What will `df[, x]` return? You can assume that `df`
   is a data frame and `x` is a vector because of their names. But you don't
   know whether this code will return a data frame or a vector because the
   behaviour of `[` depends on the length of `x`.
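A hedged illustration of the surprise:

```{r}
df <- data.frame(a = 1:3, b = 4:6)
df[, c("a", "b")]        # two columns: returns a data frame
df[, "a"]                # one column: silently drops to a vector
df[, "a", drop = FALSE]  # drop = FALSE keeps the data frame
```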


@@ -87,7 +87,7 @@ knitr::include_graphics("images/tidy-2.png")
*A data frame is a list of vectors that R displays as a table. When your data is tidy, the values of each variable fall in their own column vector.*
As a result, you can extract all the values of a variable in a tidy data set by extracting the column vector that contains the variable. You can do this easily with R's list syntax, i.e.
```{r}
table1$cases
@@ -247,7 +247,7 @@ Every cell in a table of data contains one half of a key value pair, as does eve
table2
```
In `table2`, the `key` column contains only keys (and not just because the column is labeled `key`). Conveniently, the `value` column contains the values associated with those keys.
You can use the `spread()` function to tidy this layout.
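A minimal sketch of that call:

```{r}
spread(table2, key, value)
```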
@@ -269,7 +269,7 @@ knitr::include_graphics("images/tidy-8.png")
*`spread()` distributes a pair of key:value columns into a field of cells. The unique keys in the key column become the column names of the field of cells.*
You can see that `spread()` maintains each of the relationships expressed in the original data set. The output contains the four original variables, *country*, *year*, *population*, and *cases*, and the values of these variables are grouped according to the original observations. As a bonus, now the layout of these relationships is tidy.
`spread()` takes three optional arguments in addition to `data`, `key`, and `value`:
@@ -367,7 +367,7 @@ You can also pass an integer or vector of integers to `sep`. `separate()` will i
separate(table3, year, into = c("century", "year"), sep = 2)
```
You can further customize `separate()` with the `remove`, `convert`, and `extra` arguments:
- **`remove`** - Set `remove = FALSE` to retain the column of values that were separated in the final data frame.
- **`convert`** - By default, `separate()` will return new columns as character columns. Set `convert = TRUE` to convert new columns to double (numeric), integer, logical, complex, and factor columns with `type.convert()`.
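For instance, a hedged variation on the `table3` example above:

```{r}
separate(table3, year, into = c("century", "year"), sep = 2, convert = TRUE)
```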


@@ -236,7 +236,7 @@ filter(flights, !(arr_delay > 120 | dep_delay > 120))
filter(flights, arr_delay <= 120, dep_delay <= 120)
```
Note that R has both `&` and `|` and `&&` and `||`. `&` and `|` are vectorised: you give them two vectors of logical values and they return a vector of logical values. `&&` and `||` are scalar operators: you give them individual `TRUE`s or `FALSE`s. They're used in `if` statements when programming. You'll learn about that later on.
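A quick illustration of the difference:

```{r}
c(TRUE, FALSE) & c(TRUE, TRUE)  # vectorised: TRUE FALSE
TRUE && FALSE                   # scalar: a single FALSE
```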
Sometimes you want to find all rows after the first `TRUE`, or all rows until the first `FALSE`. The cumulative functions `cumany()` and `cumall()` allow you to find these values:
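(A hedged sketch of their behaviour with a made-up data frame:)

```{r}
df <- data.frame(x = c(1, 5, 2, 8))
filter(df, cumany(x > 4))  # rows 2-4: everything after the first TRUE
filter(df, cumall(x < 6))  # rows 1-3: everything until the first FALSE
```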
@@ -535,7 +535,7 @@ ggplot(flights, aes(dep_sched %% 60)) + geom_histogram(binwidth = 1)
ggplot(flights, aes(air_time - airtime2)) + geom_histogram()
```
1. Currently `dep_time` and `arr_time` are convenient to look at, but
   hard to compute with because they're not really continuous numbers.
   Convert them to a more convenient representation of number of minutes
   since midnight.
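One hedged sketch of an answer, assuming (as the chapter does elsewhere) that these columns store times as HMM/HHMM integers:

```{r, eval = FALSE}
mutate(flights,
  dep_time_mins = (dep_time %/% 100) * 60 + dep_time %% 100,
  arr_time_mins = (arr_time %/% 100) * 60 + arr_time %% 100
)
```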