Merge branch 'master' of github.com:hadley/r4ds

hadley 2016-02-11 08:11:13 -06:00
commit b543ddc9f6
5 changed files with 173 additions and 173 deletions


@ -27,14 +27,14 @@ There are many ways to read flat files into R. If you've be using R for a while,
* These functions are typically much faster (~10x) than the base equivalents.
Long running jobs also have a progress bar, so you can see what's
happening. (If you're looking for raw speed, try `data.table::fread()`,
it's slightly less flexible than readr, but can be twice as fast.)
* They have more flexible parsers: they can read in dates, times, currencies,
percentages, and more.
* They don't do some annoying things like converting character vectors to
factors, munging the column headers to make sure they're valid R
variable names, and using row names.
* They return objects with class `tbl_df`. As you saw in the dplyr chapter,
@ -45,24 +45,24 @@ There are many ways to read flat files into R. If you've be using R for a while,
sometimes need to supply a few more arguments when using them the first
time, but they'll definitely work on other people's computers. The base R
functions take a number of settings from system defaults, which means that
code that works on your computer might not work on someone else's.
Make sure you have the readr package installed (`install.packages("readr")`).
Most of readr's functions are concerned with turning flat files into data frames:
* `read_csv()` reads comma delimited files, `read_csv2()` reads semi-colon
separated files (common in countries where `,` is used as the decimal place),
`read_tsv()` reads tab delimited files, and `read_delim()` reads in files
with a user supplied delimiter.
* `read_fwf()` reads fixed width files. You can specify fields either by their
widths with `fwf_widths()` or their position with `fwf_positions()`.
`read_table()` reads a common variation of fixed width files where columns
are separated by white space.
* `read_log()` reads Apache style logs. (But also check out
[webreadr](https://github.com/Ironholds/webreadr) which is built on top
of `read_log()`, but provides many more helpful tools.)
readr also provides a number of functions for reading files off disk into simpler data structures:
@ -73,29 +73,29 @@ readr also provides a number of functions for reading files off disk into simple
These might be useful for other programming tasks.
As well as reading data from disk, readr also provides tools for working with data frames and character vectors in R:
* `type_convert()` applies the same parsing heuristics to the character columns
in a data frame. You can override its choices using `col_types`.
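For example, here's a minimal sketch (the data frame and its values are made up for illustration, and it assumes dplyr is loaded for `data_frame()`):

```{r, eval = FALSE}
df <- data_frame(x = c("1", "2", "3"), y = c("a", "b", "c"))
type_convert(df)  # x is re-parsed as a number; y stays character
```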
For the rest of this chapter we'll focus on `read_csv()`. If you understand how to use this function, it will be straightforward to apply your knowledge to all the other functions in readr.
### Basics
The first two arguments of `read_csv()` are:
* `file`: path (or URL) to the file you want to load. Readr can automatically
decompress files ending in `.zip`, `.gz`, `.bz2`, and `.xz`. This can also
be a literal csv string, which is useful for experimenting and creating
reproducible examples.
* `col_names`: column names. There are three options:
* `TRUE` (the default), which reads column names from the first row
of the file
* `FALSE` numbers columns sequentially from `X1` to `Xn`.
* A character vector, used as column names. If these don't match up
with the columns in the data, you'll get a warning message.
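For example, here's a quick sketch using inline csv strings (the values are made up):

```{r, eval = FALSE}
read_csv("x,y\n1,2\n3,4")                               # first row becomes the column names
read_csv("1,2\n3,4", col_names = FALSE)                 # columns named X1, X2
read_csv("1,2\n3,4", col_names = c("first", "second"))  # supply your own names
```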
@ -109,7 +109,7 @@ EXAMPLE
Typically, you'll see a lot of warnings if readr has guessed the column type incorrectly. This most often occurs when the first 1000 rows are different to the rest of the data. Perhaps there is a lot of missing data there, or maybe your data is mostly numeric but a few rows have characters. Fortunately, it's easy to fix these problems using the `col_types` argument.
(Note that if you have a very large file, you might want to set `n_max` to 10,000 or 100,000. That will speed up iteration while you're finding common problems.)
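For example (a sketch; `"mypath.csv"` is a placeholder path):

```{r, eval = FALSE}
read_csv("mypath.csv", n_max = 10000)  # only read the first 10,000 rows while debugging
```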
Specifying `col_types` looks like this:
@ -122,24 +122,24 @@ read_csv("mypath.csv", col_types = col(
You can use the following types of columns:
* `col_integer()` (i) and `col_double()` (d) specify integers and doubles.
`col_logical()` (l) parses TRUE, T, FALSE and F into a logical vector.
`col_character()` (c) leaves strings as is.
* `col_number()` (n) is a more flexible parser for numbers embedded in other
strings. It will look for the first number in a string, ignoring non-numeric
prefixes and suffixes. It will also ignore the grouping mark specified by
the locale (see below for more details).
* `col_factor()` (f) allows you to load data directly into a factor if you know
what the levels are.
* `col_skip()` (_, -) completely ignores a column.
* `col_date()` (D), `col_datetime()` (T) and `col_time()` (t) parse into dates,
date times, and times as described below.
You might have noticed that each column parser has a one letter abbreviation, which you can use instead of the full function call (assuming you're happy with the default arguments):
```{r, eval = FALSE}
read_csv("mypath.csv", col_types = cols(
@ -196,14 +196,14 @@ If these defaults don't work for your data you can supply your own date time for
* Seconds: `%S` (integer seconds), `%OS` (partial seconds).
* Time zone: `%Z` (as name, e.g. `America/Chicago`), `%z` (as offset from UTC,
e.g. `+0800`). If you're American, note that "EST" is a Canadian time zone
that does not have daylight saving time. It is *not* Eastern Standard
Time!
* AM/PM indicator: `%p`.
* Non-digits: `%.` skips one non-digit character, `%*` skips any number of
non-digits.
The best way to figure out the correct string is to create a few examples in a character vector, and test with one of the parsing functions. For example:
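Here's a minimal sketch (the strings and formats are made up for illustration):

```{r, eval = FALSE}
parse_date(c("01/02/15", "02/01/15"), "%m/%d/%y")
parse_datetime("2010-10-01 21:45", "%Y-%m-%d %H:%M")
```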
@ -236,11 +236,11 @@ The settings you are most like to need to change are:
locale("fr")
locale("fr", asciify = TRUE)
```
* The character encoding used in the file. If you don't know the encoding
you can use `guess_encoding()`. It's not perfect, but if you have a decent
sample of text, it's likely to be able to figure it out.
Readr converts all strings into UTF-8 as this is safest to work with across
platforms. (It's also what every stringr operation does.)
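Here's a sketch of `guess_encoding()`, assuming the interface that accepts a path or a raw vector:

```{r, eval = FALSE}
guess_encoding(charToRaw("El Niño"))  # returns candidate encodings with a confidence score
```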
@ -264,61 +264,61 @@ Needs to discuss how data types in different languages are converted to R. Simil
`data_frame()` is a nice way to create data frames. It encapsulates best practices for data frames:
* It never changes an input's type (i.e., no more `stringsAsFactors = FALSE`!).
```{r}
data_frame(x = letters) %>% sapply(class)
```
This makes it easier to use with list-columns:
```{r}
data_frame(x = 1:3, y = list(1:5, 1:10, 1:20))
```
List-columns are most commonly created by `do()`, but they can be useful to
create by hand.
* It never adjusts the names of variables:
```{r}
data_frame(`crazy name` = 1) %>% names()
```
* It evaluates its arguments lazily and sequentially:
```{r}
data_frame(x = 1:5, y = x ^ 2)
```
* It adds the `tbl_df()` class to the output so that if you accidentally print a large
data frame you only get the first few rows.
```{r}
data_frame(x = 1:5) %>% class()
```
* It changes the behaviour of `[` to always return the same type of object:
subsetting using `[` always returns a `tbl_df()` object; subsetting using
`[[` always returns a column.
You should be aware of one case where subsetting a `tbl_df()` object
will produce a different result than a `data.frame()` object:
```{r}
df <- data.frame(a = 1:2, b = 1:2)
str(df[, "a"])
tbldf <- tbl_df(df)
str(tbldf[, "a"])
```
* It never uses `row.names()`. The whole point of tidy data is to
store variables in a consistent way. So it never stores a variable as a
special attribute.
* It only recycles vectors of length 1. This is because recycling vectors of greater lengths
is a frequent source of bugs.
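A quick sketch of the recycling rule:

```{r, error = TRUE}
data_frame(x = 1:4, y = 1)    # length 1 is fine: y is recycled to length 4
data_frame(x = 1:4, y = 1:2)  # anything longer than 1 errors rather than recycling
```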
### Coercion
@ -326,13 +326,13 @@ Needs to discuss how data types in different languages are converted to R. Simil
To complement `data_frame()`, dplyr provides `as_data_frame()` to coerce lists into data frames. It does two things:
* It checks that the input list is valid for a data frame, i.e. that each element
is named, is a 1d atomic vector or list, and all elements have the same
length.
* It sets the class and attributes of the list to make it behave like a data frame.
This modification does not require a deep copy of the input list, so it's
very fast.
This is much simpler than `as.data.frame()`. It's hard to explain precisely what `as.data.frame()` does, but it's similar to `do.call(cbind, lapply(x, data.frame))` - i.e. it coerces each component to a data frame and then `cbind()`s them all together. Consequently `as_data_frame()` is much faster than `as.data.frame()`:
```{r}
@ -353,49 +353,49 @@ There are three key differences between tbl_dfs and data.frames:
* When you print a tbl_df, it only shows the first ten rows and all the
columns that fit on one screen. It also prints an abbreviated description
of the column type:
```{r}
data_frame(x = 1:1000)
```
You can control the default appearance with options:
* `options(dplyr.print_max = n, dplyr.print_min = m)`: if more than `n`
rows, print only `m` rows. Use `options(dplyr.print_max = Inf)` to always
show all rows.
* `options(dplyr.width = Inf)` will always print all columns, regardless
of the width of the screen.
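For example, a sketch of those settings:

```{r, eval = FALSE}
options(dplyr.print_max = 20, dplyr.print_min = 5)  # if more than 20 rows, print 5
options(dplyr.width = Inf)                          # always print every column
```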
* When you subset a tbl\_df with `[`, it always returns another tbl\_df.
Contrast this with a data frame: sometimes `[` returns a data frame and
sometimes it just returns a single column:
```{r}
df1 <- data.frame(x = 1:3, y = 3:1)
class(df1[, 1:2])
class(df1[, 1])
df2 <- data_frame(x = 1:3, y = 3:1)
class(df2[, 1:2])
class(df2[, 1])
```
To extract a single column use `[[` or `$`:
```{r}
class(df2[[1]])
class(df2$x)
```
* When you extract a variable with `$`, tbl\_dfs never do partial
matching. They'll throw an error if the column doesn't exist:
```{r, error = TRUE}
df <- data.frame(abc = 1)
df$a
df2 <- data_frame(abc = 1)
df2$a
```


@ -16,34 +16,34 @@ knitr::opts_chunk$set(fig.path = "figures/", cache = TRUE)
It's rare that a data analysis involves only a single table of data. Typically you have many tables of data, and you must combine them to answer the questions that you're interested in. Collectively, multiple tables of data are called __relational data__ because it is the relations, not just the individual datasets, that are particularly important.
Relations are always defined between a pair of tables. All other relations are built up from this simple idea: the relations of three or more tables are always a property of the relations between each pair; sometimes both elements of a pair can be the same table.
To work with relational data you need verbs that work with pairs of tables. There are three families of verbs designed to work with relational data:
* __Mutating joins__, which add new variables to one data frame from matching
rows in another.
* __Filtering joins__, which filter observations from one data frame based on
whether or not they match an observation in the other table.
* __Set operations__, which treat observations like they were set elements.
The most common place to find relational data is in a _relational_ database management system, a term that encompasses almost all modern databases. If you've used a database before, you've almost certainly used SQL. If so, you should find the concepts in this chapter familiar, although their expression in dplyr is a little different. Generally, dplyr is a little easier to use than SQL because it's specialised to data analysis: it makes common data analysis operations easier, at the expense of making it difficult to do other things.
## nycflights13 {#nycflights13-relational}
You'll learn about relational data with other datasets from the nycflights13 package. As well as the `flights` table that you've worked with so far, nycflights13 contains four other related data frames:
* `airlines` lets you look up the full carrier name from its abbreviated
code:
```{r}
airlines
```
* `airports` gives information about each airport, identified by the `faa`
airport code:
```{r}
airports
```
@ -53,7 +53,7 @@ You'll learn about relational data with other datasets from the nycflights13 pac
```{r}
planes
```
* `weather` gives the weather at each NYC airport for each hour:
```{r}
@ -66,16 +66,16 @@ One way to show the relationships between the different tables is with a drawing
knitr::include_graphics("diagrams/relational-nycflights.png")
```
This diagram is a little overwhelming, and even so it's simple compared to some you'll see in the wild! The key to understanding diagrams like this is to remember each relation always concerns a pair of tables. You don't need to understand the whole thing; you just need to understand the chain of relations between the tables that you are interested in.
For nycflights13:
* `flights` connects to `planes` via a single variable, `tailnum`. `flights`
connects to `airlines` with the `carrier` variable.
* `flights` connects to `airports` in two ways: via the `origin` or the
`dest`.
* `flights` connects to `weather` via `origin` (the location), and
`year`, `month`, `day` and `hour` (the time).
@ -87,7 +87,7 @@ For nycflights13:
1. I forgot to draw the relationship between `weather` and `airports`.
What is the relationship and what should it look like in the diagram?
1. `weather` only contains information for the origin (NYC) airports. If
it contained weather records for all airports in the USA, what additional
relation would it define with `flights`?
@ -97,7 +97,7 @@ For nycflights13:
or reject this hypothesis using data.
1. We know that some days of the year are "special", and fewer people than
usual fly on them. How might you represent that data as a data frame?
What would be the primary keys of that table? How would it connect to the
existing tables?
@ -108,11 +108,11 @@ The variables used to connect each pair of tables are called __keys__. A key is
There are two types of keys:
* A __primary key__ uniquely identifies an observation in its own table.
For example, `planes$tailnum` is a primary key because it uniquely identifies
each plane.
* A __foreign key__ uniquely identifies an observation in another table.
For example, the `flights$tailnum` is a foreign key because it matches each
flight to a unique plane.
A variable can be both part of a primary key _and_ a foreign key. For example, `origin` is part of the `weather` primary key, and is also a foreign key for the `airports` table.
@ -124,36 +124,36 @@ planes %>% count(tailnum) %>% filter(n > 1)
weather %>% count(year, month, day, hour, origin) %>% filter(n > 1)
```
Sometimes a table doesn't have an explicit primary key: each row is an observation, but no combination of variables reliably identifies it. For example, what's the primary key in the `flights` table? You might think it would be the date plus the flight or tail number, but neither of those are unique:
```{r}
flights %>% count(year, month, day, flight) %>% filter(n > 1)
flights %>% count(year, month, day, tailnum) %>% filter(n > 1)
```
When starting to work with this data, I had naively assumed that each flight number would be only used once per day: that would make it much easier to communicate problems with a specific flight. Unfortunately that is not the case! If a table lacks a primary key, it's sometimes useful to add one with `row_number()`. That makes it easier to match observations if you've done some filtering and want to check back in with the original data. This is called a surrogate key.
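For example, a sketch (the column name `flight_id` is arbitrary):

```{r, eval = FALSE}
flights %>% mutate(flight_id = row_number())
```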
A primary key and the corresponding foreign key in another table form a __relation__. Relations are typically one-to-many. For example, each flight has one plane, but each plane has many flights. In other data, you'll occasionally see a 1-to-1 relationship. You can think of this as a special case of 1-to-many. It's possible to model many-to-many relations with a many-to-1 relation plus a 1-to-many relation. For example, in this data there's a many-to-many relationship between airlines and airports: each airline flies to many airports; each airport hosts many airlines.
### Exercises
1. Identify the keys in the following datasets
1. `Lahman::Batting`
1. `babynames::babynames`
1. `nasaweather::atmos`
1. `fueleconomy::vehicles`
1. Draw a diagram illustrating the connections between the `Batting`,
`Master`, and `Salary` tables in the Lahman package. Draw another diagram
that shows the relationship between `Master`, `Managers`, `AwardsManagers`.
How would you characterise the relationship between the `Batting`,
`Pitching`, and `Fielding` tables?
## Mutating joins {#mutating-joins}
The first tool we'll look at for combining a pair of tables is the __mutating join__. A mutating join allows you to combine variables from two tables. It first matches observations by their keys, then copies across variables from one table to the other.
Like `mutate()`, the join functions add variables to the right, so if you have a lot of variables already, the new variables won't get printed out. For these examples, we'll make it easier to see what's going on by creating a narrower dataset:
@ -161,19 +161,19 @@ Like `mutate()`, the join functions add variables to the right, so if you have a
(flights2 <- flights %>% select(year:day, hour, origin, dest, tailnum, carrier))
```
(When you're in RStudio, you can use `View()` to avoid this problem).
For example, imagine you want to add the full airline name to the `flights` data. You can combine the `airlines` and `carrier` data frames with `left_join()`:
```{r}
flights2 %>%
left_join(airlines, by = "carrier")
```
The result of joining airlines to flights is an additional variable: `name`. This is why I call this type of join a mutating join. In this case, you could have got to the same place using `mutate()` and basic subsetting:
```{r}
flights2 %>%
mutate(carrier = airlines$name[match(carrier, airlines$carrier)])
```
@ -243,7 +243,7 @@ Graphically, that looks like:
knitr::include_graphics("diagrams/join-outer.png")
```
The most commonly used join is the left join: you use this whenever you look up additional data from another table, because it preserves the original observations even when there isn't a match. The left join should be your default join: use it unless you have a strong reason to prefer one of the others.
Another way to depict the different types of joins is with a Venn diagram:
@ -260,7 +260,7 @@ So far all the diagrams have assumed that the keys are unique. But that's not al
1. One table has duplicate keys. This is useful when you want to
add in additional information as there is typically a one-to-many
relationship.
```{r, echo = FALSE, out.width = "75%"}
knitr::include_graphics("diagrams/join-one-to-many.png")
```
@ -275,14 +275,14 @@ So far all the diagrams have assumed that the keys are unique. But that's not al
left_join(x, y, by = "key")
```
1. Both tables have duplicate keys. This is usually an error because in
neither table do the keys uniquely identify an observation. When you join
duplicated keys, you get all possible combinations, the Cartesian product:
```{r, echo = FALSE, out.width = "75%"}
knitr::include_graphics("diagrams/join-many-to-many.png")
```
```{r}
x <- data_frame(key = c(1, 2, 2, 3), val_x = paste0("x", 1:4))
y <- data_frame(key = c(1, 2, 2, 3), val_y = paste0("y", 1:4))
@ -293,37 +293,37 @@ So far all the diagrams have assumed that the keys are unique. But that's not al
So far, the pairs of tables have always been joined by a single variable, and that variable has the same name in both tables. That constraint was encoded by `by = "key"`. You can use other values for `by` to connect the tables in other ways:
* The default, `by = NULL`, uses all variables that appear in both tables,
the so called __natural__ join. For example, the flights and weather tables
match on their common variables: `year`, `month`, `day`, `hour` and
`origin`.
```{r}
flights2 %>% left_join(weather)
```
* A character vector, `by = "x"`. This is like a natural join, but uses only
some of the common variables. For example, `flights` and `planes` have
`year` variables, but they mean different things so we only want to join by
* A character vector, `by = "x"`. This is like a natural join, but uses only
some of the common variables. For example, `flights` and `planes` have
`year` variables, but they mean different things so we only want to join by
`tailnum`.
```{r}
flights2 %>% left_join(planes, by = "tailnum")
```
Note that the `year` variables (which appear in both input data frames,
but are not constrained to be equal) are disambiguated in the output with
a suffix.
* A named character vector: `by = c("a" = "b")`. This will
match variable `a` in table `x` to variable `b` in table `y`. The
variables from `x` will be used in the output.
For example, if we want to draw a map we need to combine the flights data
with the airports data which contains the location (`lat` and `lon`) of
each airport. Each flight has an origin and destination airport, so we
need to specify which one we want to join to:
```{r}
flights2 %>% left_join(airports, c("dest" = "faa"))
flights2 %>% left_join(airports, c("origin" = "faa"))
@ -334,33 +334,33 @@ So far, the pairs of tables have always been joined by a single variable, and th
1. Compute the average delay by destination, then join on the `airports`
data frame so you can show the spatial distribution of delays. Here's an
easy way to draw a map of the United States:
```{r, include = FALSE}
airports %>%
semi_join(flights, c("faa" = "dest")) %>%
ggplot(aes(lon, lat)) +
borders("state") +
geom_point() +
coord_quickmap()
```
You might want to use the `size` or `colour` of the points to display
the average delay for each airport.
1. Is there a relationship between the age of a plane and its delays?
1. What weather conditions make it more likely to see a delay?
1. What happened on June 13 2013? Display the spatial pattern of delays,
and then use Google to cross-reference with the weather.
```{r, eval = FALSE, include = FALSE}
worst <- filter(not_cancelled, month == 6, day == 13)
worst %>%
group_by(dest) %>%
summarise(delay = mean(arr_delay), n = n()) %>%
filter(n > 5) %>%
inner_join(airports, by = c("dest" = "faa")) %>%
ggplot(aes(lon, lat)) +
borders("state") +
geom_point(aes(size = n, colour = delay)) +
@ -369,7 +369,7 @@ So far, the pairs of tables have always been joined by a single variable, and th
### Other implementations
`base::merge()` can perform all four types of mutating join:
dplyr | merge
-------------------|-------------------------------------------
@ -385,17 +385,17 @@ SQL is the inspiration for dplyr's conventions, so the translation is straightfo
dplyr | SQL
-----------------------------|-------------------------------------------
`inner_join(x, y, by = "z")` | `SELECT * FROM x INNER JOIN y USING (z)`
`left_join(x, y, by = "z")` | `SELECT * FROM x LEFT OUTER JOIN USING (z)`
`right_join(x, y, by = "z")` | `SELECT * FROM x RIGHT OUTER JOIN USING (z)`
`full_join(x, y, by = "z")` | `SELECT * FROM x FULL OUTER JOIN USING (z)`
`left_join(x, y, by = "z")` | `SELECT * FROM x LEFT OUTER JOIN y USING (z)`
`right_join(x, y, by = "z")` | `SELECT * FROM x RIGHT OUTER JOIN y USING (z)`
`full_join(x, y, by = "z")` | `SELECT * FROM x FULL OUTER JOIN y USING (z)`
Note that "INNER" and "OUTER" are optional, and often ommitted.
Note that "INNER" and "OUTER" are optional, and often omitted.
Joining different variables between the tables, e.g. `inner_join(x, y, by = c("a" = "b"))`, uses a slightly different syntax in SQL: `SELECT * FROM x INNER JOIN y ON x.a = y.b`. As this syntax suggests, SQL supports a wider range of join types than dplyr because you can connect the tables using constraints other than equality (sometimes called non-equijoins).
## Filtering joins {#filtering-joins}
Filtering joins match observations in the same way as mutating joins, but affect the observations, not the variables. There are two types:
* `semi_join(x, y)` __keeps__ all observations in `x` that have a match in `y`.
* `anti_join(x, y)` __drops__ all observations in `x` that have a match in `y`.
@ -403,7 +403,7 @@ Filtering joins match obserations in the same way as mutating joins, but affect
Semi-joins are useful for matching filtered summary tables back to the original rows. For example, imagine you've found the top ten most popular destinations:
```{r}
top_dest <- flights %>%
count(dest, sort = TRUE) %>%
head(10)
top_dest
@ -444,21 +444,21 @@ knitr::include_graphics("diagrams/join-anti.png")
Anti-joins are useful for diagnosing join mismatches. For example, when connecting `flights` and `planes`, you might be interested to know that there are many `flights` that don't have a match in `planes`:
```{r}
flights %>%
anti_join(planes, by = "tailnum") %>%
count(tailnum, sort = TRUE)
```
### Exercises
1. What does it mean for a flight to have a missing `tailnum`? What do the
tail numbers that don't have a matching record in `planes` have in common?
(Hint: one variable explains ~90% of the problem.)
1. Find the 48 hours (over the course of the whole year) that have the worst
delays. Cross-reference it with the `weather` data. Can you see any
patterns?
1. What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you?
What does `anti_join(airports, flights, by = c("dest" = "faa"))` tell you?
@ -468,25 +468,25 @@ The data you've been working with in this chapter has been cleaned up so that yo
1. Start by identifying the variables that form the primary key in each table.
You should usually do this based on your understanding of the data, not
empirically by looking for a combination of variables that gives a
unique identifier. If you just look for variables without thinking about
what they mean, you might get (un)lucky and find a combination that's
unique in your current data but the relationship might not be true in
general.
```{r}
airports %>% count(alt, lat) %>% filter(n > 1)
```
1. Check that none of the variables in the primary key are missing. If
a value is missing then it can't identify an observation! (See the sketch
after this list.)
1. Check that your foreign keys match primary keys in another table. The
best way to do this is with an `anti_join()`. It's common for keys
not to match because of data entry errors. Fixing these is often a lot of
work.
If you do have missing keys, you'll need to be thoughtful about your
use of inner vs. outer joins, carefully considering whether or not you
want to drop rows that don't have a match.
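Here's a sketch of the missing-key and foreign-key checks above, using the nycflights13 tables:

```{r, eval = FALSE}
planes %>% filter(is.na(tailnum))              # primary key values should never be missing
flights %>% anti_join(planes, by = "tailnum")  # foreign keys with no matching primary key
```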
@ -494,7 +494,7 @@ Be aware that simply checking the number of rows before and after the join is no
## Set operations {#set-operations}
The final type of two-table verb is set operations. Generally, I use these the least frequently, but they are occasionally useful when you want to break a single complex filter into simpler pieces that you then combine.
All these operations work with a complete row, comparing the values of every variable. These expect the `x` and `y` inputs to have the same variables, and treat the observations like sets:
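For example, a minimal sketch with made-up data frames (dplyr provides data frame methods for `intersect()`, `union()`, and `setdiff()`):

```{r, eval = FALSE}
df1 <- data_frame(x = 1:2, y = c(1L, 1L))
df2 <- data_frame(x = 1:2, y = 1:2)

intersect(df1, df2)  # rows that appear in both df1 and df2
union(df1, df2)      # unique rows that appear in either
setdiff(df1, df2)    # rows in df1 but not in df2
```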


@ -29,7 +29,7 @@ The idea of minimising the context needed to understand your code goes beyond ju
There are three common classes of surprises in R:
1. Unstable types: What will `df[, x]` return? You can assume that `df`
is a data frame and `x` is a vector because of their names. But you don't
know whether this code will return a data frame or a vector because the
behaviour of `[` depends on the length of `x`.
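A quick sketch of the surprise:

```{r}
df <- data.frame(a = 1:3, b = 3:1)
x1 <- c("a", "b")
x2 <- "a"
class(df[, x1])  # still a data frame
class(df[, x2])  # drops to an integer vector
```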


@ -8,7 +8,7 @@ title: Tidy data
> "Tidy datasets are all alike but every messy dataset is messy in its
> own way." Hadley Wickham
Data science, at its heart, is a computer programming exercise. Data scientists use computers to store, transform, visualize, and model their data. Each computer program will expect your data to be organized in a predetermined way, which may vary from program to program. To be an effective data scientist, you will need to be able to reorganize your data to match the format required by your program.
In this chapter, you will learn the best way to organize your data for R, a task that we call data tidying. Tidying your data will save you hours of time and make your data much easier to visualize, transform, and model with R.
@ -79,7 +79,7 @@ At this point, you might think that tidy data is so obvious that it is trivial.
Tidy data works well with R because it takes advantage of R's traits as a vectorized programming language. Data structures in R are organized around vectors, and R's functions are optimized to work with vectors. Tidy data takes advantage of both of these traits.
Tidy data arranges values so that the relationships between variables in a data set will parallel the relationship between vectors in R's storage objects. R stores tabular data as a data frame, a list of atomic vectors arranged to look like a table. Each column in the table is an atomic vector in the list. In tidy data, each variable in the data set is assigned to its own column, i.e., its own vector in the data frame.
```{r, echo = FALSE}
knitr::include_graphics("images/tidy-2.png")
@ -87,7 +87,7 @@ knitr::include_graphics("images/tidy-2.png")
*A data frame is a list of vectors that R displays as a table. When your data is tidy, the values of each variable fall in their own column vector.*
As a result, you can extract all the values of a variable in a tidy data set by extracting the column vector that contains the variable. You can do this easily with R's list syntax, i.e.
```{r}
table1$cases
@ -191,7 +191,7 @@ After you collect your input, you can calculate the rate.
```{r eval = FALSE}
# Data set four
cases <- c(table4$`1999`, table4$`2000`, table4$`2001`)
population <- c(table5$`1999`, table5$`2000`, table5$`2001`)
cases / population * 10000
```
@ -214,7 +214,7 @@ The two most important functions in `tidyr` are `gather()` and `spread()`. Each
A key value pair is a simple way to record information. A pair contains two parts: a *key* that explains what the information describes, and a *value* that contains the actual information. So for example, this would be a key value pair:
Password: 0123456789
`0123456789` is the value, and it is associated with the key `Password`.
@ -238,7 +238,7 @@ Data values form natural key value pairs. The value is the value of the pair and
Cases: 80488
Cases: 212258
Cases: 213766
However, the key value pairs would cease to be a useful data set because you no longer know which values belong to the same observation.
Every cell in a table of data contains one half of a key value pair, as does every column name. In tidy data, each cell will contain a value and each column name will contain a key, but this doesn't need to be the case for untidy data. Consider `table2`.
@ -247,7 +247,7 @@ Every cell in a table of data contains one half of a key value pair, as does eve
table2
```
In `table2`, the `key` column contains only keys (and not just because the column is labeled `key`). Conveniently, the `value` column contains the values associated with those keys.
You can use the `spread()` function to tidy this layout.
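For example, a sketch (assuming tidyr is loaded and `table2` is the table shown above):

```{r, eval = FALSE}
spread(table2, key, value)
```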
@ -269,7 +269,7 @@ knitr::include_graphics("images/tidy-8.png")
*`spread()` distributes a pair of key:value columns into a field of cells. The unique keys in the key column become the column names of the field of cells.*
You can see that `spread()` maintains each of the relationships expressed in the original data set. The output contains the four original variables, *country*, *year*, *population*, and *cases*, and the values of these variables are grouped according to the original observations. As a bonus, now the layout of these relationships is tidy.
`spread()` takes three optional arguments in addition to `data`, `key`, and `value`:
@ -367,7 +367,7 @@ You can also pass an integer or vector of integers to `sep`. `separate()` will i
separate(table3, year, into = c("century", "year"), sep = 2)
```
You can further customize `separate()` with the `remove`, `convert`, and `extra` arguments:
- **`remove`** - Set `remove = FALSE` to retain the column of values that were separated in the final data frame.
- **`convert`** - By default, `separate()` will return new columns as character columns. Set `convert = TRUE` to convert new columns to double (numeric), integer, logical, complex, and factor columns with `type.convert()`.
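For example, a sketch building on the call above; with `convert = TRUE` the new `century` and `year` columns come back as integers rather than character strings:

```{r, eval = FALSE}
separate(table3, year, into = c("century", "year"), sep = 2, convert = TRUE)
```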
@ -462,4 +462,4 @@ who <- spread(who, var, value)
who
```
The `who` data set is now tidy. It is far from sparkling (for example, it contains several redundant columns and many missing values), but it will now be much easier to work with in R.


@ -236,7 +236,7 @@ filter(flights, !(arr_delay > 120 | dep_delay > 120))
filter(flights, arr_delay <= 120, dep_delay <= 120)
```
Note that R has both `&` and `|` and `&&` and `||`. `&` and `|` are vectorised: you give them two vectors of logical values and they return a vector of logical values. `&&` and `||` are scalar operators: you give them individual `TRUE`s or `FALSE`s. They're used in `if` statements when programming. You'll learn about that later on.
Sometimes you want to find all rows after the first `TRUE`, or all rows until the first `FALSE`. The cumulative functions `cumany()` and `cumall()` allow you to find these values:
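For example, a minimal sketch with made-up data frames:

```{r, eval = FALSE}
df <- data_frame(x = c(FALSE, TRUE, FALSE, TRUE))
filter(df, cumany(x))   # keeps every row from the first TRUE onwards

df2 <- data_frame(y = c(TRUE, TRUE, FALSE, TRUE))
filter(df2, cumall(y))  # keeps rows up to (but not including) the first FALSE
```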
@ -535,7 +535,7 @@ ggplot(flights, aes(dep_sched %% 60)) + geom_histogram(binwidth = 1)
ggplot(flights, aes(air_time - airtime2)) + geom_histogram()
```
1. Currently `dep_time` and `arr_time` are convenient to look at, but
hard to compute with because they're not really continuous numbers.
Convert them to a more convenient representation of number of minutes
since midnight.