# Data transformation {#transform}

```{r setup-transform, include=FALSE}
library(dplyr)
library(nycflights13)
source("common.R")
```

Visualisation is an important tool for insight generation, but it is rare that you get the data in exactly the form you need for visualisation. Often you'll need to create some new variables or summaries, or maybe you just want to rename the variables or reorder the observations to make the data a little easier to work with. You'll learn how to do all that (and more!) in this chapter, which will teach you how to transform your data using the dplyr package.

When working with data you must:

1. Figure out what you want to do.

1. Precisely describe what you want to do in such a way that the
   computer can understand it (i.e. program it).

1. Execute the program.

The dplyr package makes these steps fast and easy:

* By constraining your options, it simplifies how you can think about
  common data manipulation tasks.

* It provides simple "verbs", functions that correspond to the most
  common data manipulation tasks, to help you translate those thoughts
  into code.

* It uses efficient data storage backends, so you spend less time
  waiting for the computer.

In this chapter you'll learn the key verbs of dplyr in the context of a new dataset on flights departing New York City in 2013.

## Data: nycflights13

To explore the basic data manipulation verbs of dplyr, we'll use the `flights` data frame from the nycflights13 package. This dataset contains all `r format(nrow(nycflights13::flights), big.mark = ",")` flights that departed from New York City in 2013. The data comes from the US [Bureau of Transportation Statistics](http://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&Link=0), and is documented in `?nycflights13`.
```{r}
library(dplyr)
library(nycflights13)
flights
```

The first important thing to notice about this dataset is that it prints a little differently from most data frames: it only shows the first ten rows and all the columns that fit on one screen. If you want to see the whole dataset, use `View()`, which will open the dataset in the RStudio viewer.

It also prints an abbreviated description of the column type:

* int: integer
* dbl: double (real)
* chr: character
* lgl: logical
* date: dates
* time: times

It prints differently because it has a different "class" from usual data frames:

```{r}
class(flights)
```

This is called a `tbl_df` (pronounced "tibble diff") or a `data_frame` (pronounced "data underscore frame"; cf. `data.frame`, pronounced "data dot frame").

You'll learn more about how that works in data structures. If you want to convert your own data frames to this special case, use `as_data_frame()`. I recommend it for large data frames as it makes interactive exploration much less painful.

To create your own new tbl\_df from individual vectors, use `data_frame()`:
```{r}
data_frame(x = 1:3, y = c("a", "b", "c"))
```
***

There are two other important differences between tbl_dfs and data.frames:

* When you subset a tbl\_df with `[`, it always returns another tbl\_df.
  Contrast this with a data frame: sometimes `[` returns a data frame and
  sometimes it just returns a single column:

    ```{r}
    df1 <- data.frame(x = 1:3, y = 3:1)
    class(df1[, 1:2])
    class(df1[, 1])
    df2 <- data_frame(x = 1:3, y = 3:1)
    class(df2[, 1:2])
    class(df2[, 1])
    ```

  To extract a single column use `[[` or `$`:

    ```{r}
    class(df2[[1]])
    class(df2$x)
    ```

* When you extract a variable with `$`, tbl\_dfs never do partial
  matching. They'll throw an error if the column doesn't exist:

    ```{r, error = TRUE}
    df <- data.frame(abc = 1)
    df$a
    df2 <- data_frame(abc = 1)
    df2$a
    ```

***

## Single table verbs

There are five key verbs:

* `filter()` picks observations based on their values.

* `arrange()` reorders observations.

* `select()` picks variables based on their names.

* `mutate()` allows you to add new variables that are functions of
  existing variables.

* `summarise()` reduces many values to a single value.

These can all be used in conjunction with `group_by()`, which changes the scope of each function from operating on the entire dataset to operating on it group-by-group. `group_by()` is most useful in conjunction with `summarise()`, but can also be useful with `mutate()`.

All verbs work very similarly:

1. The first argument is a data frame.

1. The subsequent arguments describe what to do with the data frame.
   Notice that you can refer to columns in the data frame directly without
   using `$`.

1. The result is a new data frame.

Together these properties make it easy to chain together multiple simple steps to achieve a complex result.

These five functions provide the basis of a language of data manipulation. At the most basic level, you can only alter a tidy data frame in five useful ways: you can reorder the rows (`arrange()`), pick observations and variables of interest (`filter()` and `select()`), add new variables that are functions of existing variables (`mutate()`), or collapse many values to a summary (`summarise()`). Each verb is described in turn in the sections below.

## Filter rows with `filter()`

`filter()` allows you to select a subset of rows in a data frame. The first argument is the name of the data frame. The second and subsequent arguments are the expressions that filter the data frame. For example, we can select all flights on January 1st with:
```{r}
filter(flights, month == 1, day == 1)
```
When you run this line of code, dplyr executes the filtering operation and returns a new data frame. dplyr functions never modify their inputs, so if you want to save the result, you'll need to use the assignment operator `<-`:
```{r}
jan1 <- filter(flights, month == 1, day == 1)
```
--------------------------------------------------------------------------------
This is equivalent to the more verbose code in base R:
```{r, eval = FALSE}
flights[flights$month == 1 & flights$day == 1, ]
```

`filter()` works similarly to `subset()` except that you can give it any number of filtering conditions, which are joined together with `&`.

--------------------------------------------------------------------------------

### Comparisons

* Numeric values: `>`, `>=`, `<`, `<=`, `!=` (not equal), and `==`.

* Strings: as well as `==` and `!=`, `%in%` is very useful. You'll learn about
  regular expressions, a powerful tool for matching patterns in strings, in
  [strings].

* Dates and times: you can use the same operators as numeric values, or the
  special date extractors you'll learn about in [dates and times].

When you're starting out with R, the easiest mistake to make is to use `=` instead of `==` when testing for equality. When this happens you'll get a somewhat uninformative error:
```{r, error = TRUE}
filter(flights, month = 1)
```

### Logical operators
Multiple arguments to `filter()` are combined with "and". To get more complicated expressions, you can use boolean operators yourself:
```{r, eval = FALSE}
filter(flights, month == 1 | month == 2)
```
The following figure shows the complete set of boolean operations for two sets.
```{r bool-ops, echo = FALSE, fig.cap = "Complete set of boolean operations", out.width = "75%"}
knitr::include_graphics("diagrams/transform-logical.png")
```
Sometimes you can simplify complicated subsetting by remembering De Morgan's law: `!(x & y)` is the same as `!x | !y`, and `!(x | y)` is the same as `!x & !y`.

Note that R has both `&` and `|` and `&&` and `||`. `&` and `|` are vectorised: you give them two vectors of logical values and they return a vector of logical values. `&&` and `||` are scalar operators: you give them individual `TRUE`s or `FALSE`s. They're used in `if` statements when programming. You'll learn about that later on.

dplyr also provides the cumulative logical operations `cumany()` and `cumall()`.
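
For example, these two ways of asking for flights that weren't delayed (on arrival or departure) by more than two hours are equivalent (a small sketch; the cut-off is arbitrary):

```{r, eval = FALSE}
filter(flights, !(arr_delay > 120 | dep_delay > 120))
filter(flights, arr_delay <= 120, dep_delay <= 120)
```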

### Missing values
One important feature of R that can make comparison tricky is the missing value, `NA`. This represents an unknown value, so any operation involving an unknown value will also be unknown:
```{r}
NA > 5
10 == NA
NA + 10
NA / 2
```
The most confusing result is this one:
```{r}
NA == NA
```
It's easiest to understand why this is true with a bit more context:
```{r}
# Let x be Mary's age. We don't know how old she is.
x <- NA
# Let y be John's age. We don't know how old he is.
y <- NA
# Are John and Mary the same age?
x == y
# We don't know!
```
If you want to determine if a value is missing, use `is.na()`. (And RStudio will remind you of this by giving a code warning whenever you use `x == NA`.)
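
For example:

```{r}
x <- NA
is.na(x)
```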
Note that `filter()` only includes rows where the condition is `TRUE`; it excludes both `FALSE` and `NA` values. If you want to preserve missing values, ask for them explicitly:
```{r}
df <- data_frame(x = c(1, NA, 3))
filter(df, x > 1)
filter(df, is.na(x) | x > 1)
```

### Exercises

1. Find all the flights that:

    * Departed in summer.
    * Flew to Houston (`IAH` or `HOU`).
    * Were delayed by more than two hours.
    * Arrived more than two hours late, but didn't leave late.
    * Were delayed by at least an hour, but made up over 30 minutes in flight.
    * Departed between midnight and 6am.

1. How many flights have a missing `dep_time`? What other variables are
   missing? What might these rows represent?

## Arrange rows with `arrange()`
`arrange()` works similarly to `filter()` except that instead of filtering or selecting rows, it reorders them. It takes a data frame, and a set of column names (or more complicated expressions) to order by. If you provide more than one column name, each additional column will be used to break ties in the values of preceding columns:
```{r}
arrange(flights, year, month, day)
```
Use `desc()` to order a column in descending order:
```{r}
arrange(flights, desc(arr_delay))
```
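
Missing values are always sorted at the end, whether you sort ascending or descending. A quick sketch on a toy data frame:

```{r}
df <- data_frame(x = c(5, 2, NA))
arrange(df, x)
arrange(df, desc(x))
```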

## Select columns with `select()`
Often you work with large datasets with many columns but only a few are actually of interest to you. `select()` allows you to rapidly zoom in on a useful subset using operations that usually only work on numeric variable positions:
```{r}
# Select columns by name
select(flights, year, month, day)
# Select all columns between year and day (inclusive)
select(flights, year:day)
# Select all columns except those from year to day (inclusive)
select(flights, -(year:day))
```

This function works similarly to the `select` argument in `base::subset()`. Because the dplyr philosophy is to have small functions that do one thing well, it's its own function in dplyr.

There are a number of helper functions you can use within `select()`, like `starts_with()`, `ends_with()`, `matches()` and `contains()`. These let you quickly match larger blocks of variables that meet some criterion. See `?select` for more details.
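
For example, a couple of quick sketches of these helpers in action:

```{r}
select(flights, starts_with("dep"))
select(flights, contains("time"))
```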
You can rename variables with `select()` by using named arguments:
```{r}
select(flights, tail_num = tailnum)
```
But because `select()` drops all the variables not explicitly mentioned, it's not that useful. Instead, use `rename()`:
```{r}
rename(flights, tail_num = tailnum)
```

## Add new variables with `mutate()`
Besides selecting sets of existing columns, it's often useful to add new columns that are functions of existing columns. This is the job of `mutate()`:
```{r}
flights_sml <- select(flights,
  year:day,
  ends_with("delay"),
  distance,
  air_time
)
mutate(flights_sml,
  gain = arr_delay - dep_delay,
  speed = distance / air_time * 60
)
```
Note that you can refer to columns that you've just created:
```{r}
mutate(flights_sml,
  gain = arr_delay - dep_delay,
  gain_per_hour = gain / (air_time / 60)
)
```
If you only want to keep the new variables, use `transmute()`:
```{r}
transmute(flights_sml,
  gain = arr_delay - dep_delay,
  gain_per_hour = gain / (air_time / 60)
)
```

### Useful functions
You'll learn about useful functions for strings and dates in their respective chapters. For numbers:

* Arithmetic operators: `+`, `-`, `*`, `/`, `^`. These are all vectorised, so
  you can work with multiple columns. If you give them a single number it will
  be expanded to match the length of the column.

* Modular arithmetic: `%%` (remainder) and `%/%` (integer division). This is a
  handy tool to have in your toolbox as it allows you to break integers
  down into pieces. For example, in the flights dataset, you can compute
  `hour` and `minute` from `dep_time` with:

    ```{r}
    transmute(flights,
      dep_time,
      hour = dep_time %/% 100,
      minute = dep_time %% 100
    )
    ```

* Logs: `log()`, `log2()`, `log10()`. All else being equal, I recommend
  using `log2()` because it's easy to interpret: a difference of 1 means
  doubled, a difference of -1 means halved. `log10()` is similarly easy to
  interpret, as long as you have a very wide range of numbers.

* Cumulative calculations: `cumsum()`, `cumprod()`, `cummin()`, `cummax()`,
  `cummean()`.

* Parallel minima and maxima: `pmin()`, `pmax()`. These compute the
  element-wise min and max of two or more vectors (columns).

* Logical comparisons, which you learned about earlier. If you're doing
  a complex sequence of logical operations it's often a good idea to
  store the interim values in new variables so you can check that each
  step is doing what you expect.

* `lead()` and `lag()` give offsets. They are most useful in conjunction with
  `group_by()`, which you'll learn about shortly.

* Various types of ranking: `min_rank()`, `row_number()`, `dense_rank()`,
  `cume_dist()`, `percent_rank()`, `ntile()`.
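
Here's a quick sketch of a few of these applied to a toy vector (the values are made up purely for illustration):

```{r}
x <- c(5, 1, 3, 2, 4)
cumsum(x)
lag(x)
lead(x)
min_rank(x)
min_rank(desc(x))
```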

## Summarise values with `summarise()`
The last verb is `summarise()`. It collapses a data frame to a single row:
```{r}
summarise(flights,
  delay = mean(dep_delay, na.rm = TRUE)
)
```
It's most useful in conjunction with grouping, so we'll come back to it after we've learned about `group_by()`.

## Grouped operations
These verbs are useful on their own, but they become really powerful when you apply them to groups of observations within a dataset. In dplyr, you do this with the `group_by()` function. It breaks down a dataset into specified groups of rows. When you then apply the verbs above to the resulting object they'll be automatically applied "by group". Most importantly, all this is achieved by using exactly the same syntax you'd use with an ungrouped object.

Grouping affects the verbs as follows:

* Grouped `select()` is the same as ungrouped `select()`, except that
  grouping variables are always retained.

* Grouped `arrange()` orders first by the grouping variables.

* `mutate()` and `filter()` are most useful in conjunction with window
  functions (like `rank()`, or `min(x) == x`). They are described in detail in
  the window functions vignette, `vignette("window-functions")`; there's a
  small example after this list.

* `slice()` extracts rows within each group.

* `summarise()` is powerful and easy to understand, as described in
  more detail below.
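
As a small sketch of that, here's a grouped `filter()` that uses the window function `min_rank()` to pick out the most delayed flight to each destination:

```{r}
by_dest <- group_by(flights, dest)
filter(by_dest, min_rank(desc(arr_delay)) == 1)
```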
In the following example, we split the complete dataset into individual planes and then summarise each plane by counting the number of flights (`count = n()`) and computing the average distance (`dist = mean(distance, na.rm = TRUE)`) and arrival delay (`delay = mean(arr_delay, na.rm = TRUE)`). We then use ggplot2 to display the output.
```{r, warning = FALSE, message = FALSE, fig.width = 6}
library(ggplot2)

by_tailnum <- group_by(flights, tailnum)
delay <- summarise(by_tailnum,
  count = n(),
  dist = mean(distance, na.rm = TRUE),
  delay = mean(arr_delay, na.rm = TRUE)
)
delay <- filter(delay, count > 20, dist < 2000)

# Interestingly, the average delay is only slightly related to the
# average distance flown by a plane.
ggplot(delay, aes(dist, delay)) +
  geom_point(aes(size = count), alpha = 1/2) +
  geom_smooth() +
  scale_size_area()
```

### Useful summaries
You use `summarise()` with __aggregate functions__, which take a vector of values and return a single number:

* Location of "middle": `mean(x)`, `median(x)`.

* Measures of spread: `sd(x)`, `IQR(x)`, `mad(x)`.

* By ranked position: `min(x)`, `quantile(x, 0.25)`, `max(x)`.

* By position: `first(x)`, `nth(x, 2)`, `last(x)`. These work similarly to
  `x[1]`, `x[2]`, and `x[length(x)]`, but give you more control over the result
  if the value is missing.

* Count: `n()`.

* Distinct count: `n_distinct(x)`.

* Counts and proportions of logical values: `sum(x > 10)`, `mean(y == 0)`.
  When logical vectors are used with numeric functions, `TRUE` is converted to 1
  and `FALSE` to 0. This makes `sum()` and `mean()` particularly useful:
  `sum(x)` gives the number of `TRUE`s in `x`, and `mean(x)` gives the proportion.
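
As a quick sketch of the logical-vector trick, here's a per-day count and proportion (the cut-offs are arbitrary, chosen just for illustration):

```{r}
by_day <- group_by(flights, year, month, day)
summarise(by_day,
  n_early = sum(dep_time < 500, na.rm = TRUE),
  prop_delayed = mean(arr_delay > 60, na.rm = TRUE)
)
```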
For example, we could use these to find the number of planes and the number of flights that go to each possible destination:
```{r}
destinations <- group_by(flights, dest)
summarise(destinations,
  planes = n_distinct(tailnum),
  flights = n()
)
```

### Grouping by multiple variables
When you group by multiple variables, each summary peels off one level of the grouping. That makes it easy to progressively roll up a dataset:
```{r}
daily <- group_by(flights, year, month, day)
(per_day <- summarise(daily, flights = n()))
(per_month <- summarise(per_day, flights = sum(flights)))
(per_year <- summarise(per_month, flights = sum(flights)))
```
However, you need to be careful when progressively rolling up summaries like this: it's OK for sums and counts, but you need to think about weighting for means and variances, and it's not possible to do it exactly for medians.
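
To see why, here's a tiny sketch with made-up numbers: the mean of group means only equals the overall mean when every group has the same number of observations.

```{r}
df <- data_frame(g = c("a", "a", "a", "b"), x = c(1, 2, 3, 10))
summarise(df, mean(x))

by_g <- group_by(df, g)
group_means <- summarise(by_g, m = mean(x))
summarise(group_means, mean(m))
```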

## Piping
The dplyr API is functional in the sense that function calls don't have side-effects. You must always save their results. This doesn't lead to particularly elegant code, especially if you want to do many operations at once. You either have to do it step-by-step:
```{r, eval = FALSE}
a1 <- group_by(flights, year, month, day)
a2 <- select(a1, arr_delay, dep_delay)
a3 <- summarise(a2,
arr = mean(arr_delay, na.rm = TRUE),
dep = mean(dep_delay, na.rm = TRUE))
a4 <- filter(a3, arr > 30 | dep > 30)
```
Or if you don't want to save the intermediate results, you need to wrap the function calls inside each other:
```{r}
filter(
  summarise(
    select(
      group_by(flights, year, month, day),
      arr_delay, dep_delay
    ),
    arr = mean(arr_delay, na.rm = TRUE),
    dep = mean(dep_delay, na.rm = TRUE)
  ),
  arr > 30 | dep > 30
)
```
This is difficult to read because the order of the operations is from inside to out, and the arguments end up a long way away from the function they belong to. To get around this problem, dplyr provides the `%>%` operator. `x %>% f(y)` turns into `f(x, y)`, so you can use it to rewrite multiple operations so that they read left-to-right, top-to-bottom:
```{r, eval = FALSE}
flights %>%
  group_by(year, month, day) %>%
  select(arr_delay, dep_delay) %>%
  summarise(
    arr = mean(arr_delay, na.rm = TRUE),
    dep = mean(dep_delay, na.rm = TRUE)
  ) %>%
  filter(arr > 30 | dep > 30)
```

## Two-table verbs
It's rare that a data analysis involves only a single table of data. In practice, you'll normally have many tables that contribute to an analysis, and you need flexible tools to combine them. In dplyr, there are three families of verbs that work with two tables at a time:

* Mutating joins, which add new variables to one table from matching rows in
  another.

* Filtering joins, which filter observations from one table based on whether or
  not they match an observation in the other table.

* Set operations, which combine the observations in the datasets as if they
  were set elements.

(This discussion assumes that you have [tidy data](http://www.jstatsoft.org/v59/i10/), where the rows are observations and the columns are variables. If you're not familiar with that framework, I'd recommend reading up on it first.)

All two-table verbs work similarly. The first two arguments are `x` and `y`, and provide the tables to combine. The output is always a new table with the same type as `x`.

### Mutating joins
Mutating joins allow you to combine variables from multiple tables. For example, take the nycflights13 data. In one table we have flight information with an abbreviation for carrier, and in another we have a mapping between abbreviations and full names. You can use a join to add the carrier names to the flight data:
```{r, warning = FALSE}
library("nycflights13")
# Drop unimportant variables so it's easier to understand the join results.
flights2 <- flights %>% select(year:day, hour, origin, dest, tailnum, carrier)

flights2 %>%
  left_join(airlines)
```

#### Controlling how the tables are matched
As well as `x` and `y`, each mutating join takes an argument `by` that controls which variables are used to match observations in the two tables. There are a few ways to specify it, as I illustrate below with various tables from nycflights13:

* `NULL`, the default. dplyr will use all variables that appear in
  both tables, a __natural__ join. For example, the flights and
  weather tables match on their common variables: year, month, day, hour and
  origin.

    ```{r}
    flights2 %>% left_join(weather)
    ```

* A character vector, `by = "x"`. Like a natural join, but uses only
  some of the common variables. For example, `flights` and `planes` have
  `year` columns, but they mean different things so we only want to join by
  `tailnum`.

    ```{r}
    flights2 %>% left_join(planes, by = "tailnum")
    ```

    Note that the year columns in the output are disambiguated with a suffix.

* A named character vector: `by = c("x" = "a")`. This will
  match variable `x` in table `x` to variable `a` in table `y`. The
  variables from `x` will be used in the output.

    Each flight has an origin and destination `airport`, so we need to specify
    which one we want to join to:

    ```{r}
    flights2 %>% left_join(airports, c("dest" = "faa"))
    flights2 %>% left_join(airports, c("origin" = "faa"))
    ```

#### Types of join
There are four types of mutating join, which differ in their behaviour when a match is not found. We'll illustrate each with a simple example:
```{r}
(df1 <- data_frame(x = c(1, 2), y = 2:1))
(df2 <- data_frame(x = c(1, 3), a = 10, b = "a"))
```

* `inner_join(x, y)` only includes observations that match in both `x` and `y`.

    ```{r}
    df1 %>% inner_join(df2) %>% knitr::kable()
    ```

* `left_join(x, y)` includes all observations in `x`, regardless of whether
  they match or not. This is the most commonly used join because it ensures
  that you don't lose observations from your primary table.

    ```{r}
    df1 %>% left_join(df2)
    ```

* `right_join(x, y)` includes all observations in `y`. It's equivalent to
  `left_join(y, x)`, but the columns will be ordered differently.

    ```{r}
    df1 %>% right_join(df2)
    df2 %>% left_join(df1)
    ```

* `full_join()` includes all observations from `x` and `y`.

    ```{r}
    df1 %>% full_join(df2)
    ```

The left, right and full joins are collectively known as __outer joins__. When a row doesn't match in an outer join, the new variables are filled in with missing values.

#### Observations
While mutating joins are primarily used to add new variables, they can also generate new observations. If a match is not unique, a join will add all possible combinations (the Cartesian product) of the matching observations:
```{r}
df1 <- data_frame(x = c(1, 1, 2), y = 1:3)
df2 <- data_frame(x = c(1, 1, 2), z = c("a", "b", "a"))
df1 %>% left_join(df2)
```

### Filtering joins
Filtering joins match observations in the same way as mutating joins, but affect the observations, not the variables. There are two types:

* `semi_join(x, y)` __keeps__ all observations in `x` that have a match in `y`.

* `anti_join(x, y)` __drops__ all observations in `x` that have a match in `y`.

These are most useful for diagnosing join mismatches. For example, there are many flights in the nycflights13 dataset that don't have a matching tail number in the planes table:
```{r}
library("nycflights13")
flights %>%
  anti_join(planes, by = "tailnum") %>%
  count(tailnum, sort = TRUE)
```
If you're worried about what observations your joins will match, start with a `semi_join()` or `anti_join()`. `semi_join()` and `anti_join()` never duplicate; they only ever remove observations.
```{r}
df1 <- data_frame(x = c(1, 1, 3, 4), y = 1:4)
df2 <- data_frame(x = c(1, 1, 2), z = c("a", "b", "a"))
# Four rows to start with:
df1 %>% nrow()
# And we get four rows after the join
df1 %>% inner_join(df2, by = "x") %>% nrow()
# But only two rows actually match
df1 %>% semi_join(df2, by = "x") %>% nrow()
```

### Set operations
The final type of two-table verb is the set operations. These expect the `x` and `y` inputs to have the same variables, and treat the observations like sets:

* `intersect(x, y)`: return only observations in both `x` and `y`.

* `union(x, y)`: return unique observations in `x` and `y`.

* `setdiff(x, y)`: return observations in `x`, but not in `y`.

Given this simple data:
```{r}
(df1 <- data_frame(x = 1:2, y = c(1L, 1L)))
(df2 <- data_frame(x = 1:2, y = 1:2))
```
The four possibilities are:
```{r}
intersect(df1, df2)
# Note that we get 3 rows, not 4
union(df1, df2)
setdiff(df1, df2)
setdiff(df2, df1)
```

### Databases
Each two-table verb has a straightforward SQL equivalent:

| R                  | SQL                                                                  |
|--------------------|----------------------------------------------------------------------|
| `inner_join()`     | `SELECT * FROM x JOIN y ON x.a = y.a`                                |
| `left_join()`      | `SELECT * FROM x LEFT JOIN y ON x.a = y.a`                           |
| `right_join()`     | `SELECT * FROM x RIGHT JOIN y ON x.a = y.a`                          |
| `full_join()`      | `SELECT * FROM x FULL JOIN y ON x.a = y.a`                           |
| `semi_join()`      | `SELECT * FROM x WHERE EXISTS (SELECT 1 FROM y WHERE x.a = y.a)`     |
| `anti_join()`      | `SELECT * FROM x WHERE NOT EXISTS (SELECT 1 FROM y WHERE x.a = y.a)` |
| `intersect(x, y)`  | `SELECT * FROM x INTERSECT SELECT * FROM y`                          |
| `union(x, y)`      | `SELECT * FROM x UNION SELECT * FROM y`                              |
| `setdiff(x, y)`    | `SELECT * FROM x EXCEPT SELECT * FROM y`                             |

`x` and `y` don't have to be tables in the same database. If you specify `copy = TRUE`, dplyr will copy the `y` table into the same location as the `x` variable. This is useful if you've downloaded a summarised dataset and determined a subset of interest that you now want the full data for. You can use `semi_join(x, y, copy = TRUE)` to upload the indices of interest to a temporary table in the same database as `x`, and then perform an efficient semi join in the database.

If you're working with large data, it may also be helpful to set `auto_index = TRUE`. That will automatically add an index on the join variables to the temporary table.
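
For example (a sketch only; `flights_db` stands in for a remote table you've connected to, and `interesting` for a local data frame of tail numbers you've identified):

```{r, eval = FALSE}
# Copy the keys in `interesting` to a temporary table in the same
# database as `flights_db`, then semi join there.
flights_db %>%
  semi_join(interesting, by = "tailnum", copy = TRUE, auto_index = TRUE)
```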

### Coercion rules
When joining tables, dplyr is a little more conservative than base R about the types of variable that it considers equivalent. This is most likely to surprise you if you're working with factors:

* Factors with different levels are coerced to character with a warning:

    ```{r}
    df1 <- data_frame(x = 1, y = factor("a"))
    df2 <- data_frame(x = 2, y = factor("b"))
    full_join(df1, df2) %>% str()
    ```

* Factors with the same levels in a different order are coerced to character
  with a warning:

    ```{r}
    df1 <- data_frame(x = 1, y = factor("a", levels = c("a", "b")))
    df2 <- data_frame(x = 2, y = factor("b", levels = c("b", "a")))
    full_join(df1, df2) %>% str()
    ```

* Factors are preserved only if the levels match exactly:

    ```{r}
    df1 <- data_frame(x = 1, y = factor("a", levels = c("a", "b")))
    df2 <- data_frame(x = 2, y = factor("b", levels = c("a", "b")))
    full_join(df1, df2) %>% str()
    ```

* A factor and a character are coerced to character with a warning:

    ```{r}
    df1 <- data_frame(x = 1, y = "a")
    df2 <- data_frame(x = 2, y = factor("a"))
    full_join(df1, df2) %>% str()
    ```

Otherwise logicals will be silently upcast to integer, and integer to numeric, but coercing to character will raise an error:
```{r, error = TRUE, purl = FALSE}
df1 <- data_frame(x = 1, y = 1L)
df2 <- data_frame(x = 2, y = 1.5)
full_join(df1, df2) %>% str()

df1 <- data_frame(x = 1, y = 1L)
df2 <- data_frame(x = 2, y = "a")
full_join(df1, df2) %>% str()
```