Braindumping into EDA chapter

hadley 2016-07-15 09:42:51 -05:00
parent 59104914f3
commit 6d5cb638f2
5 changed files with 316 additions and 178 deletions

New binary files added: images/EDA-hclust.png (326 KiB), images/EDA-kmeans.png (258 KiB), images/EDA-linkage.png (83 KiB).
# Exploratory Data Analysis (EDA)

```{r include = FALSE}
knitr::opts_chunk$set(fig.height = 2)
```
## Introduction

This chapter will show you how to use visualization and transformation to explore your data in a systematic way, a task that statisticians call Exploratory Data Analysis, or EDA for short. EDA is an iterative cycle that involves:

1. Forming questions about your data.

1. Searching for answers by visualizing, transforming, and modeling your data.

1. Using what you discover to refine your questions about the data, or
   to choose new questions to investigate.

EDA is not a formal process with a strict set of rules: you must be free to investigate every idea that occurs to you. Instead, EDA is a loose set of tactics that are more likely to lead to useful insights. This chapter will teach you a basic toolkit of these useful EDA techniques. Our discussion will lead to a model of data science itself, the model that I've built this book around.

This chapter will point you towards many other interesting packages, more so than any other chapter in the book.
### Prerequisites
In this chapter we'll combine what you've learned about dplyr and ggplot2 to iteratively ask questions, answer them with data, and then ask new questions.
```{r setup}
library(ggplot2)
library(dplyr)
```
## Questions
> "There are no routine statistical questions, only questionable statistical
> routines." --- Sir David Cox
> "Far better an approximate answer to the right question, which is often
> vague, than an exact answer to the wrong question, which can always be made
> precise." ---John Tukey
Your goal during EDA is to develop your understanding of your data. The easiest way to do this is to use questions as tools to guide your investigation. When you ask a question, the question focuses your attention on a specific part of your dataset and helps you decide which graphs, models, or transformations to make.
EDA is fundamentally a creative process. And like most creative processes, the key to asking _quality_ questions is to generate a large _quantity_ of questions. It is difficult to ask revealing questions at the start of your analysis because you do not know what insights are contained in your dataset. On the other hand, each new question that you ask will expose you to a new aspect of your data and increase your chance of making a discovery. You can quickly drill down into the most interesting parts of your data---and develop a set of thought provoking questions---if you follow up each question with a new question based on what you find.
There is no rule about which questions you should ask to guide your research. However, two types of questions will always be useful for making discoveries within your data. You can loosely word these questions as:
1. What type of **variation** occurs **within** my variables?

1. What type of **covariation** occurs **between** my variables?
The rest of this chapter will look at these two questions. I'll explain what variation and covariation are, and I'll show you several ways to answer each question. To make the discussion easier, let's define some terms:
* A __variable__ is a quantity, quality, or property that you can measure.

* A __value__ is the state of a variable when you measure it. The value of a
  variable may change from measurement to measurement.

* An __observation__ is a set of measurements made under similar conditions
  (you usually make all of the measurements in an observation at the same
  time and on the same object). An observation will contain several values,
  each associated with a different variable. I'll sometimes refer to
  an observation as a data point.

* __Tabular data__ is a set of values, each associated with a variable and
  an observation. Tabular data is _tidy_ if each value is placed in its own
  "cell", each variable in its own column, and each observation in its own
  row.
For now, assume all the data you see in this book is tidy. You'll encounter lots of other data in practice, so we'll come back to these ideas again in [tidy data] where you'll learn how to tidy messy data.
## Variation
> "What type of variation occurs within my variables?"
**Variation** is the tendency of the values of a variable to change from measurement to measurement. You can see variation easily in real life; if you measure any continuous variable twice---and precisely enough, you will get two different results. This is true even if you measure quantities that are constant, like the speed of light (below). Each of your measurements will include a small amount of error that varies from measurement to measurement.

```{r, variation, echo = FALSE}
old <- options(digits = 7)
mat <- as.data.frame(matrix(morley$Speed + 299000, ncol = 10))
knitr::kable(
  mat,
  col.names = rep("", ncol(mat)),
  caption = "The speed of light is a universal constant, but variation due to measurement error obscures its value. In 1879, Albert Michelson measured the speed of light 100 times and observed 30 different values (in km/sec)."
)
options(old)
```
Discrete and categorical variables can also vary if you measure across different subjects (e.g. the eye colors of different people), or different times (e.g. the energy levels of an electron at different moments).
```{r}
ggplot(data = diamonds) +
  geom_bar(mapping = aes(x = cut))
```
The height of the bars displays how many observations occurred with each x value. You can compute these values manually with `dplyr::count()`.
```{r}
diamonds %>% count(cut)
```
A variable is **continuous** if you can arrange its values in order _and_ an infinite number of unique values can exist between any two values of the variable. Numbers and date-times are two examples of continuous variables. To examine the distribution of a continuous variable, use a histogram.
```{r}
ggplot(data = diamonds) +
  geom_histogram(aes(x = carat), binwidth = 0.5)
```
A histogram divides the x axis into equally spaced intervals and then uses a bar to display how many observations fall into each interval. In the graph above, the tallest bar shows that almost 30,000 observations have a `carat` value between 0.25 and 0.75, which are the left and right edges of the bar.
You can set the width of the intervals in a histogram with the `binwidth` argument, which is measured in the units of the `x` variable. You should always explore a variety of binwidths when working with histograms, as different binwidths can reveal different patterns. For example, here is how the graph above looks when we zoom into just the diamonds smaller than three carats and choose a smaller binwidth.
```{r}
smaller <- diamonds %>% filter(carat < 3)

ggplot(data = smaller, mapping = aes(x = carat)) +
  geom_histogram(binwidth = 0.1)
```
If you wish to overlay multiple histograms in the same plot, I recommend using `geom_freqpoly()` instead of `geom_histogram()`. `geom_freqpoly()` performs the same calculation as `geom_histogram()`, but instead of displaying the counts with bars, it uses lines. It's much easier to understand overlapping lines than bars.
```{r}
ggplot(data = smaller, mapping = aes(x = carat)) +
  geom_freqpoly(binwidth = 0.1)
```
### Typical values
In both bar charts and histograms, tall bars reveal common values of a variable. Shorter bars reveal rarer values. Places that do not have bars reveal values that were not seen in your data. To turn this information into useful questions, look for anything unexpected:
* Which values are the most common? Why?
* Which values are rare? Why? Does that match your expectations?
* Can you see any unusual patterns? What might explain them?
As an example, the histogram below suggests several interesting questions:
* Why are there more diamonds at whole carats and common fractions of carats?
* Why are there more diamonds slightly to the right of each peak than there
are slightly to the left of each peak?
* Why are there no diamonds bigger than 3 carats?
```{r}
ggplot(data = smaller, mapping = aes(x = carat)) +
  geom_histogram(binwidth = 0.01)
```
Clusters of similar values suggest that subgroups exist in your data. To understand the subgroups, ask:
* How are the observations within each cluster similar to each other?
* How are the observations in separate clusters different from each other?
* How can you explain or describe the clusters?
* Why might the appearance of clusters be misleading?
The histogram below shows the length (in minutes) of 272 eruptions of the Old Faithful Geyser in Yellowstone National Park. Eruption times appear to be clustered into two groups: there are short eruptions (of around 2 minutes) and long eruptions (4-5 minutes), but little in between.
```{r}
ggplot(data = faithful, mapping = aes(x = eruptions)) +
  geom_histogram(binwidth = 0.25)
```
Many of the questions above will prompt you to explore a relationship *between* variables, for example, to see if the values of one variable can explain the behavior of another variable.
### Unusual values
Outliers are observations that are unusual; data points that don't seem to fit the pattern. Sometimes outliers are data entry errors; other times outliers suggest important new science. When you have a lot of data, outliers are sometimes difficult to see in a histogram.
For example, take this distribution of the `y` variable from the diamonds dataset. The only evidence of outliers is the unusually wide limits on the x-axis.
```{r}
ggplot(diamonds) +
  geom_histogram(aes(x = y), binwidth = 0.5)
```
This is because there are so many observations in the common bins that the rare bins are so short that you can't see them (although maybe if you stare intently at 0 you'll spot something). To make it easy to see the unusual values, we need to zoom in to small values of the y-axis with `coord_cartesian()`:
```{r}
ggplot(diamonds) +
  geom_histogram(aes(x = y), binwidth = 0.5) +
  coord_cartesian(ylim = c(0, 50))
```
(`coord_cartesian()` also has an `xlim` argument for when you need to zoom into the x-axis. ggplot2 also has `xlim()` and `ylim()` functions that work slightly differently: they throw away the data outside the limits.)
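To see the difference concretely, here's a quick sketch comparing the two approaches on the same histogram (the limits here are arbitrary):

```{r}
library(ggplot2)

p <- ggplot(diamonds) +
  geom_histogram(aes(x = y), binwidth = 0.5)

# coord_cartesian() zooms the view: all rows are still binned, so bar
# heights are unchanged.
p + coord_cartesian(xlim = c(3, 12))

# xlim() drops rows outside the limits before binning, so bars near the
# edges can change height (and ggplot2 warns about removed rows).
p + xlim(3, 12)
```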
Zooming in allows us to see that there are three unusual values: 0, ~30, and ~60. We can pluck them out with dplyr:
```{r}
unusual <- diamonds %>%
  filter(y < 3 | y > 20) %>%
  arrange(y)
unusual
```
The `y` variable measures one of the three dimensions of these diamonds, in mm. We know that diamonds can't have a 0 measurement, so these must be invalid values. We might also suspect that measurements of 32mm and 59mm are implausible: those diamonds are over an inch long, but don't cost hundreds of thousands of dollars!
When you discover an outlier it's a good idea to trace it back as far as possible. You'll be in a much stronger analytical position if you can figure out why it happened. If you can't figure it out, and want to just move on with your analysis, it's a good idea to replace it with a missing value, which we'll discuss in the next section.
### Exercises
1. Explore the distribution of each of the `x`, `y`, and `z` variables
in `diamonds`. What do you learn? Think about a diamond and how you
might decide which dimension is the length, width, and depth.
1. Explore the distribution of `price`. Do you discover anything unusual
   or surprising? (Hint: carefully think about the `binwidth` and make sure
   you try a wide range of values.)

1. Compare and contrast `coord_cartesian()` vs `xlim()`/`ylim()` when
   zooming in on a histogram. What happens if you leave `binwidth` unset?
   What happens if you try and zoom so only half a bar shows?
## Missing values
If you've encountered unusual values in your dataset, and simply want to move on to the rest of your analysis, you have two options.
1. Drop the entire row with the strange values:

    ```{r}
    diamonds2 <- diamonds %>%
      filter(between(y, 3, 20))
    ```
    I don't recommend this option because just because one measurement
    is invalid doesn't mean all the measurements are. Additionally, if you
    have very noisy data, you might find that by the time you've applied
    this approach to every variable, you don't have any data left!
1. Instead, I recommend replacing the unusual values with missing values.
   The easiest way to do this is to use `mutate()` to replace the variable
   with a modified copy. You can use the `ifelse()` function to replace
   unusual values with `NA`:

    ```{r}
    diamonds2 <- diamonds %>%
      mutate(y = ifelse(y < 3 | y > 20, NA, y))
    ```
ggplot2 subscribes to the philosophy that missing values should never silently go missing. However, it's not obvious where you should plot missing values, so ggplot2 doesn't display them in the plot, but it does warn that they've been removed.
```{r}
ggplot(data = diamonds2, mapping = aes(x = x, y = y)) +
  geom_point()
```
You can suppress that warning with `na.rm = TRUE`:
```{r}
ggplot(data = diamonds2, mapping = aes(x = x, y = y)) +
  geom_point(na.rm = TRUE)
```
Other times you want to understand what makes observations with missing values different to observations with recorded values. For example, in `nycflights13::flights`, missing values in the `dep_time` variable indicate that the flight was cancelled. So you might want to compare the scheduled departure times for cancelled and non-cancelled flights. You can do this by making a new variable with `is.na()`:
```{r}
nycflights13::flights %>%
  mutate(
    cancelled = is.na(dep_time),
    sched_hour = sched_dep_time %/% 100,
    sched_min = sched_dep_time %% 100,
    sched_dep_time = sched_hour + sched_min / 60
  ) %>%
  ggplot(mapping = aes(sched_dep_time)) +
    geom_freqpoly(mapping = aes(colour = cancelled), binwidth = 1/4)
```
However, this plot isn't great because there are many more non-cancelled flights than cancelled flights. In the next section we'll explore some techniques for improving this comparison.
### Exercises
1. Recall what the `na.rm = TRUE` argument does in `mean()` and `sum()`.
   Why does `geom_point()` perform a similar operation for missing values?
## Covariation
If variation describes the behavior _within_ a variable, covariation describes the behavior _between_ variables. **Covariation** is the tendency for the values of two or more variables to vary together in a correlated way. The best way to spot covariation is to visualize the relationship between two or more variables. How you do that should again depend on the type of variables involved.
### Visualizing one categorical variable and one continuous variable
It's common to want to explore the distribution of a continuous variable broken down by a categorical variable, as in the previous frequency polygon. The default appearance of `geom_freqpoly()` is not that useful for that sort of comparison because the height is given by the count. That means if one of the groups is much smaller than the others, it's hard to see the differences in shape. For example, let's explore how the price of a diamond varies with its quality:
```{r}
ggplot(data = diamonds, mapping = aes(x = price)) +
  geom_freqpoly(aes(colour = cut), binwidth = 500)
```
It's hard to see the difference in distribution because the overall counts differ so much:
```{r}
ggplot(diamonds, aes(cut)) +
  geom_bar()
```
To make the comparison easier we need to swap what is displayed on the y-axis. Instead of displaying count, we'll display __density__, which is the count standardised so that the area under each frequency polygon is one.
```{r}
ggplot(data = diamonds, mapping = aes(x = price, y = ..density..)) +
  geom_freqpoly(aes(colour = cut), binwidth = 500)
```
There's something rather surprising about this plot: it appears that fair diamonds (the lowest quality) have the highest average price! But maybe that's because frequency polygons are a little hard to interpret: there's a lot going on in this plot.
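If `..density..` seems mysterious, you can compute the same quantity by hand with dplyr. This is just a sketch (using the same binwidth of 500 as above) to check that each polygon's area is standardised to one:

```{r}
library(ggplot2)
library(dplyr)

diamonds %>%
  count(cut, bin = cut_width(price, 500)) %>%
  group_by(cut) %>%
  mutate(density = n / sum(n) / 500) %>%   # count standardised by group total and binwidth
  summarise(area = sum(density * 500))     # area under each polygon is 1
```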
An alternative way to display the distribution of a continuous variable broken down by a categorical variable is the boxplot. A **boxplot** is a type of visual shorthand for a distribution of values that is popular among statisticians. Each boxplot consists of:
* A box that stretches from the 25th percentile of the distribution to the
  75th percentile, a distance known as the Inter-Quartile Range (IQR). In the
  middle of the box is a line that displays the median, i.e. 50th percentile,
  of the distribution. These three lines give you a sense of the spread of
  the distribution and whether or not the distribution is symmetric about
  the median or skewed to one side.

* Visual points that display observations that fall more than 1.5 times the
  IQR from either edge of the box. These outlying points are unusual,
  so they are plotted individually.

* A line (or whisker) that extends from each end of the box and goes to the
  farthest non-outlier point in the distribution.
```{r, echo = FALSE}
knitr::include_graphics("images/EDA-boxplot.png")
```
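You can reproduce the numbers behind a boxplot by hand. This sketch computes the hinges, IQR, and upper whisker limit for fair-cut diamonds (note: ggplot2 uses Tukey's hinge definitions, which can differ very slightly from `quantile()`):

```{r}
library(ggplot2)

fair <- diamonds$price[diamonds$cut == "Fair"]
q <- quantile(fair, c(0.25, 0.5, 0.75))
iqr <- q[[3]] - q[[1]]

c(
  lower_hinge = q[[1]],              # bottom of the box
  median      = q[[2]],              # line in the middle of the box
  upper_hinge = q[[3]],              # top of the box
  whisker_end = q[[3]] + 1.5 * iqr   # points beyond this are plotted individually
)
```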
Let's take a look at the distribution of price by cut using `geom_boxplot()`:
```{r}
ggplot(data = diamonds, mapping = aes(x = cut, y = price)) +
  geom_boxplot()
```
We see much less information about the distribution, but the boxplots are much more compact so we can more easily compare them (and fit more on one plot). It supports the counterintuitive finding that better quality diamonds are cheaper on average! In the exercises, you'll be challenged to figure out why.
`cut` is an ordered factor: fair is worse than good, which is worse than very good, and so on. Most factors are unordered, so it's fair game to reorder them to display the results better. For example, take the `class` variable in the `mpg` dataset. You might be interested to know how highway mileage varies across classes:
```{r}
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
  geom_boxplot()
```
Covariation will appear as a systematic change in the medians or IQRs of the boxplots. To make the trend easier to see, wrap the `x` variable with `reorder()`. The code below reorders the x axis based on the median `hwy` value of each group.
```{r}
ggplot(data = mpg) +
  geom_boxplot(aes(x = reorder(class, hwy, FUN = median), y = hwy))
```
If you have long variable names, `geom_boxplot()` will work better if you flip it 90°. You can do that with `coord_flip()`.
```{r}
ggplot(data = mpg) +
  geom_boxplot(aes(x = reorder(class, hwy, FUN = median), y = hwy)) +
  coord_flip()
```
### Exercises

1. Install the ggstance package, and create a horizontal boxplot.

1. One problem with boxplots is that they were developed in an era of
   much smaller datasets and tend to display a prohibitively large
   number of "outlying values". One approach to remedy this problem is
   the letter value plot. Install the lvplot package, and try using
   `geom_lv()` to display the distribution of price vs cut. What
   do you learn? How do you interpret the plots?
1. Compare and contrast `geom_violin()` with a facetted `geom_histogram()`,
   or a coloured `geom_freqpoly()`. What are the pros and cons of each
   method?
### Visualizing two categorical variables
There are two basic techniques for visualising covariation between categorical variables. One is to count the number of observations at each location and display the count with the size of a point. That's the job of `geom_count()`:
```{r}
ggplot(data = diamonds) +
geom_count(mapping = aes(x = cut, y = color))
```
The size of each circle in the plot displays how many observations occurred at each combination of values. Covariation will appear as a strong correlation between specific x values and specific y values. As with bar charts, you can calculate the specific values with `count()`.
```{r}
diamonds %>% count(color, cut)
```
This lets you compute the counts by hand. Then, instead of mapping count to `size`, you could use `geom_raster()` and map count to `fill`:
```{r}
diamonds %>%
count(color, cut) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_raster(aes(fill = n))
```
If the categorical variables are unordered, you might want to use the seriation package to simultaneously reorder the rows and columns in order to more clearly reveal interesting patterns.
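As a rough sketch of how that might look, the code below reorders the rows and columns of the colour/cut count matrix before plotting. (This assumes the seriation package is installed; `seriate()` and `get_order()` are its core functions, and the ordering you get depends on the seriation method used.)

```{r}
library(seriation)
library(dplyr)
library(ggplot2)

# Build a plain matrix of counts: colours in rows, cuts in columns
m <- unclass(table(diamonds$color, diamonds$cut))

# seriate() searches for an ordering that groups similar rows and
# columns; get_order() extracts the permutation for each dimension
o <- seriate(m)
row_levels <- rownames(m)[get_order(o, dim = 1)]
col_levels <- colnames(m)[get_order(o, dim = 2)]

# Reorder the factor levels before plotting
diamonds %>%
  count(color, cut) %>%
  mutate(
    color = factor(color, levels = row_levels),
    cut = factor(cut, levels = col_levels)
  ) %>%
  ggplot(mapping = aes(x = color, y = cut)) +
  geom_raster(aes(fill = n))
```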
#### Exercises
1. How could you rescale the count dataset above to more clearly see
the differences across colours or across cuts?
1. Use `geom_raster()` together with dplyr to explore how average flight
delays vary by destination and month of year.
1.  Use the seriation package to reorder the rows and columns of the
    count plot above. Do any patterns become easier to see?
### Visualizing two continuous variables
`geom_density2d()` fits a 2D kernel density estimate to the data and then uses contour lines to highlight areas of high density. Overlaying the contours on the raw data is useful even when your dataset is not big.
```{r}
ggplot(data = faithful, aes(x = eruptions, y = waiting)) +
  geom_point() +
  geom_density2d()
```

When you explore plots of covariation, look for patterns, like the two clusters that stand out in this scatterplot of Old Faithful eruptions:

```{r}
ggplot(faithful) + geom_point(aes(x = eruptions, y = waiting))
```
Patterns provide one of the most useful tools for data scientists because they reveal covariation. If you think of variation as a phenomenon that creates uncertainty, covariation is a phenomenon that reduces it. If two variables covary, you can use the values of one variable to make better predictions about the values of the second. If the covariation is due to a causal relationship (a special case), then you can use the value of one variable to control the value of the second.
### Visualizing three or more variables
Hierarchical clustering uses a simple algorithm to locate groups of points that are near each other in n-dimensional space.
You can visualize the results of the algorithm as a dendrogram, and you can use the dendrogram to divide your data into any number of clusters. The figure below demonstrates how the algorithm would proceed in a two dimensional dataset.
```{r, echo = FALSE}
knitr::include_graphics("images/EDA-hclust.png")
```
To use hierarchical clustering in R, begin by selecting the numeric columns from your data; you can only apply hierarchical clustering to numeric data. Then apply the `dist()` function to the data and pass the results to `hclust()`. `dist()` computes the distances between your points in the n dimensional space defined by your numeric vectors. `hclust()` performs the clustering algorithm.
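For example, here is a minimal sketch with the built-in `iris` data; `cutree()` (also base R) then slices the dendrogram into a chosen number of clusters:

```{r}
# Select the numeric columns, compute pairwise distances, then cluster
iris_num <- iris[, c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width")]
hc <- hclust(dist(iris_num))

# The dendrogram shows the order in which points were merged
plot(hc)

# cutree() divides the tree into k clusters
clusters <- cutree(hc, k = 3)
table(clusters)
```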
You can modify the hierarchical clustering algorithm by setting the `method` argument of `hclust()` to one of "complete", "single", "average", or "centroid". The method determines how to measure the distance between two clusters, or between a lone point and a cluster, a choice that affects the outcome of the algorithm.
```{r, echo = FALSE}
knitr::include_graphics("images/EDA-linkage.png")
```
* *complete* - Measures the greatest distance between any two points in the separate clusters. Tends to create distinct clusters and subclusters.
K means clustering provides a simulation based alternative to hierarchical clustering. It divides your data into a predefined number of groups, $k$:

1. Randomly assign each data point to one of $k$ groups
2. Compute the centroid of each group
3. Reassign each point to the group whose centroid it is nearest to
4. Repeat steps 2 and 3 until group memberships cease to change
```{r, echo = FALSE}
knitr::include_graphics("images/EDA-kmeans.png")
```
Use `kmeans()` to perform k means clustering in R. As with hierarchical clustering, you can only apply k means clustering to numeric data. Pass your numeric data to `kmeans()`, then set `centers` to the number of clusters to search for ($k$) and `nstart` to the number of random starts to try. Since the results depend on the initial assignment of points to groups, which is random, R will run `nstart` starts and return the best result, as measured by the minimum sum of squared distances between each point and the centroid of its assigned group. Finally, use `iter.max` to set the maximum number of iterations in case a start cannot quickly find a stable grouping.
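A minimal sketch with the built-in `iris` data, setting a seed because the initial assignment is random:

```{r}
set.seed(42)

# Search for k = 3 clusters, trying 20 random starts
km <- kmeans(iris[, 1:4], centers = 3, nstart = 20)

# Cluster sizes and the total within-cluster sum of squares
table(km$cluster)
km$tot.withinss
```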
I'll postpone teaching you how to fit and interpret models with R until Part 4.
You now know how to explore the variables displayed in your dataset, but you should know that these are not the only variables in your data. Nor are the observations that are displayed in your data the only observations. You can use the values in your data to compute new variables or to measure new (group-level) observations. These new variables and observations provide a further source of insights that you can explore with visualizations, clustering algorithms, and models.
### To make new variables
Use dplyr's `mutate()` function to calculate new variables from your existing variables.
```{r}
diamonds %>%
mutate(volume = x * y * z) %>%
head()
```
The window functions from Chapter 3 are particularly useful for calculating new variables. To calculate a variable from two or more variables, use basic operators or the `map2()` and `pmap()` functions from purrr. You will learn more about purrr in Chapter ?.
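For example, here is a sketch of both approaches on the diamonds data (the `ratio` variable is purely illustrative):

```{r}
library(ggplot2)
library(dplyr)
library(purrr)

# For simple arithmetic, a vectorised operator is all you need
diamonds %>% mutate(volume = x * y * z)

# map2_dbl() applies a function elementwise over two columns and
# returns a double vector; pmap() generalises to any number of inputs
diamonds %>% mutate(ratio = map2_dbl(x, y, ~ .x / .y))
```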
If you are statistically trained, you can use R to extract potential variables with more sophisticated algorithms. R provides `prcomp()` for principal component analysis and `factanal()` for factor analysis. The psych and sem packages provide further tools for working with latent variables.
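For instance, a quick sketch of principal component analysis with base R's `prcomp()`:

```{r}
# PCA on the numeric columns of iris, scaled to unit variance
pca <- prcomp(iris[, 1:4], scale. = TRUE)

# Each principal component is a new variable: a weighted combination
# of the originals, ordered by how much variation it captures
summary(pca)
head(pca$x)  # component scores: one new value per observation
```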
### To make new observations
If your dataset contains subgroups, you can derive from your data a new dataset of observations that describe the subgroups. To do this, first use dplyr's `group_by()` function to group the data into subgroups. Then use dplyr's `summarise()` function to calculate group level statistics. The measures of location, rank and spread listed in Chapter 3 are particularly useful for describing subgroups.
```{r}
mpg %>%
group_by(class) %>%
summarise(n_obs = n(), avg_hwy = mean(hwy), sd_hwy = sd(hwy))
```
## A last word on variables, values, and observations
Variables, values, and observations provide a basis for EDA: _if a relationship exists between two_ variables, _then the relationship will exist between the_ values _of those variables when those values are measured in the same_ observation. As a result, relationships between variables will appear as patterns in your data.
Within any particular observation, the exact form of the relationship between variables may be obscured by mediating factors, measurement error, or random noise, which means that the patterns in your data will appear as signals obscured by noise.
Due to a quirk of the human cognitive system, the easiest way to spot signal amidst noise is to visualize your data. The concepts of variables, values, and observations have a role to play here as well. To visualize your data, represent each observation with its own geometric object, such as a point. Then map each variable to an aesthetic property of the point, setting specific values of the variable to specific levels of the aesthetic. You could also compute group-level statistics from your data (i.e. new observations) and map them to geoms, something that `geom_bar()`, `geom_boxplot()` and other geoms do for you automatically.
## EDA and Data Science
As a term, "data science" has been used in different ways by many people. This fluidity is necessary for a term that describes a wide breadth of activity, as data science does. Nonetheless, you can use the principles in this chapter to build a general model of data science. The model requires one limit to the definition of data science: data science must rely in some way on human judgement applied to data.