More writing for variation chapter.

```{r, include = FALSE}
library(ggplot2)
library(dplyr)
```
If you are like most humans, your brain isn't built to process tables of raw data. You can understand your raw data better if you first visualize it or transform it. This chapter will show you the best ways to visualize and transform your data to make discoveries, a process known as Exploratory Data Analysis (EDA).
## The challenge of data
```{r echo = FALSE}
knitr::kable(round(data.frame(X = X[order], Y = Y[order]), 2))
```
While your mind may stumble over raw data, you can easily process visual information. Within your mind is a visual processing system that has been fine-tuned by thousands of years of evolution. As a result, the quickest way to understand your data is to visualize it. Once you plot your data, you can instantly see the relationships between values. Here, we see that the values above fall on a circle.
```{r echo=FALSE}
ggplot2::qplot(X, Y) + ggplot2::coord_fixed(ylim = c(-2.5, 0.5), xlim = c(-0.5, 2.5))
```
Visualization works because it bypasses the bottleneck in your working memory. Your brain processes visual information in a different (and much wider) channel than it processes symbolic information, like words and numbers. However, visualization is not the only way to comprehend data.
You can also comprehend your data if you reduce it to a small set of summary values. Your working memory can easily attend to just a few values, which lets you absorb important information about the data. This is why it feels natural to work with things like averages, e.g. how tall is the average basketball player? How educated is the average politician? An average is a single number that you can attend to. Although averages are quite popular, you can also compare data sets on other summary values, such as maximums, minimums, medians, and so on. Another way to summarize your data is to replace it with a model, a function that describes the relationship between two or more variables.
These two tactics, visualizing and summarizing your data, are the main tools of Exploratory Data Analysis. Before we begin to use the tools, let's consider what types of information you can hope to find in your data.

## Variation

Data carries two types of useful information: information about _variation_ and information about _covariation_. These concepts will be easier to describe if we first define some basic terms:
* A _variable_ is a quantity, quality, or property that you can measure.
* A _value_ is the state of a variable when you measure it. The value of a variable may change from measurement to measurement.
* An _observation_ is a set of measurements that you make under similar conditions (you usually make all of the measurements in an observation at the same time and on the same object). An observation will contain several values, each associated with a different variable. I'll sometimes refer to an observation as a data point.
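As a minimal sketch of these terms in R (using the `diamonds` data set from ggplot2, which appears throughout this chapter), each row of a data frame holds one observation and each column holds the values of one variable:

```{r}
# one observation: one measured value for each variable
diamonds[1, ]
```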
_Variation_ is the tendency of the values of a variable to change from measurement to measurement.

Variation is easy to encounter in real life; if you measure any continuous variable twice---and precisely enough---you will get two different results. This is true even if you measure quantities that should be constant, like the speed of light (below); each of your measurements will include a small amount of error that varies from measurement to measurement.
```{r, variation, echo = FALSE}
mat <- as.data.frame(matrix(morley$Speed + 299000, ncol = 10))
knitr::kable(mat, caption = "*The speed of light is a universal constant, but variation due to measurement error obscures its value. In 1879, Albert Michelson measured the speed of light 100 times and observed 30 different values (in km/sec).*", col.names = rep("", ncol(mat)))
```
Discrete and quantitative variables can also vary if you measure across different subjects (e.g. the eye colors of different people), or different times (e.g. the energy levels of an electron).

Variation is a source of uncertainty in data science. Since values vary from measurement to measurement, you cannot assume that what you measure in one context will be true in another context. However, variation can also be a tool. Every variable exhibits a pattern of variation. If you comprehend the pattern, you can determine which values of the variable are likely to occur, which are unlikely to occur, and which are impossible. You can also use the pattern to quickly spot outliers, data points that behave differently from other observations of a variable, and clusters, groups of data points that share similar values.
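As a minimal sketch of this idea, you can estimate which values of a variable are likely and which are rare directly from observed measurements, here Michelson's speed of light data (the `morley` data set built into R, shown above):

```{r}
speeds <- morley$Speed + 299000
# the middle 95% of the measurements; values outside this
# interval were rare
quantile(speeds, probs = c(0.025, 0.5, 0.975))
```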
## Covariation
The second type of information contained in data is information about covariation. _Covariation_ occurs when two or more variables vary together in a systematic way.
You can understand covariation if you picture the growth charts that doctors use with young children (below). The ages and heights of children covary since a child is likely to be born small and to grow taller over time. As a result, you can expect a large value of age to be accompanied by a large value of height and a small value of age to be accompanied by a small value of height. In fact, the covariation between age and height is so regular that a doctor can tell if something has gone wrong by comparing a child's height to his or her age.
!["Height covaries with age in young children. Chart taken from http://www.cdc.gov/growthcharts"](images/growth-chart.png)
Webs of covariation can be quite complex. Multiple variables can covary together as income, education, and home ownership do. Also, two variables can covary in an inverse relationship as unemployment and presidential approval ratings do. Presidential approval ratings are reliably low at times when unemployment is high, and vice versa.
If variation creates uncertainty, covariation dispels it. You can make an accurate guess about an unobserved variable if you observe the values of variables that it covaries with.
Covariation is also the first clue that a causal relationship may exist between two variables (or that a hidden causal variable may exist that affects the two).
Now that you have a sense of what to look for in data, how do you find it?
## Exploratory Data Analysis
Exploratory Data Analysis is a loosely defined task that deepens your understanding of a data set. EDA involves iteratively
* forming questions about your data
* searching for answers by visualizing and summarizing your data
* refining your questions about the data, or choosing new questions to investigate, based on what you discover
There is no formal way to do Exploratory Data Analysis. You must be free to investigate every insight that occurs to you. The remainder of the chapter will teach you ways to visualize and summarise your data that you can use in any part of your exploration. I've organized these methods into two groups: methods for
1. Understanding variation. These methods elucidate the question, "What type of uncertainty exists in the processes that my data describe?"
2. Understanding covariation. These methods elucidate the question, "How can the data help me reduce the uncertainty that exists in the processes that my data describe?"
As you use these methods, it is important to keep an eye out for any signals that can lead to better questions about your data, and then scrutinize them. Things like outliers, missing values, gaps in your data coverage, and patterns can all tip you off to important aspects of your data set. Often the discoveries that you make during Exploratory Data Analysis will be the most valuable results of your data analysis. Many useful scientific discoveries, like the discovery of the hole in the ozone layer, were made by simply exploring data.
## Understanding Variation

### Distributions describe variation
***
*Tip*: Throughout this section, we will rely on a distinction between two types of variables:
* A variable is **continuous** if you can arrange its values in order _and_ an infinite number of values can exist between any two values of the variable. For example, numbers and date-times are continuous variables. `ggplot2` will treat your variable as continuous if it is a numeric, integer, or a recognizable date-time class (but not a factor, see `?factor`).
* A variable is **discrete** if it is not continuous. Discrete variables can only contain a finite (or countably infinite) set of unique values. For example, character strings and boolean values are discrete variables. `ggplot2` will treat your variable as discrete if it is not a numeric, integer, or recognizable date-time class.
***
### Visualizing Distributions
Recall that a variable is a quantity, quality, or property whose value can change between measurements. This unique property---that the values of a variable can vary---gives the word "variable" its name. It also motivates all of data science. Scientists attempt to understand what determines the value of a variable. They then use that information to predict or control the value of the variable under a variety of circumstances.
One of the most useful tools in this quest is the set of values that you have already observed for a variable. These values reveal which states of the variable are common, which are rare, and which are seemingly impossible. The pattern of values that emerges as you collect large amounts of data is known as the variable's _distribution_.
The distribution of a variable reveals information about the probabilities associated with the variable. As you collect more data, the proportion of observations that occur at a value (or in an interval) will match the probability that the variable will take that value (or take a value in that interval) in a future measurement.
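As a minimal sketch (again with the `diamonds` data), the observed proportions estimate these probabilities:

```{r}
# the proportion of observations at each value of cut approximates
# the probability of observing that value in a future measurement
table(diamonds$cut) / nrow(diamonds)
```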
In theory, it is easy to visualize the distribution of a variable: simply display how many observations occur at each value of the variable. In practice, how you do this will depend on the type of variable that you wish to visualize.
##### Discrete distributions
If a variable is categorical, you can display its empirical distribution with a simple bar graph. **Categorical** variables are variables that can only contain a finite (or countably infinite) set of unique values, like the cut rating of a diamond. In R, categorical variables are usually saved as factors or character strings.
```{r}
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut))
```
By default, `geom_bar()` counts the number of observations that are associated with each value of a variable, and it displays the results as a series of bars. You do not need to provide a $y$ aesthetic to `geom_bar()`.

The height of each bar in a bar graph reveals the number of observations that are associated with the $x$ value of the bar. You can use the heights to estimate the frequency that different values will appear as you measure the variable. $x$ values that have a tall bar occur often, and $x$ values with small bars occur rarely.
The `width` argument of `geom_bar()` controls the width of each bar. The bars will touch when you set `width = 1`.
```{r}
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut), width = 1)
```
***
*Tip*: You can compute the counts of a categorical variable quickly with R's `table()` function. These are the numbers that `geom_bar()` visualizes.
```{r}
table(diamonds$cut)
```
***
To compare the distributions of different subgroups in a bar chart,

1. set the fill and grouping aesthetics to a grouping variable
2. set the position adjustment to "dodge" for side by side bars
3. set $y$ to `..prop..` to compare proportions instead of raw counts (as raw counts will depend on group sizes, which may differ)

```{r}
ggplot(data = diamonds) +
  geom_bar(mapping = aes(x = cut, y = ..prop.., group = carat > 1, fill = carat > 1), position = "dodge")
```

##### Continuous distributions

What if your variable is not categorical but continuous? A variable is **continuous** if you can arrange its values in order _and_ an infinite number of unique values can exist between any two values of the variable. Numbers and date-times are two examples of continuous variables.

The strategy of counting the number of observations at each value breaks down for continuous data, because if your data is truly continuous, then no two observations will have the same value.

To get around this, you can divide the range of a continuous variable into equally spaced intervals, a process called _binning_.
```{r, echo = FALSE}
# knitr::include_graphics("images/visualization-17.png")
```
Then count how many observations fall into each bin.
```{r, echo = FALSE}
# knitr::include_graphics("images/visualization-18.png")
```
And display the count as a bar.
```{r, echo = FALSE}
# knitr::include_graphics("images/visualization-19.png")
```
The result is called a *histogram*. The height of each bar reveals how many observations fall within the width of the bar. Tall bars reveal common values of the variable, short bars reveal uncommon values, and the absence of bars suggests rare or impossible values. As with bar charts, you can use this information to estimate the probabilities that different values will appear in future measurements of the variable.
```{r, echo = FALSE}
# knitr::include_graphics("images/visualization-20.png")
```
To make a histogram of a continuous variable, like the carat size of a diamond, use `geom_histogram()`. As with `geom_bar()`, you do not need to supply a $y$ variable.
```{r message = FALSE}
ggplot(data = diamonds) +
geom_histogram(aes(x = carat))
```
Binning is a temperamental process because the appearance of a distribution can change dramatically if the bin size changes. As no bin size is "correct," you should explore several bin sizes when examining data.

By default, `geom_histogram()` will divide the range of your variable into 30 equal length bins. The quickest way to change this behavior is to set the binwidth argument.

```{r message = FALSE}
ggplot(data = diamonds) +
geom_histogram(aes(x = carat), binwidth = 1)
```
Different binwidths reveal different information. For example, the plot above shows that the availability of diamonds decreases quickly as carat size increases. The plot below shows that there are more diamonds than you would expect at whole carat sizes (and common fractions of carat sizes). Moreover, for each popular size, there are more diamonds that are slightly larger than the size than there are diamonds that are slightly smaller than the size.
```{r message = FALSE}
ggplot(data = diamonds) +
geom_histogram(aes(x = carat), binwidth = 0.01)
```
Histograms give you a quick sense of the variation in your variable. Often you can immediately tell what the typical value of a variable is and what range of values you can expect (the wider the range, the more uncertainty you will encounter when making predictions about the variable). However, histograms have a downside.

It is difficult to compare multiple histograms. The solid bars of a histogram will occlude other histograms when you arrange them in layers. You could stack histograms on top of each other, but this invites error because you cannot compare the bars against a common baseline.

```{r message = FALSE}
ggplot(data = diamonds) +
  geom_histogram(aes(x = price, fill = cut))
```

`geom_freqpoly()` and `geom_density()` provide better ways to compare multiple distributions in the same plot. You can think of `geom_freqpoly()` as a line that connects the tops of the bars that would appear in a histogram.
```{r message = FALSE, fig.show='hold', fig.width=4, fig.height=4}
ggplot(data = diamonds) +
  geom_freqpoly(aes(x = carat))
ggplot(data = diamonds) +
geom_histogram(aes(x = carat))
```
`geom_density()` plots a one dimensional kernel density estimate of a variable's distribution. The result is a smooth version of the information contained in a histogram or a freqpoly.
```{r}
ggplot(data = diamonds) +
geom_density(aes(x = carat))
```
`geom_density()` displays $density$---not $count$---on the y axis, which makes it easy to compare the shape of the distributions of multiple subgroups; the area under each curve will be normalized to one, no matter how many total observations occur in the subgroup. To achieve the same effect with `geom_freqpoly()`, set the y variable to `..density..`.
```{r message = FALSE, fig.show='hold', fig.width=4, fig.height=4}
ggplot(data = diamonds) +
geom_freqpoly(aes(x = price, y = ..density.., color = cut ))
ggplot(data = diamonds) +
geom_density(aes(x = price, color = cut))
```
`geom_density()` does not use the binwidth argument. You can control the smoothness of the density with `adjust`, and you can select the kernel to use to estimate the density with `kernel`. Set kernel to one of "gaussian" (default), "epanechnikov", "rectangular", "triangular", "biweight", "cosine", or "optcosine".
```{r}
ggplot(data = diamonds) +
geom_density(aes(x = carat, color = cut), kernel = "gaussian", adjust = 4)
```
### Summarizing distributions

You can also make sense of a distribution by reducing it to a few summary statistics, numbers that summarize important information about the distribution. Summary statistics have an advantage over plots and raw data: they are easy to talk and write about.

Two types of summary statistics are more useful than the rest:

* statistics that describe the typical value of a variable. These include the measures of location from Chapter 2:
    + `mean()` - the average value of a variable
    + `median()` - the 50th percentile value of a variable

* statistics that describe the range of a variable's values. These include the measures of spread from Chapter 2:
    + `sd()` - the standard deviation of a variable's distribution, which measures how far a typical value lies from the distribution's mean
    + `var()` - the variance of the distribution, which is the standard deviation squared
    + `IQR()` - the interquartile range of the distribution, which is the distance between the 25th and 75th percentiles of a distribution
    + `range()` - the minimum and maximum value of a distribution, use with `diff(range())` to compute the distance between the minimum and maximum values

Use dplyr's `summarise()` function to calculate any of these statistics for a variable.

```{r}
diamonds %>%
  summarise(mean = mean(price, na.rm = TRUE),
            sd = sd(price, na.rm = TRUE))
```

Combine `summarise()` with `group_by()` to calculate the statistics on a groupwise basis.

```{r}
diamonds %>%
  group_by(cut) %>%
  summarise(mean = mean(price, na.rm = TRUE),
            sd = sd(price, na.rm = TRUE))
```
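The same pattern extends to the measures of spread listed above; a minimal sketch:

```{r}
diamonds %>%
  group_by(cut) %>%
  summarise(median = median(price, na.rm = TRUE),
            IQR = IQR(price, na.rm = TRUE),
            spread = diff(range(price, na.rm = TRUE)))
```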
## Understanding Covariation
### Visualize Covariation
##### Visualize functions between two variables

Use `geom_line()` to make a line chart, which displays one variable as a function of another, such as time.

```{r}
ggplot(data = economics) +
geom_line(aes(x = date, y = unemploy))
```
Use `geom_step()` to turn a line chart into a step function. Here, the result will be easier to see with a subset of data.
```{r}
ggplot(data = economics[1:150, ]) +
  geom_step(aes(x = date, y = unemploy))
```
Control the step direction by giving `geom_step()` a direction argument. `direction = "hv"` will make stairs that move horizontally then vertically to connect points. `direction = "vh"` will do the opposite.
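For example, a minimal sketch of `direction = "vh"`, using the same subset of `economics` as above:

```{r}
ggplot(data = economics[1:150, ]) +
  geom_step(aes(x = date, y = unemploy), direction = "vh")
```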
`geom_area()` creates a line chart with a filled area under the line.
```{r}
ggplot(data = economics) +
geom_area(aes(x = date, y = unemploy))
```
##### Visualize correlations between two variables
Use `geom_point()` to make a scatterplot, the most common way to visualize the relationship between two continuous variables.

```{r}
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy))
```
The jitter adjustment is so useful for scatterplots that `ggplot2` provides the `geom_jitter()`, which is identical to `geom_point()` but comes with `position = "jitter"` by default.
```{r}
ggplot(data = mpg) +
geom_jitter(mapping = aes(x = displ, y = hwy))
```
`geom_jitter()` can be a useful way to visualize the distribution between two discrete variables. Can you tell why `geom_point()` would be less useful here?
`geom_count()` can be a useful way to visualize the distribution between two discrete variables.
```{r}
ggplot(data = diamonds) +
geom_count(mapping = aes(x = cut, y = color))
```
Use `geom_rug()` to visualize the distribution of each variable in the scatterplot. `geom_rug()` adds a tickmark along each axis for each value observed in the data. `geom_rug()` works best as a second layer in the plot (see Section 3 for more info on layers).
Use the `sides` argument to control which axes to place a "rug" on.

* `sides = "t"` - Places a rug on the top axis
* `sides = "r"` - Places a rug on the right axis
* `sides = "b"` - Places a rug on the bottom axis
* `sides = "l"` - Places a rug on the left axis
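A minimal sketch that combines both ideas, layering `geom_rug()` on a scatterplot with rugs on the bottom and left axes:

```{r}
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy)) +
  geom_rug(mapping = aes(x = displ, y = hwy), sides = "bl")
```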
Scatterplots do not work well with large data sets because individual points will begin to occlude each other. As a result, you cannot tell where the mass of the data lies. Does a black region contain a single layer of points? Or hundreds of points stacked on top of each other?
You can see this type of plotting in the `diamonds` data set. The data set only contains 53,940 points, but the points overplot each other in a way that we cannot fix with jittering.
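One way to cope with overplotting is to make the points transparent; a minimal sketch, where the transparency value is illustrative:

```{r}
# with alpha = 1/100, a region must collect about 100 overlapping
# points before it appears fully dark
ggplot(data = diamonds) +
  geom_point(mapping = aes(x = carat, y = price), alpha = 1/100)
```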
Use `geom_smooth()` to summarise the relationship between two variables with a trend line fitted by a model.

```{r}
ggplot(data = diamonds) +
  geom_point(mapping = aes(x = carat, y = price)) +
  geom_smooth(mapping = aes(x = carat, y = price),
method = lm, formula = y ~ poly(x, 4))
```
Be careful, `geom_smooth()` will overlay a trend line on every data set, even if the underlying data is uncorrelated. You can avoid being fooled by also inspecting the raw data or calculating the correlation between your variables, e.g. `cor(diamonds$carat, diamonds$price)`.
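As a minimal sketch of the danger, `geom_smooth()` will happily fit a trend line to pure noise (the simulated data below is illustrative, not one of this chapter's data sets):

```{r}
set.seed(1)
noise <- data.frame(x = runif(100), y = runif(100))
cor(noise$x, noise$y)  # near zero: no real relationship

ggplot(data = noise) +
  geom_point(mapping = aes(x = x, y = y)) +
  geom_smooth(mapping = aes(x = x, y = y))
```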
`geom_quantile()` fits a different type of model to your data. Use it to display the results of a quantile regression (see `?rq` for details). Like `geom_smooth()`, `geom_quantile()` takes a formula argument that describes the relationship between $x$ and $y$.
```{r message = FALSE}
ggplot(data = diamonds) +
geom_point(mapping = aes(x = carat, y = price)) +
geom_quantile(mapping = aes(x = carat, y = price),
formula = y ~ poly(x, 2))
```
`geom_smooth()` and `geom_quantile()` summarise the relationship between two variables as a function, but you can also summarise the relationship as a bivariate distribution.
`geom_bin2d()` divides the coordinate plane into a two dimensional grid and then displays the number of observations that fall into each bin in the grid. This technique lets you see where the mass of the data lies; bins with a light fill color contain more data than bins with a dark fill color. Bins with no fill contain no data at all.
```{r}
ggplot(data = diamonds) +
geom_bin2d(mapping = aes(x = carat, y = price), binwidth = c(0.1, 500))
```
`geom_hex()` works similarly to `geom_bin2d()`, but it divides the coordinate plane into hexagon shaped bins. This can reduce visual artifacts that are introduced by the aligning edges of rectangular bins.
```{r}
ggplot(data = diamonds) +
  geom_hex(mapping = aes(x = carat, y = price))
```

`geom_density2d()` gives a third way to display a bivariate distribution: it draws the contours of a two dimensional kernel density estimate of the data.

```{r}
ggplot(data = diamonds) +
geom_density2d(mapping = aes(x = carat, y = price))
```
##### Visualize correlations between three variables
There are two ways to add three (or more) variables to a two dimensional plot. You can map additional variables to aesthetics within the plot, or you can use a geom that is designed to visualize three variables.
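As a minimal sketch of the first way, map the third variable to the color aesthetic of a scatterplot:

```{r}
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = class))
```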
`geom_raster()` and `geom_tile()`
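As a sketch of the second way (the choice of variables here is illustrative), `geom_tile()` can display a third variable as the fill color of a grid of tiles:

```{r}
# count the observations in each cut/color cell, then map the
# count to the fill of a tile
diamonds %>%
  count(cut, color) %>%
  ggplot() +
  geom_tile(mapping = aes(x = cut, y = color, fill = n))
```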
### Summarizing covariation

### Modeling covariation

Models have one of the richest literatures on how to select and test them, so we've reserved them for their own section. Modelling brings together the various components of data science more so than any other data science task. So we'll postpone its coverage until you can program and wrangle data, two skills that will aid your ability to select models.
#### How to fit a model
#### How to quickly look at a model
#### How to quickly access a model's residuals
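As a preview, a minimal sketch of these three steps with base R (the model below is illustrative): `lm()` fits a simple linear model, `summary()` gives a quick look at it, and `resid()` extracts its residuals:

```{r}
mod <- lm(price ~ carat, data = diamonds)  # fit a model
summary(mod)                               # quickly look at it
head(resid(mod))                           # its first few residuals
```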
## Bias can ruin everything
## Bring it all together: variables, values, observations, variation, natural laws, models