Merge branch 'master' of github.com:hadley/r4ds

This commit is contained in:
hadley 2016-01-07 10:20:37 -06:00
commit 66a583e954
2 changed files with 27 additions and 26 deletions


@@ -16,27 +16,27 @@ Data science is an exciting discipline that allows you to turn raw data into und
Data science is a huge field, and there's no way you can master it by reading a single book. The goal of this book is to give you a solid foundation with the most important tools. Our model of the tools needed in a typical data science project looks something like this:
```{r}
```{r echo = FALSE}
knitr::include_graphics("diagrams/data-science.png")
```
First you must __import__ your data in R. This typically means that you take data stored in file, in a database, or in an web API, and load it into a data frame in R. If you can't get your data into R, you can't do data science on it!
First you must __import__ your data into R. This typically means that you take data stored in a file, database, or web API, and load it into a data frame in R. If you can't get your data into R, you can't do data science on it!
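To make that concrete, here's a minimal sketch of an import step, assuming a hypothetical CSV file called `heights.csv` in your working directory:
```{r eval = FALSE}
# A hypothetical import: read a CSV file into a data frame
heights <- read.csv("heights.csv", stringsAsFactors = FALSE)
head(heights)
```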
Once you've imported your data, it's a good idea to __tidy__ it. Tidying your data means storing it in a standard form that matches the semantics of the dataset with the way its storage. In brief, when your data is tidy, each column is a variable, and each row is an observation. Working with tidy data is important because the consistency lets you spend your time struggling with your questions, not fighting to get data into the right form for different functions.
Once you've imported your data, it is a good idea to __tidy__ it. Tidying your data means storing it in a standard form that matches the semantics of the dataset with the way it is stored. In brief, when your data is tidy, each column is a variable, and each row is an observation. Working with tidy data is important because the consistency lets you spend your time struggling with your questions, not fighting to get data into the right form for different functions.
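As a sketch of what tidying can look like, here's a made-up table reshaped with tidyr's `gather()` (the data is purely illustrative):
```{r eval = FALSE}
library(tidyr)

# Untidy: the year variable is spread across two column names
untidy <- data.frame(
  country = c("A", "B"),
  `1999` = c(745, 37737),
  `2000` = c(2666, 80488),
  check.names = FALSE
)

# Tidy: each column is a variable, each row is an observation
gather(untidy, key = "year", value = "cases", `1999`, `2000`)
```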
Once you have tidy data, a common first step is to __transform__ it to add new variables that are functions of existing variables (like computing velocity from speed and distance), to rename the variables to be easier to understand, to sort your data, or summarise it.
Once you have tidy data, a common first step is to __transform__ it. You may zero in on a subset of data, add new variables that are functions of existing variables, calculate a set of summary statistics, or sort your data according to values.
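With dplyr (and the built-in `mtcars` data standing in for your own), each of those steps is a single verb; a sketch:
```{r eval = FALSE}
library(dplyr)

mtcars %>%
  filter(cyl == 4) %>%           # zero in on a subset of the data
  mutate(kml = mpg * 0.425) %>%  # add a new variable computed from an existing one
  arrange(desc(kml))             # sort according to its values

summarise(mtcars, mean_mpg = mean(mpg))  # calculate a summary statistic
```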
There are two main engines of knowledge generation: visualisation and modelling. These have complementary strengths and weaknesses so any real analysis will iterate between them many times. For example, you might see a scatterplot that inspires you to fit a linear model, then you transform the data to add a column of residuals from the model, and look at another scatterplot.
There are two main engines of knowledge generation: visualisation and modelling. These have complementary strengths and weaknesses so any real analysis will iterate between them many times. For example, you might see a scatterplot that inspires you to fit a linear model. Then you transform the data to add a column of residuals from the model, and look at another scatterplot, this time of the residuals.
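A minimal base-R sketch of that loop, again using `mtcars` as a stand-in dataset:
```{r eval = FALSE}
plot(mtcars$wt, mtcars$mpg)         # a scatterplot suggests a linear trend
mod <- lm(mpg ~ wt, data = mtcars)  # so fit a linear model
mtcars$resid <- resid(mod)          # transform: add a column of residuals
plot(mtcars$wt, mtcars$resid)       # then look at a scatterplot of the residuals
```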
__Visualisation__ is a fundamentally human activity. A good visualisation will show you things that you did not expect, or raise new questions of the data. A good visualisation might also hint that you're asking the wrong question and you need to refine your thinking. In short, visualisations can surprise you, but don't scale particularly well.
__Visualisation__ is a fundamentally human activity. A good visualisation will show you things that you did not expect, or raise new questions of the data. A good visualisation might also hint that you're asking the wrong question and you need to refine your thinking. In short, visualisations can surprise you. However, visualisations don't scale particularly well.
__Models__ are the complementary tools to visualisation. Models are a fundamentally mathematical or computation tool, so generally scale well. Even when they don't, it's usually cheaper to buy more computers than it is to buy more brains. But every model makes assumptions, and by its very nature a model can not question its own assumptions. That means a model can not fundamentally surprise you.
__Models__ are the complementary tools to visualisation. Models are a fundamentally mathematical or computational tool, so they generally scale well. Even when they don't, it's usually cheaper to buy more computers than it is to buy more brains. But every model makes assumptions, and by its very nature a model cannot question its own assumptions. That means a model cannot fundamentally surprise you.
It doesn't matter how well models and visualisation have led you to understand the data, unless you can __commmunicate__ your results to other people. Communication is an absolutely critical part of any data analysis project.
The last step of data science is __communication__, an absolutely critical part of any data analysis project. It doesn't matter how well models and visualisation have led you to understand the data unless you can communicate your results to other people.
There's one important toolset that's not shown in the diagram: programming. Programming is a cross-cutting tool that you use in every part of the project. You don't need to be an expert programmer to be a data scientist, but learning more about programming pays off. Becoming a better programmer will allow you to automate common tasks, and solve new problems with greater ease.
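For instance, wrapping a repeated step in a function is one small way programming automates a common task (the function below is a hypothetical illustration):
```{r eval = FALSE}
# Written once, reused everywhere: rescale a numeric vector to [0, 1]
rescale01 <- function(x) {
  rng <- range(x, na.rm = TRUE)
  (x - rng[1]) / (rng[2] - rng[1])
}
rescale01(c(1, 5, 10))
```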
You'll use these tools in every data science project, but for most projects they're not enough. There's a rough 80-20 rule at play: you can probably tackle 80% of every project using the tools we'll teach you, but you'll need more to tackle the remaining 20%. Throughout this book we'll point you to resources where you can learn more.
You'll use these tools in every data science project, but for most projects they're not enough. There's a rough 80-20 rule at play: you can probably tackle 80% of every project using the tools that we'll teach you, but you'll need more to tackle the remaining 20%. Throughout this book we'll point you to resources where you can learn more.
## How you will learn
@@ -45,13 +45,13 @@ The above description of the tools of data science was organised roughly around
* Starting with data ingest and tidying is sub-optimal because 80% of the time
it's routine and boring, and the other 20% of the time it's horrendously
frustrating. Instead, we'll start with visualisation and transformation on
data that's already been imported and tidied. That way when you ingest
data that's already been imported and tidied. That way, when you ingest
and tidy your own data, you'll be able to keep your motivation high because
  you know the pain is worth it for what you can accomplish once it's
  done.
* Some topics are best explained with other tools. For example, we believe that
it's easier to understand how models work as a tool for data science, if you
it's easier to understand how models work as a tool for data science if you
already know about visualisation, data transformation, and tidy data.
* Programming tools are not necessarily interesting in their own right,
@@ -64,13 +64,13 @@ Within each chapter, we try and stick to a similar pattern: start with some moti
## What you won't learn
There are some important topics that this book doesn't cover. We believe it's important to stay ruthlessly focussed on the essentials so you can get up and running as quickly as possible. That means this book can't covered every important topic.
There are some important topics that this book doesn't cover. We believe it's important to stay ruthlessly focussed on the essentials so you can get up and running as quickly as possible. That means this book can't cover every important topic.
### Big data
This book proudly focusses on small, in-memory datasets. This is the right place to start because you can't tackle big data unless you have experience with small data. The tools you learn in this book will easily handle hundreds of megabytes of data, and with a little care you can typically use them to work with 1-2 Gb of data. If you're routinely working larger data (10-100 Gb, say), you should learn more about [data.table](https://github.com/Rdatatable/data.table). We don't teach here because it has a very concise interface that is harder to learn because it offers fewer linguistic cues. But if you're working with large data, the performance payoff is worth a little extra effort to learn it.
This book proudly focusses on small, in-memory datasets. This is the right place to start because you can't tackle big data unless you have experience with small data. The tools you learn in this book will easily handle hundreds of megabytes of data, and with a little care you can typically use them to work with 1-2 Gb of data. If you're routinely working with larger data (10-100 Gb, say), you should learn more about [data.table](https://github.com/Rdatatable/data.table). We don't teach data.table here because it has a very concise interface that is harder to learn: it offers fewer linguistic cues. But if you're working with large data, the performance payoff is worth a little extra effort to learn it.
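To give a flavour of that concise interface (a sketch only, assuming data.table is installed; this is not the approach we teach):
```{r eval = FALSE}
library(data.table)

dt <- as.data.table(mtcars)
# Filter, group, and summarise in a single dt[i, j, by] call
dt[cyl == 4, .(mean_mpg = mean(mpg)), by = gear]
```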
Many big data problems are often small data problems in disguise. Often your complete dataset is big, but the data needed to answer is a specific question is small. It's often possible to find a subset, subsample, or summary that fits in memory and still allows you to answer the question you're interested in. The challenge here is finding the right small data, which often requires a lot of iteration. We'll touch on this idea in [transform](#transform).
Many big data problems are often small data problems in disguise. Often your complete dataset is big, but the data needed to answer a specific question is small. It's often possible to find a subset, subsample, or summary that fits in memory and still allows you to answer the question that you're interested in. The challenge here is finding the right small data, which often requires a lot of iteration. We'll touch on this idea in [transform](#transform).
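One common tactic is a random subsample that fits comfortably in memory; a sketch, with `big` standing in for a large data frame you've already loaded:
```{r eval = FALSE}
big <- mtcars                          # stand-in for a much larger dataset
small <- big[sample(nrow(big), 10), ]  # a random subsample that fits in memory
```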
Another class of big data problem consists of many small data problems. Each individual problem might fit in memory, but you have millions of them. For example, you might want to fit a model to each person in your dataset. That would be trivial if you had just 10 or 100 people, but instead you have a million. Fortunately each problem is independent (sometimes called embarrassingly parallel), so you just need a system (like Hadoop) that allows you to send different datasets to different computers for processing. Once you've figured out how to answer the question for a single subset using the tools described in this book, you can use packages like SparkR, rhipe, and ddr to solve it for the complete dataset.
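On a single machine, that pattern looks like fitting one model per group; tools like SparkR then let you scale the same idea across a cluster. A base-R sketch:
```{r eval = FALSE}
# Fit one model per group (here: per cylinder count in mtcars)
models <- lapply(split(mtcars, mtcars$cyl), function(df) lm(mpg ~ wt, data = df))
lapply(models, coef)
```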
@@ -78,7 +78,7 @@ Another class of big data problem consists of many small data problems. Each ind
In this book, you won't learn anything about Python, Julia, or any other programming language useful for data science. This isn't because we think these tools are bad. They're not! And in practice, most data science teams use a mix of languages, often at least R and Python.
However, we strongly believe that it's best to master one tool at a time. You will get better faster if you dive deep, rather than spreading yourself thinly over many topics. This doesn't mean you should be only know one thing, just that you'll generally learn faster if you stick to one thing at a time.
However, we strongly believe that it's best to master one tool at a time. You will get better faster if you dive deep, rather than spreading yourself thinly over many topics. This doesn't mean you should only know one thing, just that you'll generally learn faster if you stick to one thing at a time.
### Non-rectangular data
@@ -93,7 +93,11 @@ This book focusses exclusively on rectangular data, data made up of variables, o
(usually all at the same time or on the same object). Observations contain
values that you measure on different variables.
This book focuses exclusively on structured data sets: collections of values that are each associated with a variable and an observation. There are lots of data that doesn't naturally fit in this paradigm: images, sounds, trees, text. But data frames are extremely common in science and in industry and we believe that they're a great place to start your data analysis journey.
This book focuses exclusively on structured data sets: collections of values that are each associated with a variable and an observation. There are lots of data sets that do not naturally fit in this paradigm: images, sounds, trees, text. But data frames are extremely common in science and in industry and we believe that they're a great place to start your data analysis journey.
### Formal Statistics and Machine Learning
This book focusses on practical tools for understanding your data: visualisation, modelling, and transformation. You can develop your understanding further by learning probability theory, statistical hypothesis testing, and machine learning methods, but we won't teach you those things here. There are many books that cover these topics, but few that integrate the other parts of the data science process. When you are ready, you can and should read books devoted to each of these topics. We recommend *Statistical Modeling: A Fresh Approach* by Danny Kaplan; *An Introduction to Statistical Learning* by James, Witten, Hastie, and Tibshirani; and *Applied Predictive Modeling* by Kuhn and Johnson.
## Prerequisites
@@ -115,15 +119,15 @@ knitr::include_graphics("screenshots/rstudio-layout.png")
You run R code in the __console__ pane. Textual output appears inline, and graphical output appears in the __output__ pane. You write more complex R scripts in the __editor__ pane.
There are three keyboard shortcuts that we strongly encourage that you learn because they'll save you so much time:
There are three keyboard shortcuts for the RStudio IDE that we strongly encourage you to learn because they'll save you so much time:
* Cmd + Enter: sends current line (or current selection) from the editor to
the console and runs it.
the console and runs it. (Ctrl + Enter on a PC)
* Tab: suggest possible completions for the text you've typed.
* Cmd + ↑: in the console, searches all commands you've typed that start with
those characters.
those characters. (Ctrl + ↑ on a PC)
If you want to see a list of all keyboard shortcuts, use the meta keyboard shortcut Alt + Shift + K: that's the keyboard shortcut to show all the other keyboard shortcuts.
@@ -148,7 +152,7 @@ pkgs <- c(
install.packages(pkgs)
```
R will download the packages from CRAN and install them in your system library. If you have problems installing, make that you are connected to the internet, and that you haven't blocked <http://cran.r-project.org> in your firewall or proxy.
R will download the packages from CRAN and install them in your system library. If you have problems installing, make sure that you are connected to the internet, and that you haven't blocked <http://cran.r-project.org> in your firewall or proxy.
You will not be able to use the functions, objects, and help files in a package until you load it with `library()`. After you have downloaded the packages, you can load any of the packages into your current R session with the `library()` command, e.g.
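For instance, assuming ggplot2 was among the packages you installed:
```{r eval = FALSE}
library(ggplot2)
```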
@@ -162,16 +166,13 @@ You will need to reload the package every time you start a new R session.
* Google. Always a great place to start! Adding "R" to a query is usually
enough to filter it down. If you ever hit an error message that you
don't know how to handle, great idea to google it.
don't know how to handle, it is a great idea to google it.
If your operating system defaults to another language, you can use
  `Sys.setenv(LANGUAGE = "en")` to tell R to use English. That's likely to
get you to common solutions more quickly.
* StackOverflow. How to make a reproducible example.
([reprex](https://github.com/jennybc/reprex))
Unfortunately the R stackoverflow community is not always the friendliest.
* StackOverflow. Be sure to read and use [How to make a reproducible example](http://adv-r.had.co.nz/Reproducibility.html) ([reprex](https://github.com/jennybc/reprex)) before posting. Unfortunately the R StackOverflow community is not always the friendliest.
* Twitter. #rstats hashtag is very welcoming. Great way to keep up with
what's happening in the community.


@@ -14,7 +14,7 @@ knitr::opts_chunk$set(
> "The simple graph has brought more information to the data analysts mind than any other device."---John Tukey
Visualization makes data decipherable. Have you ever tried to study a table of raw data? You can examine a couple of values at a time, but you cannot attend to many values at once. The data overloads your attention span, which makes it hard to spot patterns in the data. See this for yourself; can you spot the striking relationship between $X$ and $Y$ in the table below?
Visualization makes data decipherable. Consider what it is like to study a table of raw data. You can examine a couple of values at a time, but you cannot attend to many values at once. The data overloads your attention span, which makes it hard to spot patterns in the data. See this for yourself; can you spot the striking relationship between $X$ and $Y$ in the table below?
```{r data, echo=FALSE}
x <- rep(seq(0.2, 1.8, length = 5), 2) + runif(10, -0.15, 0.15)