Some big picture stuff in the overview

This commit is contained in:
hadley 2015-09-21 08:21:59 -05:00
parent daef68dce7
commit 4a70fc4a91
1 changed file with 79 additions and 0 deletions


@@ -4,6 +4,85 @@ title: Welcome
output: bookdown::html_chapter
---
## Overview
The goal of "R for Data Science" is to give you a solid foundation in using R to do data science. The aim is not to be exhaustive, but instead to focus on what we think are the critical skills for data science:
* Getting your data into R so you can work with it.
* Wrangling your data into a tidy form, so it's easier to work with and you
spend your time struggling with your questions, not fighting to get data
into the right form for different functions.
* Manipulating your data to add variables and compute basic summaries (a short
sketch of this appears after this list). We'll show you the broad tools, and
focus on three common types of data: numbers, strings, and date/times.
* Visualising your data to gain insight. Visualisations are one of the most
important tools of data science because they can surprise you: you can
see something in a visualisation that you did not expect. Visualisations
are also really helpful for refining your questions of the data.
* Modelling your data to scale visualisations to larger datasets, and to
remove strong patterns. Modelling is a very deep topic - we can't possibly
cover all the details, but we'll give you a taste of how you can use it,
and where you can go to learn more.
* Communicating your results to others. It doesn't matter how great your
analysis is unless you can communicate the results to others. We'll show
how you can create static reports with rmarkdown, and interactive apps with
shiny.
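As a taste of what manipulation and visualisation look like in practice, here is a minimal sketch using the built-in `mtcars` dataset. The use of dplyr and ggplot2 is an assumption about tooling, chosen because they are widely used R packages for these tasks rather than anything prescribed above:

```r
library(dplyr)
library(ggplot2)

# Manipulate: add a variable, then compute a basic summary per group.
mtcars %>%
  mutate(kpl = 0.425 * mpg) %>%   # convert miles/gallon to km/litre
  group_by(cyl) %>%
  summarise(mean_kpl = mean(kpl))

# Visualise: look for a pattern you might not have expected.
ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  geom_smooth(method = "lm")
```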
## Learning data science
Above, I've listed the components of the data science process in roughly the order you'll encounter them in an analysis (although of course you'll iterate multiple times). This, however, is not the order in which you'll encounter them in this book. This is because:
* Starting with data ingest is boring. It's much more interesting to learn
some new visualisation and manipulation tools on data that's already been
imported and cleaned. You'll later learn the skills to apply these new ideas
to your own data.
* Some topics, like modelling, are best explained with other tools, like
visualisation and manipulation. These need to come later in the book.
We've chosen this order based on our experience teaching live classes, and it's been carefully designed to keep you motivated. We try to stick to a similar pattern within each chapter: start with some motivating examples so you can see the bigger picture, then dive into the details.
Each section of the book also comes with exercises to help you practice what you've learned. It's tempting to skip these, but there's no better way to learn than by practicing. If you were taking a class with either of us, we'd force you to do them by making them homework. (Sometimes I feel like the art of teaching is tricking people into doing what's in their own best interests.)
## R and big data
This book focuses almost exclusively on in-memory datasets. Roughly speaking, data comes in three sizes:
* Small data: data that fits in memory on a laptop, ~10 GB. Note that small
data is still big! R is great with small data.
* Medium data: data that fits in memory on a powerful server, ~5 TB. It's
possible to use R with this much data, but it's challenging. Dealing
effectively with medium data requires effective use of all the cores on a
computer. That's not hard to do from R (see the sketch after this list), but
it requires some thought, and many packages do not take advantage of R's
parallel tools.
* Big data: data that must be stored on disk or spread across the memory of
multiple machines. Writing code that works efficiently with this sort of data
is very challenging. Tools for this sort of data will never be written in
R: they'll be written in languages specially designed for high performance
computing, like C/C++, Fortran, or Scala. But R can still talk to these systems.
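As a concrete illustration of using multiple cores from R, here is a minimal sketch using the base `parallel` package. The task (summarising a folder of CSV files) and the column name `value` are invented for illustration; they stand in for any per-chunk computation:

```r
library(parallel)

# A hypothetical per-chunk task: summarise one file's worth of data.
summarise_chunk <- function(path) {
  df <- read.csv(path)
  mean(df$value, na.rm = TRUE)
}

paths <- list.files("data", pattern = "\\.csv$", full.names = TRUE)

# mclapply() forks one worker per core (on Linux/Mac; on Windows use
# makeCluster() + parLapply() instead).
results <- mclapply(paths, summarise_chunk, mc.cores = detectCores())
```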
The other thing to bear in mind is that while all your data might be big, typically you don't need all of it to answer a specific question:
* Many questions can be answered with the right small dataset. It's often
possible to find a subset, subsample, or summary that fits in memory and
still allows you to answer the question you're interested in. The challenge
here is finding the right small data, which often requires a lot of iteration.
* Other challenges arise because, while each individual problem fits in
memory, you have hundreds of thousands or millions of them. For example, you
might want to fit a model to each person in your dataset. That would be
trivial if you had just 10 or 100 people, but instead you have a million.
Fortunately each problem is independent of the others (sometimes called
embarrassingly parallel), so you just need a system (like Hadoop) that allows
you to send different datasets to different computers for processing. A small
in-memory version of this pattern is sketched after this list.
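To make the "one model per person" idea concrete, here is a minimal in-memory sketch in base R. The data frame and its columns (`person`, `x`, `y`) are invented for illustration:

```r
# Invented example data: many small, independent modelling problems.
df <- data.frame(
  person = rep(1:1000, each = 20),
  x = rnorm(20000),
  y = rnorm(20000)
)

# Split-apply: fit one linear model per person. Because each fit is
# independent, the lapply() could be swapped for a parallel or
# distributed equivalent when there are millions of groups.
models <- lapply(split(df, df$person), function(d) lm(y ~ x, data = d))

# Extract one number per person, e.g. the slope.
slopes <- vapply(models, function(m) coef(m)[["x"]], numeric(1))
```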
## Prerequisites
To run the code in this book, you will need to have R installed on your computer, as well as the RStudio IDE, an application that makes it easier to use R. Both R and the RStudio IDE are free and easy to install.
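You'll also want a few add-on packages as you work through the book. The overview above names rmarkdown and shiny; the other packages below are only a guess at what the book will use, not an official list:

```r
# Install packages from CRAN. rmarkdown and shiny are mentioned in the
# overview; the rest are common data-science packages and are a guess.
install.packages(c("rmarkdown", "shiny", "ggplot2", "dplyr", "tidyr", "readr"))
```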