Update import.Rmd

typos
This commit is contained in:
Radu Grosu 2016-01-27 13:33:35 +00:00
parent 8101753650
commit de44f2e3d6
1 changed file with 73 additions and 73 deletions


@@ -27,14 +27,14 @@ There are many ways to read flat files into R. If you've been using R for a while,
* These functions are typically much faster (~10x) than the base equivalents.
Long running jobs also have a progress bar, so you can see what's
happening. (If you're looking for raw speed, try `data.table::fread()`,
it's slightly less flexible than readr, but can be twice as fast.)
* They have more flexible parsers: they can read in dates, times, currencies,
percentages, and more.
* They fail to do some annoying things like converting character vectors to
factors, munging the column headers to make sure they're valid R
variable names, and using row names.
* They return objects with class `tbl_df`. As you saw in the dplyr chapter,
@@ -45,24 +45,24 @@ There are many ways to read flat files into R. If you've been using R for a while,
sometimes need to supply a few more arguments when using them the first
time, but they'll definitely work on other people's computers. The base R
functions take a number of settings from system defaults, which means that
code that works on your computer might not work on someone else's.
Make sure you have the readr package (`install.packages("readr")`).
Most of readr's functions are concerned with turning flat files into data frames:
* `read_csv()` reads comma delimited files, `read_csv2()` reads semi-colon
separated files (common in countries where `,` is used as the decimal place),
`read_tsv()` reads tab delimited files, and `read_delim()` reads in files
with a user-supplied delimiter (see the short sketch after this list).
* `read_fwf()` reads fixed width files. You can specify fields either by their
widths with `fwf_widths()` or their position with `fwf_positions()`.
`read_table()` reads a common variation of fixed width files where columns
are separated by white space.
* `read_log()` reads Apache style logs. (But also check out
[webreadr](https://github.com/Ironholds/webreadr) which is built on top
of `read_log()`, but provides many more helpful tools.)
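As a small sketch of the delimited readers above, using inline strings invented for illustration (each can also take a path):
```{r, eval = FALSE}
# read_csv() and friends accept literal data as well as a path
read_csv("a,b\n1,2\n3,4")
read_csv2("a;b\n1,5;2,5")            # `;` as delimiter, `,` as decimal mark
read_tsv("a\tb\n1\t2")
read_delim("a|b\n1|2", delim = "|")
```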
readr also provides a number of functions for reading files off disk into simpler data structures:
@@ -73,29 +73,29 @@ readr also provides a number of functions for reading files off disk into simpler
These might be useful for other programming tasks.
As well as reading data from disk, readr also provides tools for working with data frames and character vectors in R:
* `type_convert()` applies the same parsing heuristics to the character columns
in a data frame. You can override its choices using `col_types`.
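A minimal sketch of `type_convert()`, applied to a hand-built data frame of character columns (the columns are invented for illustration):
```{r, eval = FALSE}
df <- data.frame(x = c("1", "2", "3"),
                 y = c("1.5", "2.7", "3.9"),
                 stringsAsFactors = FALSE)
type_convert(df)  # x is re-parsed as an integer, y as a double
```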
For the rest of this chapter we'll focus on `read_csv()`. If you understand how to use this function, it will be straightforward to apply your knowledge to all the other functions in readr.
### Basics
The first two arguments of `read_csv()` are:
* `file`: path (or URL) to the file you want to load. Readr can automatically
decompress files ending in `.zip`, `.gz`, `.bz2`, and `.xz`. This can also
be a literal csv file, which is useful for experimenting and creating
reproducible examples.
* `col_names`: column names. There are three options:
* `TRUE` (the default), which reads column names from the first row
of the file
* `FALSE` numbers columns sequentially from `X1` to `Xn`.
* A character vector, used as column names. If these don't match up
with the columns in the data, you'll get a warning message.
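To make these two arguments concrete, here is a small sketch using inline csv data invented for illustration:
```{r, eval = FALSE}
# Default: the first row supplies the column names
read_csv("x,y\n1,2\n3,4")

# No header row: columns are named X1, X2, ...
read_csv("1,2\n3,4", col_names = FALSE)

# Supply the names yourself
read_csv("1,2\n3,4", col_names = c("x", "y"))
```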
@@ -109,7 +109,7 @@ EXAMPLE
Typically, you'll see a lot of warnings if readr has guessed the column type incorrectly. This most often occurs when the first 1000 rows are different to the rest of the data. Perhaps there is a lot of missing data there, or maybe your data is mostly numeric but a few rows have characters. Fortunately, it's easy to fix these problems using the `col_types` argument.
(Note that if you have a very large file, you might want to set `n_max` to 10,000 or 100,000. That will speed up iteration while you're finding common problems.)
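A rough sketch of that workflow; `"mypath.csv"` is a placeholder path, not a file from this chapter:
```{r, eval = FALSE}
# Read an initial slice of a large file, then inspect what readr struggled with
first_rows <- read_csv("mypath.csv", n_max = 10000)
problems(first_rows)
```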
Specifying the `col_types` looks like this:
@@ -122,24 +122,24 @@ read_csv("mypath.csv", col_types = col(
You can use the following types of columns:
* `col_integer()` (i) and `col_double()` (d) specify integers and doubles.
`col_logical()` (l) parses TRUE, T, FALSE and F into a logical vector.
`col_character()` (c) leaves strings as is.
* `col_number()` (n) is a more flexible parser for numbers embedded in other
strings. It will look for the first number in a string, ignoring non-numeric
prefixes and suffixes. It will also ignore the grouping mark specified by
the locale (see below for more details).
* `col_factor()` (f) allows you to load data directly into a factor if you know
what the levels are.
* `col_skip()` (_, -) completely ignores a column.
* `col_date()` (D), `col_datetime()` (T) and `col_time()` (t) parse into dates,
date times, and times as described below.
You might have noticed that each column parser has a one letter abbreviation, which you can use instead of the full function call (assuming you're happy with the default arguments):
```{r, eval = FALSE}
read_csv("mypath.csv", col_types = cols(
@@ -196,14 +196,14 @@ If these defaults don't work for your data you can supply your own date time format
* Seconds: `%S` (integer seconds), `%OS` (partial seconds).
* Time zone: `%Z` (as name, e.g. `America/Chicago`), `%z` (as offset from UTC,
e.g. `+0800`). If you're American, note that "EST" is a Canadian time zone
that does not have daylight saving time. It is _not_ Eastern Standard
Time!
* AM/PM indicator: `%p`.
* Non-digits: `%.` skips one non-digit character, `%*` skips any number of
non-digits.
The best way to figure out the correct string is to create a few examples in a character vector, and test with one of the parsing functions. For example:
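A minimal sketch of that, with strings and formats invented for illustration:
```{r, eval = FALSE}
x <- c("01/02/2015 10:30", "14/11/2015 18:05")
parse_datetime(x, format = "%d/%m/%Y %H:%M")

parse_date("2015-Jan-01", format = "%Y-%b-%d")
```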
@@ -236,11 +236,11 @@ The settings you are most likely to need to change are:
locale("fr")
locale("fr", asciify = TRUE)
```
* The character encoding used in the file. If you don't know the encoding
you can use `guess_encoding()`. It's not perfect, but if you have a decent
sample of text, it's likely to be able to figure it out.
Readr converts all strings into UTF-8 as this is safest to work with across
platforms. (It's also what every stringr operation does.)
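A rough sketch of that workflow; `"mystery.csv"` is a placeholder path and the encoding shown is only an example guess:
```{r, eval = FALSE}
guess_encoding("mystery.csv")

# Pass the most likely candidate back in via the locale
read_csv("mystery.csv", locale = locale(encoding = "ISO-8859-1"))
```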
@@ -264,61 +264,61 @@ Needs to discuss how data types in different languages are converted to R. Simil
`data_frame()` is a nice way to create data frames. It encapsulates best practices for data frames:
* It never changes an input's type (i.e., no more `stringsAsFactors = FALSE`!).
```{r}
data.frame(x = letters) %>% sapply(class)
data_frame(x = letters) %>% sapply(class)
```
This makes it easier to use with list-columns:
```{r}
data_frame(x = 1:3, y = list(1:5, 1:10, 1:20))
```
List-columns are most commonly created by `do()`, but they can be useful to
create by hand.
* It never adjusts the names of variables:
```{r}
data.frame(`crazy name` = 1) %>% names()
data_frame(`crazy name` = 1) %>% names()
```
* It evaluates its arguments lazily and sequentially:
```{r}
data_frame(x = 1:5, y = x ^ 2)
```
* It adds the `tbl_df()` class to the output so that if you accidentally print a large
data frame you only get the first few rows.
```{r}
data_frame(x = 1:5) %>% class()
```
* It changes the behaviour of `[` to always return the same type of object:
subsetting using `[` always returns a `tbl_df()` object; subsetting using
`[[` always returns a column.
You should be aware of one case where subsetting a `tbl_df()` object
will produce a different result than a `data.frame()` object:
```{r}
df <- data.frame(a = 1:2, b = 1:2)
str(df[, "a"])
tbldf <- tbl_df(df)
str(tbldf[, "a"])
```
* It never uses `row.names()`. The whole point of tidy data is to
store variables in a consistent way. So it never stores a variable as a
special attribute.
* It only recycles vectors of length 1. This is because recycling vectors of greater lengths
is a frequent source of bugs.
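A quick sketch of the recycling rule:
```{r, error = TRUE}
data_frame(x = 1:4, y = 1)    # a length-1 value is recycled
data_frame(x = 1:4, y = 1:2)  # anything longer is an error
```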
### Coercion
@@ -326,13 +326,13 @@ Needs to discuss how data types in different languages are converted to R. Simil
To complement `data_frame()`, dplyr provides `as_data_frame()` to coerce lists into data frames. It does two things:
* It checks that the input list is valid for a data frame, i.e. that each element
is named, is a 1d atomic vector or list, and all elements have the same
length.
* It sets the class and attributes of the list to make it behave like a data frame.
This modification does not require a deep copy of the input list, so it's
very fast.
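A small sketch of those checks in action; the list here is invented for illustration:
```{r, eval = FALSE}
l <- list(x = 1:3, y = c("a", "b", "c"))
as_data_frame(l)

# An unnamed or ragged list fails the validity check
as_data_frame(list(1:3, 1:2))
```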
This is much simpler than `as.data.frame()`. It's hard to explain precisely what `as.data.frame()` does, but it's similar to `do.call(cbind, lapply(x, data.frame))` - i.e. it coerces each component to a data frame and then `cbind()`s them all together. Consequently `as_data_frame()` is much faster than `as.data.frame()`:
```{r}
@@ -353,49 +353,49 @@ There are three key differences between tbl_dfs and data.frames:
* When you print a tbl_df, it only shows the first ten rows and all the
columns that fit on one screen. It also prints an abbreviated description
of the column type:
```{r}
data_frame(x = 1:1000)
```
You can control the default appearance with options:
* `options(dplyr.print_max = n, dplyr.print_min = m)`: if more than `n`
rows print `m` rows. Use `options(dplyr.print_max = Inf)` to always
show all rows.
* `options(dplyr.width = Inf)` will always print all columns, regardless
of the width of the screen.
* When you subset a tbl\_df with `[`, it always returns another tbl\_df.
Contrast this with a data frame: sometimes `[` returns a data frame and
sometimes it just returns a single column:
```{r}
df1 <- data.frame(x = 1:3, y = 3:1)
class(df1[, 1:2])
class(df1[, 1])
df2 <- data_frame(x = 1:3, y = 3:1)
class(df2[, 1:2])
class(df2[, 1])
```
To extract a single column use `[[` or `$`:
```{r}
class(df2[[1]])
class(df2$x)
```
* When you extract a variable with `$`, tbl\_dfs never do partial
matching. They'll throw an error if the column doesn't exist:
```{r, error = TRUE}
df <- data.frame(abc = 1)
df$a
df2 <- data_frame(abc = 1)
df2$a
```