# Strings
```{r setup-strings, include = FALSE, cache = FALSE}
library(stringr)
common <- rcorpora::corpora("words/common")$commonWords
fruit <- rcorpora::corpora("foods/fruits")$fruits
sentences <- readr::read_lines("harvard-sentences.txt")
```
<!-- look at http://d-rug.github.io/blog/2015/regex.fick/, http://qntm.org/files/re/re.html -->
This chapter introduces you to string manipulation in R. You'll learn the basics of how strings work and how to create them by hand, but the focus of this chapter will be on regular expressions. Character variables typically come as unstructured or semi-structured data. When this happens, you need some tools to make order from madness. Regular expressions are a very concise language for describing patterns in strings. When you first look at them, you'll think a cat walked across your keyboard, but as you learn more, you'll see how they allow you to express complex patterns very concisely. The goal of this chapter is not to teach you every detail of regular expressions. Instead I'll give you a solid foundation that allows you to solve a wide variety of problems and point you to resources where you can learn more.
This chapter will focus on the __stringr__ package. This package provides a consistent set of functions that all work the same way and are easier to learn than the base R equivalents. We'll also take a brief look at the __stringi__ package. This package is what stringr uses internally: it's more complex than stringr (and includes many many more functions). stringr includes tools to let you tackle the most common 90% of string manipulation challenges; stringi contains functions to let you tackle the last 10%.
## String basics
In R, strings are stored in a character vector. You can create strings with either single quotes or double quotes: there is no difference in behaviour. I recommend always using `"`, unless you want to create a string that contains multiple `"`, in which case use `'`.
```{r}
string1 <- "This is a string"
string2 <- 'If I want to include a "quote" inside a string, I use single quotes'
```
To include a literal single or double quote in a string you can use `\` to "escape" it:
```{r}
double_quote <- "\"" # or '"'
single_quote <- '\'' # or "'"
```
That means if you want to include a literal backslash, you'll need to double it up: `"\\"`.
Beware that the printed representation of a string is not the same as string itself, because the printed representation shows the escapes. To see the raw contents of the string, use `writeLines()`:
```{r}
x <- c("\"", "\\")
x
writeLines(x)
```
There are a handful of other special characters. The most commonly used are `"\n"`, newline, and `"\t"`, tab, but you can see the complete list by requesting help on `"`: `?'"'`, or `?"'"`. You'll also sometimes see strings like `"\u00b5"`: this is a way of writing non-English characters that works on all platforms:
```{r}
x <- "\u00b5"
x
```
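The `"\n"` and `"\t"` escapes work the same way; here's a quick sketch of how they print versus how `writeLines()` renders them:
```{r}
# The escapes are stored as single characters; writeLines() shows their effect
x <- c("line one\nline two", "col1\tcol2")
x
writeLines(x)
```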
### String length
Base R contains many functions to work with strings but we'll generally avoid them because they're inconsistent and hard to remember. Their behaviour is particularly inconsistent when it comes to missing values. For example, `nchar()`, which gives the length of a string, returns 2 for `NA` (instead of `NA`):
```{r}
# Bug will be fixed in R 3.3.0
nchar(NA)
```
Instead we'll use functions from stringr. These have more intuitive names, and all start with `str_`:
```{r}
str_length(NA)
```
The common `str_` prefix is particularly useful if you use RStudio, because typing `str_` will trigger autocomplete, allowing you to see all stringr functions:
```{r, echo = FALSE}
knitr::include_graphics("screenshots/stringr-autocomplete.png")
```
### Combining strings
To combine two or more strings, use `str_c()`:
```{r}
str_c("x", "y")
str_c("x", "y", "z")
```
Use the `sep` argument to control how they're separated:
```{r}
str_c("x", "y", sep = ", ")
```
Like most other functions in R, missing values are contagious. If you want them to print as `"NA"`, use `str_replace_na()`:
```{r}
x <- c("abc", NA)
str_c("|-", x, "-|")
str_c("|-", str_replace_na(x), "-|")
```
As shown above, `str_c()` is vectorised, automatically recycling shorter vectors to the same length as the longest:
```{r}
str_c("prefix-", c("a", "b", "c"), "-suffix")
```
Objects of length 0 are silently dropped. This is particularly useful in conjunction with `if`:
```{r}
name <- "Hadley"
time_of_day <- "morning"
birthday <- FALSE
str_c("Good ", time_of_day, " ", name,
  if (birthday) " and HAPPY BIRTHDAY",
  "."
)
```
To collapse vectors into a single string, use `collapse`:
```{r}
str_c(c("x", "y", "z"), collapse = ", ")
```
### Subsetting strings
You can extract parts of a string using `str_sub()`. As well as the string, `str_sub()` takes `start` and `end` arguments which give the (inclusive) position of the substring:
```{r}
x <- c("Apple", "Banana", "Pear")
str_sub(x, 1, 3)
# negative numbers count backwards from end
str_sub(x, -3, -1)
```
Note that `str_sub()` won't fail if the string is too short: it will just return as much as possible:
```{r}
str_sub("a", 1, 5)
```
You can also use the assignment form of `str_sub()`, `` `str_sub<-()` ``, to modify strings:
```{r}
str_sub(x, 1, 1) <- str_to_lower(str_sub(x, 1, 1))
x
```
### Locales
Above I used `str_to_lower()` to change to lower case. You can also use `str_to_upper()` or `str_to_title()`. However, changing case is more complicated than it might at first seem because different languages have different rules for changing case. You can pick which set of rules to use by specifying a locale:
```{r}
# Turkish has two i's: with and without a dot, and it
# has a different rule for capitalising them:
str_to_upper(c("i", "ı"))
str_to_upper(c("i", "ı"), locale = "tr")
```
The locale is specified as an ISO 639 language code, which is a two or three letter abbreviation. If you don't already know the code for your language, [Wikipedia](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) has a good list. If you leave the locale blank, it will use the current locale.
Another important operation that's affected by the locale is sorting. The base R `order()` and `sort()` functions sort strings using the current locale. If you want robust behaviour across different computers, you may want to use `str_sort()` and `str_order()` which take an additional `locale` argument:
```{r}
x <- c("apple", "eggplant", "banana")
str_sort(x, locale = "en") # English
str_sort(x, locale = "haw") # Hawaiian
```
### Exercises
1. In your own words, describe the difference between the `sep` and `collapse`
arguments to `str_c()`.
1. In code that doesn't use stringr, you'll often see `paste()` and `paste0()`.
What's the difference between the two functions? What stringr function are
they equivalent to? How do the functions differ in their handling of
`NA`?
1. Use `str_length()` and `str_sub()` to extract the middle character from
a character vector.
1. What does `str_wrap()` do? When might you want to use it?
1. What does `str_trim()` do? What's the opposite of `str_trim()`?
1. Write a function that turns (e.g.) a vector `c("a", "b", "c")` into
the string `a, b, and c`. Think carefully about what it should do if
given a vector of length 0, 1, or 2.
## Matching patterns with regular expressions
Regular expressions, regexps for short, are a very terse language that allows you to describe patterns in strings. They take a little while to get your head around, but once you've got it you'll find them extremely useful.
To learn regular expressions, we'll use `str_view()` and `str_view_all()`. These functions take a character vector and a regular expression, and show you how they match. We'll start with very simple regular expressions and then gradually get more and more complicated. Once you've mastered pattern matching, you'll learn how to apply those ideas with various stringr functions.
### Basic matches
The simplest patterns match exact strings:
```{r, cache = FALSE}
x <- c("apple", "banana", "pear")
str_view(x, "an")
```
The next step up in complexity is `.`, which matches any character (except a newline):
```{r, cache = FALSE}
str_view(x, ".a.")
```
But if "`.`" matches any character, how do you match a literal "`.`"? You need to use an "escape" to tell the regular expression that you want to match it exactly, not use its special behaviour. In other words, the regular expression you need is `\.`, but this creates a problem: we use strings to represent regular expressions, and `\` is also used as an escape symbol in strings. The string `"\."` would be interpreted as the escape sequence `\.`, which isn't recognised, so it produces an error; but `"\n"` reduces to a newline, `"\t"` to a tab, and `"\\"` to a literal `\`, which provides a way forward. To create a string that contains a literal backslash followed by a period, you need to escape the backslash. So to match a literal "`.`" you need the string `"\\."`, which represents the regular expression `\.`.
```{r, cache = FALSE}
# To create the regular expression, we need \\
dot <- "\\."
# But the expression itself only contains one:
writeLines(dot)
# And this tells R to look for explicit .
str_view(c("abc", "a.c", "bef"), "a\\.c")
```
If `\` is used as an escape character in regular expressions, how do you match a literal `\`? Well you need to escape it, creating the regular expression `\\`. To create that regular expression, you need to use a string, which also needs to escape `\`. That means to match a literal `\` you need to write `"\\\\"` - you need four backslashes to match one!
```{r, cache = FALSE}
x <- "a\\b"
writeLines(x)
str_view(x, "\\\\")
```
In this book, I'll write a regular expression like `\.` and the string that represents the regular expression as `"\\."`.
#### Exercises
1. Explain why each of these strings don't match a `\`: `"\"`, `"\\"`, `"\\\"`.
1. How would you match the sequence `"'\`?
1. What patterns will this regular expression match: `\..\..\..`?
How would you represent it as a string?
### Anchors
By default, regular expressions will match any part of a string. It's often useful to _anchor_ the regular expression so that it matches from the start or end of the string. You can use:
* `^` to match the start of the string.
* `$` to match the end of the string.
```{r, cache = FALSE}
x <- c("apple", "banana", "pear")
str_view(x, "^a")
str_view(x, "a$")
```
To remember which is which, try this mnemonic which I learned from [Evan Misshula](https://twitter.com/emisshula/status/323863393167613953): if you begin with power (`^`), you end up with money (`$`).
To force a regular expression to only match a complete string, anchor it with both `^` and `$`:
```{r, cache = FALSE}
x <- c("apple pie", "apple", "apple cake")
str_view(x, "apple")
str_view(x, "^apple$")
```
You can also match the boundary between words with `\b`. I don't find I often use this in R, but I sometimes use it when I'm searching in RStudio and want to find the name of a function that's a component of other function names. For example, I'll search for `\bsum\b` to avoid matching `summarise`, `summary`, `rowsum` and so on.
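For example, here's a quick illustration of how `\b` prevents matches inside longer names:
```{r, cache = FALSE}
# Only the standalone "sum" matches; "summary" and "rowsum" don't
str_view(c("summary(x)", "rowsum(m)", "sum(x)"), "\\bsum\\b")
```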
#### Exercises
1. How would you match the literal string `"$^$"`?
1. Given this corpus of common words:
```{r}
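# `common` is the corpus of common words loaded in the setup chunk
length(common)
head(common, 20)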
```
Create regular expressions that find all words that:
1. Start with "y".
1. End with "x".
1. Are exactly three letters long. (Don't cheat by using `str_length()`!)
1. Have seven letters or more.
Since this list is long, you might want to use the `match` argument to
`str_view()` to show only the matching or non-matching words.
### Character classes and alternatives
There are a number of other special patterns that match more than one character:
* `.`: any character apart from a newline.
* `\d`: any digit.
* `\s`: any whitespace (space, tab, newline).
* `[abc]`: match a, b, or c.
* `[^abc]`: match anything except a, b, or c.
Remember, to create a regular expression containing `\d` or `\s`, you'll need to escape the `\` for the string, so you'll type `"\\d"` or `"\\s"`.
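Here's a brief sketch of how these classes behave:
```{r, cache = FALSE}
# \d matches the first digit; [abc] matches the first a, b, or c;
# [^abc] matches the first character that isn't one of them
str_view(c("abc123", "a.c", "a b"), "\\d")
str_view(c("abc", "xyz"), "[abc]")
str_view(c("abc", "xyz"), "[^abc]")
```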
You can use _alternation_ to pick between one or more alternative patterns. For example, `abc|d..f` will match either `"abc"` or `"deaf"`. Note that the precedence for `|` is low, so that `abc|xyz` matches either `abc` or `xyz`, not `abcyz` or `abxyz`:
```{r, cache = FALSE}
str_view(c("abc", "xyz"), "abc|xyz")
```
Like with mathematical expressions, if precedence ever gets confusing, use parentheses to make it clear what you want:
```{r, cache = FALSE}
str_view(c("grey", "gray"), "gr(e|a)y")
```
#### Exercises
1. Create regular expressions that find all words that:
1. Start with a vowel.
1. That only contain consonants. (Hint: think about matching
"not"-vowels.)
1. End with `ed`, but not with `eed`.
1. End with `ing` or `ise`.
1. Write a regular expression that matches a word if it's probably written
in British English, not American English.
1. Create a regular expression that will match telephone numbers as commonly
written in your country.
### Repetition
The next step up in power involves control over how many times a pattern matches:
* `?`: 0 or 1
* `+`: 1 or more
* `*`: 0 or more
* `{n}`: exactly n
* `{n,}`: n or more
* `{,m}`: at most m
* `{n,m}`: between n and m
```{r}
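# A sketch (not from the original text): applying these quantifiers to the
# Roman numerals in a sample string
x <- "1888 is the longest year in Roman numerals: MDCCCLXXXVIII"
str_view(x, "CC?")
str_view(x, "CC+")
str_view(x, "C[LX]+")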
```
By default these matches are "greedy": they will match the longest string possible. You can make them "lazy", matching the shortest string possible by putting a `?` after them. This is an advanced feature of regular expressions, but it's useful to know that it exists:
```{r}
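# A sketch of lazy matching, reusing x from the previous chunk:
# the greedy {2,3} takes three Cs, the lazy {2,3}? stops at two
str_view(x, "C{2,3}")
str_view(x, "C{2,3}?")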
```
Note that the precedence of these operators is high, so you can write: `colou?r` to match either American or British spellings. That means most uses will need parentheses, like `bana(na)+` or `ba(na){2,}`.
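Here's a quick sketch of those patterns in action:
```{r, cache = FALSE}
# ? applies only to the preceding "u"; the (na) group lets + repeat two characters
str_view(c("color", "colour"), "colou?r")
str_view(c("banana", "bananana"), "bana(na)+")
```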
#### Exercises
1. Describe in words what these regular expressions match:
(read carefully to see if I'm using a regular expression or a string
that defines a regular expression.)
1. `^.*$`
1. `"\\{.+\\}"`
1. `\d{4}-\d{2}-\d{2}`
1. `"\\\\{4}"`
1. Create regular expressions to find all words that:
1. Have three or more vowels in a row.
1. Start with three consonants.
1. Have two or more vowel-consonant pairs in a row.
### Grouping and backreferences
You learned about parentheses earlier as a way to disambiguate complex expressions. They do one other special thing: they also define numeric groups that you can refer to with _backreferences_, `\1`, `\2` etc. For example, the following regular expression finds all fruits that have a pair of letters that's repeated.
```{r, cache = FALSE}
str_view(fruit, "(..)\\1", match = TRUE)
```
(You'll also see how they're useful in conjunction with `str_match()` in a few pages.)
Unfortunately `()` in regexps serve two purposes: you usually use them to disambiguate precedence, but they also create the numbered groups used by backreferences. If you're using one set for capturing and another just for precedence, things can get confusing. In that case you might want to use `(?:)` instead: it disambiguates precedence without creating a capturing group. `(?:)` is called a non-capturing group.
For example:
```{r}
str_detect(c("grey", "gray"), "gr(e|a)y")
str_detect(c("grey", "gray"), "gr(?:e|a)y")
```
### Exercises
1. Describe, in words, what these expressions will match:
1. `"(.)(.)\\2\\1"`
1. `(..)\1`
1. `"(.)(.)(.).*\\3\\2\\1"`
1. Construct regular expressions to match words that:
1. Start and end with the same character.
## Tools
Now that you've learned the basics of regular expressions, it's time to learn how to apply them to real problems. In this section you'll learn a wide array of stringr functions that let you:
* Determine which elements match a pattern.
* Find the positions of matches.
* Extract the content of matches.
* Replace matches with new values.
* Split a string based on a match.
Because regular expressions are so powerful, it's easy to try and solve every problem with a single regular expression. But since you're in a programming language, it's often easy to break the problem down into smaller pieces. If you find yourself getting stuck trying to create a single regexp that solves your problem, take a step back and think if you could break the problem down into smaller pieces, solving each challenge before moving onto the next one.
### Detect matches
To determine if a character vector matches a pattern, use `str_detect()`. It returns a logical vector the same length as the input:
```{r}
x <- c("apple", "banana", "pear")
str_detect(x, "e")
```
Remember that when you use a logical vector in a numeric context, `FALSE` becomes 0 and `TRUE` becomes 1. That makes `sum()` and `mean()` useful if you want to answer questions about matches across a larger vector:
```{r}
# How many common words start with t?
sum(str_detect(common, "^t"))
# What proportion of common words end with a vowel?
mean(str_detect(common, "[aeiou]$"))
```
When you have complex logical conditions (e.g. match a or b but not c unless d) it's often easier to combine multiple `str_detect()` calls with logical operators, rather than trying to create a single regular expression. For example, here are two ways to find all words that don't contain any vowels:
```{r}
# Find all words containing at least one vowel, and negate
no_vowels_1 <- !str_detect(common, "[aeiou]")
# Find all words consisting only of consonants (non-vowels)
no_vowels_2 <- str_detect(common, "^[^aeiou]+$")
all.equal(no_vowels_1, no_vowels_2)
```
The results are identical, but I think the first approach is significantly easier to understand. So if you find your regular expression is getting overly complicated, try breaking it up into smaller pieces, giving each piece a name, and then combining them with logical operations.
A common use of `str_detect()` is to select the elements that match a pattern. You can do this with logical subsetting, or the convenient `str_subset()` wrapper:
```{r}
common[str_detect(common, "x$")]
str_subset(common, "x$")
```
A variation on `str_detect()` is `str_count()`: rather than a simple yes or no, it tells you how many matches there are in a string:
```{r}
x <- c("apple", "banana", "pear")
str_count(x, "a")
# On average, how many vowels per word?
mean(str_count(common, "[aeiou]"))
```
Note that matches never overlap. For example, in `"abababa"`, how many times will the pattern `"aba"` match? Regular expressions say two, not three:
```{r, cache = FALSE}
str_count("abababa", "aba")
str_view_all("abababa", "aba")
```
Note the use of `str_view_all()`. As you'll shortly learn, many stringr functions come in pairs: one function works with a single match, and the other works with all matches. The second function will have the suffix `_all`.
### Exercises
1. For each of the following challenges, try solving it by using both a single
regular expression, and a combination of multiple `str_detect()` calls.
1. Find all words that start or end with `x`.
1. Find all words that start with a vowel and end with a consonant.
1. Are there any words that contain at least one of each different
vowel?
1. What word has the highest number of vowels? What word has the highest
proportion of vowels? (Hint: what is the denominator?)
### Extract matches
To extract the actual text of a match, use `str_extract()`. To show that off, we're going to need a more complicated example. I'm going to use the [Harvard sentences](https://en.wikipedia.org/wiki/Harvard_sentences), which were designed to test VOIP systems, but are also useful for practicing regexes.
```{r}
length(sentences)
head(sentences)
```
Imagine we want to find all sentences that contain a colour. We first create a vector of colour names, and then turn it into a single regular expression:
```{r}
colours <- c("red", "orange", "yellow", "green", "blue", "purple")
colour_match <- str_c(colours, collapse = "|")
colour_match
```
Now we can select the sentences that contain a colour, and then extract the colour to figure out which one it is:
```{r}
has_colour <- str_subset(sentences, colour_match)
matches <- str_extract(has_colour, colour_match)
head(matches)
```
Note that `str_extract()` only extracts the first match. We can see that most easily by first selecting all the sentences that have more than 1 match:
```{r, cache = FALSE}
more <- sentences[str_count(sentences, colour_match) > 1]
str_view_all(more, colour_match)
str_extract(more, colour_match)
```
This is a common pattern for stringr functions, because working with a single match allows you to use much simpler data structures. To get all matches, use `str_extract_all()`. It returns either a list or a matrix, based on the value of the `simplify` argument:
```{r}
str_extract_all(more, colour_match)
str_extract_all(more, colour_match, simplify = TRUE)
```
You'll learn more about working with lists in Chapter XYZ. If you use `simplify = TRUE`, note that short matches are expanded to the same length as the longest:
```{r}
x <- c("a", "a b", "a b c")
str_extract_all(x, "[a-z]", simplify = TRUE)
```
#### Exercises
1. In the previous example, you might have noticed that the regular
expression matched "flickered", which is not a colour. Modify the
regex to fix the problem.
1. From the Harvard sentences data, extract:
1. The first word from each sentence.
1. All words ending in `ing`.
1. All plurals.
### Grouped matches
Earlier in this chapter we talked about the use of parentheses for clarifying precedence and for backreferences when matching. You can also use parentheses to extract parts of a complex match. For example, imagine we want to extract nouns from the sentences. As a heuristic, we'll look for any word that comes after "a" or "the". Defining a "word" in a regular expression is a little tricky. Here I use a sequence of at least one character that isn't a space.
```{r}
noun <- "(a|the) ([^ ]+)"
has_noun <- sentences %>%
str_subset(noun) %>%
head(10)
str_extract(has_noun, noun)
```
`str_extract()` gives us the complete match; `str_match()` gives each individual component. Instead of a character vector, it returns a matrix, with one column for the complete match followed by one column for each group:
```{r}
str_match(has_noun, noun)
```
(Unsurprisingly, our heuristic for detecting nouns is poor, and also picks up adjectives like smooth and parked.)
For example, we can look for a number word followed by a word ending in "s" (a crude way to find plurals), using `str_interp()` to interpolate the `num` pattern into a larger regular expression:
```{r}
num <- str_c("one", "two", "three", "four", "five", "six",
  "seven", "eight", "nine", "ten", sep = "|")
match <- str_interp("(${num}) ([^ ]+s)\\b")
sentences %>%
str_subset(match) %>%
head(10) %>%
str_match(match)
```
Like `str_extract()`, if you want all matches for each string, you'll need `str_match_all()`.
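For example, a quick sketch that applies the `noun` pattern from earlier to strings with multiple matches:
```{r}
# Each element of the result is a matrix of all the matches for that string
str_match_all(c("a big dog chased a small cat", "the end"), noun)
```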
#### Exercises
### Replacing matches
`str_replace()` and `str_replace_all()` allow you to replace matches with new strings:
```{r}
x <- c("apple", "pear", "banana")
str_replace(x, "[aeiou]", "-")
str_replace_all(x, "[aeiou]", "-")
```
With `str_replace_all()` you can also perform multiple replacements by supplying a named vector:
```{r}
x <- c("1 house", "2 cars", "3 people")
str_replace_all(x, c("1" = "one", "2" = "two", "3" = "three"))
```
You can refer to groups with backreferences:
```{r}
sentences %>%
head(5) %>%
str_replace("([^ ]+) ([^ ]+) ([^ ]+)", "\\1 \\3 \\2")
```
<!-- Replacing with a function call (hopefully) -->
#### Exercises
1. Replace all `/`'s in a string with `\`'s.
### Splitting
Use `str_split()` to split a string up into pieces. For example, we could split sentences into words:
```{r}
sentences %>%
head(5) %>%
str_split(" ")
```
Because each component might contain a different number of pieces, this returns a list. If you're working with a length-1 vector, the easiest thing is to just extract the first element of the list:
```{r}
"a|b|c|d" %>%
str_split("\\|") %>%
.[[1]]
```
Otherwise, like the other stringr functions that return a list, you can use `simplify = TRUE` to return a matrix:
```{r}
sentences %>%
head(5) %>%
str_split(" ", simplify = TRUE)
```
You can also request a maximum number of pieces:
```{r}
fields <- c("Name: Hadley", "Country: NZ", "Age: 35")
fields %>% str_split(": ", n = 2, simplify = TRUE)
```
Instead of splitting up strings by patterns, you can also split up by character, line, sentence and word `boundary()`s:
```{r, cache = FALSE}
x <- "This is a sentence. This is another sentence."
str_view_all(x, boundary("word"))
str_split(x, " ")[[1]]
str_split(x, boundary("word"))[[1]]
```
#### Exercises
1. Split up a string like `"apples, pears, and bananas"` into individual
components.
1. Why is it better to split up by `boundary("word")` than `" "`?
1. What does splitting with an empty string (`""`) do?
### Find matches
`str_locate()` and `str_locate_all()` give you the starting and ending positions of each match. These are particularly useful when none of the other functions does exactly what you want: you can use `str_locate()` to find the matching pattern and `str_sub()` to extract and/or modify it.
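For example, here's a small sketch that locates the first vowel in each word and then extracts it with `str_sub()`:
```{r}
x <- c("apple", "banana", "pear")
loc <- str_locate(x, "[aeiou]")
loc
str_sub(x, loc[, "start"], loc[, "end"])
```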
## Other types of pattern
When you use a pattern that's a string, it's automatically wrapped into a call to `regex()`:
```{r, eval = FALSE}
# The regular call:
str_view(fruit, "nana")
# Is shorthand for
str_view(fruit, regex("nana"))
```
You can use the other arguments of `regex()` to control details of the match:
* `ignore_case = TRUE` allows characters to match either their uppercase or
lowercase forms. This always uses the current locale.
```{r, cache = FALSE}
bananas <- c("banana", "Banana", "BANANA")
str_view(bananas, "banana")
str_view(bananas, regex("banana", ignore_case = TRUE))
```
* `multiline = TRUE` allows `^` and `$` to match the start and end of each
line rather than the start and end of the complete string.
```{r, cache = FALSE}
x <- "Line 1\nLine 2\nLine 3"
str_view_all(x, "^Line")
str_view_all(x, regex("^Line", multiline = TRUE))
```
* `comments = TRUE` allows you to use comments and white space to make
complex regular expressions more understandable. Spaces are ignored, as is
everything after `#`. To match a literal space, you'll need to escape it:
`"\\ "` (see the sketch after this list).
* `dotall = TRUE` allows `.` to match everything, including `\n`.
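To illustrate `comments = TRUE`, here's a sketch that matches a (North American style) phone number; the pattern is deliberately simple and just an example:
```{r}
phone <- regex("
  \\(?     # optional opening parens
  (\\d{3}) # area code
  [) -]?   # optional closing parens, space, or dash
  (\\d{3}) # another three numbers
  [ -]?    # optional space or dash
  (\\d{3}) # three more numbers
  ", comments = TRUE)
str_match("514-791-8141", phone)
```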
There are three other functions you can use instead of `regex()`:
* `fixed()`: matches exactly the specified sequence of bytes. It ignores
all special regular expression characters and operates at a very low level.
This allows you to avoid complex escaping and can be much faster than
regular expressions:
```{r}
microbenchmark::microbenchmark(
fixed = str_detect(sentences, fixed("the")),
regex = str_detect(sentences, "the")
)
```
Here the fixed match is almost 3x faster than the regular
expression match. However, if you're working with non-English data
`fixed()` can lead to unreliable matches because there are often
multiple ways of representing the same character. For example, there
are two ways to define "á": either as a single character or as an "a"
plus an accent:
```{r}
a1 <- "\u00e1"
a2 <- "a\u0301"
c(a1, a2)
a1 == a2
```
They render identically, but because they're defined differently,
`fixed()` doesn't find a match. Instead, you can use `coll()`, defined
next, which respects human character comparison rules:
```{r}
str_detect(a1, fixed(a2))
str_detect(a1, coll(a2))
```
* `coll()`: compare strings using standard **coll**ation rules. This is
useful for doing case insensitive matching. Note that `coll()` takes a
`locale` parameter that controls which rules are used for comparing
characters. Unfortunately different parts of the world use different rules!
```{r}
# That means you also need to be aware of the difference
# when doing case insensitive matches:
i <- c("I", "İ", "i", "ı")
i
str_subset(i, coll("i", ignore_case = TRUE))
str_subset(i, coll("i", ignore_case = TRUE, locale = "tr"))
```
Both `fixed()` and `regex()` have `ignore_case` arguments, but they
do not allow you to pick the locale: they always use the default locale.
You can see what that is with the following code; more on stringi
later.
```{r}
stringi::stri_locale_info()
```
The downside of `coll()` is speed; because the rules for recognising which
characters are the same are complicated, `coll()` is relatively slow
compared to `regex()` and `fixed()`.
* As you saw with `str_split()` you can use `boundary()` to match boundaries.
You can also use it with the other functions:
```{r, cache = FALSE}
x <- "This is a sentence."
str_view_all(x, boundary("word"))
str_extract_all(x, boundary("word"))
```
### Exercises
1. How would you find all strings containing `\` with `regex()` vs.
with `fixed()`?
1. What are the five most common words in `sentences`?
## Other uses of regular expressions
There are a few other functions in base R that accept regular expressions:
* `apropos()` searches all objects available from the global environment. This
is useful if you can't quite remember the name of the function.
```{r}
apropos("replace")
```
* `dir()` lists all the files in a directory. The `pattern` argument takes
a regular expression and only returns file names that match the pattern.
For example, you can find all the rmarkdown files in the current
directory with:
```{r}
head(dir(pattern = "\\.Rmd$"))
```
(If you're more comfortable with "globs" like `*.Rmd`, you can convert
them to regular expressions with `glob2rx()`.)
* `ls()` is similar to `apropos()` but only works in the current
environment. However, if you have so many objects in your environment
that you have to use a regular expression to filter them all, you
need to think about what you're doing! (And probably use a list instead).
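For example, a quick sketch using objects created earlier in this chapter:
```{r}
# Lists objects in the current environment whose names contain "colour"
ls(pattern = "colour")
```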
## Advanced topics
### The stringi package
stringr is built on top of the __stringi__ package. stringr is useful when you're learning because it exposes a minimal set of functions that have been carefully picked to handle the most common string manipulation challenges. stringi, on the other hand, is designed to be comprehensive: it contains almost every function you might ever need. stringi has `r length(ls(getNamespace("stringi")))` functions to stringr's `r length(ls("package:stringr"))`.
So if you find yourself struggling to do something that doesn't seem natural in stringr, it's worth taking a look at stringi. The use of the two packages is very similar because stringr was designed to mimic stringi's interface. The main difference is the prefix: `str_` vs `stri_`.
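For example, here's a small sketch of the parallel interfaces, counting word-like tokens with each package:
```{r}
stringr::str_count("one two three", "\\w+")
stringi::stri_count("one two three", regex = "\\w+")
```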
### Encoding
Encodings are complicated and fraught with difficulty. The best approach is to convert to UTF-8 as soon as possible. All stringr and stringi functions do this, and readr always reads as UTF-8.
* UTF-8
* Latin1
* bytes: everything else
Generally, you should fix encoding problems during the data import phase.
Encoding detection operates statistically, by comparing the frequency of byte fragments across languages and encodings. It's fundamentally heuristic, and works better with larger amounts of text (i.e. a whole file, not a single string from that file).
```{r}
x <- "\xc9migr\xe9 cause c\xe9l\xe8bre d\xe9j\xe0 vu."
x
str_conv(x, "ISO-8859-1")
as.data.frame(stringi::stri_enc_detect(x))
str_conv(x, "ISO-8859-2")
```
### UTF-8
<http://wiki.secondlife.com/wiki/Unicode_In_5_Minutes>
<http://www.joelonsoftware.com/articles/Unicode.html>
Homoglyph attacks: <https://github.com/reinderien/mimic>