More hacking and thwacking

Hadley Wickham 2022-10-04 17:34:27 -05:00
parent 7189136cf5
commit d7af73d196
1 changed file with 312 additions and 399 deletions


## Introduction
In @sec-strings, you learned a whole bunch of useful functions for working with strings.
In this chapter we'll learn even more functions, but this time they all use regular expressions.
Regular expressions are a powerful language for describing patterns within strings.
The term "regular expression" is a bit of a mouthful, so most people abbreviate to "regex"[^regexps-1] or "regexp".
[^regexps-1]: With a hard g, sounding like "reg-x".
The chapter starts with the basics of regular expressions and the most useful stringr functions for data analysis.
We'll then expand your knowledge of patterns, to cover seven important new topics (escaping, anchoring, character classes, shorthand classes, quantifiers, precedence, and grouping).
Next we'll talk about some of the other types of pattern that stringr functions can work with, and the various "flags" that allow you to tweak the operation of regular expressions.
We'll finish up with a survey of other places in stringr, the tidyverse, and base R where you might use regexes.
### Prerequisites
In this chapter, we'll use regular expression functions from stringr and tidyr, both core members of the tidyverse, as well as data from the babynames package:

```{r}
library(tidyverse)
library(babynames)
```
Similar functionality is available in base R (through functions like `grepl()`, `gsub()`, and `regmatches()`), but we think you'll find stringr easier to use because it's been carefully designed to be as consistent as possible.

## Regular expression basics {#sec-reg-basics}
### Patterns
The simplest patterns consist of ordinary letters and numbers, and match exactly.
And when we say exact, we really mean exact: "x" will only match a lowercase "x", not an uppercase "X".
To see what's going on, we can take advantage of the second argument to `str_view()`: a regular expression that's matched against each element of the first argument.
`str_view()` shows which characters are matched by surrounding each match with `<>` and, where possible, colouring it blue:
```{r}
str_detect(c("x", "X"), "x")
str_view(c("x", "X"), "x")
```
In general, any letter or number will match exactly, but punctuation characters like `.`, `+`, `*`, `[`, `]`, and `?` often have special meanings[^regexps-2].
For example, `.` will match any character[^regexps-3], so `"a."` will match any string that contains an "a" followed by another character:
[^regexps-2]: You'll learn how to escape this special behaviour in @sec-regexp-escaping.
[^regexps-3]: Well, any character apart from `\n`.
```{r}
str_detect(c("a", "ab", "ae", "bd", "ea", "eab"), "a.")
str_view(c("a", "ab", "ae", "bd", "ea", "eab"), "a.")
```
Regular expressions are a powerful and flexible language which we'll explore in much more detail over the rest of this chapter.
Here we'll introduce only the most important components: quantifiers, character classes, and alternation.
**Quantifiers** control how many times the preceding pattern can match: `?` makes a pattern optional (i.e. it matches 0 or 1 times), `+` lets a pattern repeat (i.e. it matches at least once), and `*` lets a pattern be optional or repeat (i.e. it matches any number of times, including 0).
```{r}
# ab? matches an "a", optionally followed by a "b".
str_view_all(c("a", "ab", "abb"), "ab?")
str_view(c("a", "ab", "abb"), "ab?")
# ab+ matches an "a", followed by at least one "b".
str_view_all(c("a", "ab", "abb"), "ab+")
str_view(c("a", "ab", "abb"), "ab+")
# ab* matches an "a", followed by any number of "b"s.
str_view_all(c("a", "ab", "abb"), "ab*")
str_view(c("a", "ab", "abb"), "ab*")
```
**Character classes** are defined by `[]` and let you match a set of characters, e.g. `[abcd]` matches "a", "b", "c", or "d".
You can also invert the match by starting with `^`: `[^abcd]` matches anything **except** "a", "b", "c", or "d".
We can use this idea to find the vowels and consonants in a few particularly special names:
```{r}
names <- c("Hadley", "Mine", "Garrett")
str_view(names, "[aeiou]")
str_view(names, "[^aeiou]")
```
You can combine character classes and quantifiers.
The following regexp looks for a vowel followed by one or more consonants:
```{r}
str_view_all(names, "[^aeiou]")
str_view_all(names, "[^aeiou]+")
str_view(names, "[aeiou][^aeiou]+")
```
You can use **alternation** to pick between one or more alternative patterns.
Here are a few examples:
- Match apple, pear, or banana: `apple|pear|banana`.
- Match three letters or two digits: `\w{3}|\d{2}`.
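For a quick illustration, here's alternation applied to a couple of toy vectors of our own:

```{r}
str_view(c("apple", "pear", "banana", "kiwi"), "apple|pear|banana")
str_view(c("grey", "gray", "groy"), "grey|gray")
```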
Regular expressions are very compact and use a lot of punctuation characters, so at first they can look like a cat walked across your keyboard.
Don't worry if they're hard to understand right now; you'll get better with practice.
Let's start that practice with some useful stringr functions.
### Detect matches
The term "regular expression" is a bit of a mouthful, so most people abbreviate to "regex"[^regexps-3] or "regexp".
To learn about regexes, we'll start with the simplest function that uses them: `str_detect()`. It takes a character vector and a pattern, and returns a logical vector that says if the pattern was found at each element of the vector.
The following code shows the simplest type of pattern, an exact match.
[^regexps-3]: With a hard g, sounding like "reg-x".
`str_detect()` takes a character vector and a pattern, and returns a logical vector that says if the pattern was found at each element of the vector.
```{r}
x <- c("apple", "banana", "pear")
str_detect(x, "e") # does the word contain an e?
str_detect(x, "b") # does the word contain a b?
str_detect(x, "ear") # does the word contain "ear"?
str_detect(c("a", "b", "c"), "[aeiou]")
```
`str_detect()` returns a logical vector the same length as the first argument, so it pairs well with `filter()`.
For example, the following code finds and counts all the baby names that contain a lowercase "x":

```{r}
babynames |> 
  filter(str_detect(name, "x")) |> 
  count(name, wt = n, sort = TRUE)
```
We can also use `str_detect()` with `summarize()` by pairing it with `sum()` or `mean()`: when you use a logical vector in a numeric context, `FALSE` becomes 0 and `TRUE` becomes 1, so `sum(str_detect(x, pattern))` tells you the number of observations that match and `mean(str_detect(x, pattern))` tells you the proportion of observations that match.
For example, the following snippet computes and visualizes the proportion of baby names that contain "x", broken down by year.
```{r}
babynames |> 
  group_by(year) |> 
  summarize(prop_x = mean(str_detect(name, "x"))) |> 
  ggplot(aes(x = year, y = prop_x)) + 
  geom_line()
```

### Count matches

The next step up in complexity from `str_detect()` is `str_count()`: rather than a simple yes or no, it tells you how many matches there are in each string.
Note that regular expression matches never overlap; each new match starts after the end of the previous one.
So, for example, how many times does "aba" match in "abababa"?
Regular expressions say two, not three:
```{r}
str_count("abababa", "aba")
str_view_all("abababa", "aba")
str_view("abababa", "aba")
```
It's natural to use `str_count()` with `mutate()`.
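For instance, with a tiny tibble of our own:

```{r}
df <- tibble(word = c("apple", "banana", "pear"))
df |> 
  mutate(vowels = str_count(word, "[aeiou]"))
```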
### Replace matches
`str_replace_all()` allows you to replace a match with the text of your choosing.
This can be particularly useful if you need to standardize a vector.
Unlike the regexp functions we've encountered so far, `str_replace_all()` takes three arguments: a character vector, a pattern, and a replacement.
The simplest use is to replace a pattern with a fixed string:
```{r}
x <- c("apple", "pear", "banana")
str_replace_all(x, "[aeiou]", "-")
```
`str_remove()` and `str_remove_all()` are handy shortcuts for `str_replace(x, pattern, "")` and `str_replace_all(x, pattern, "")` --- they remove matching patterns from a string.
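As a one-line illustration with the same toy vector:

```{r}
str_remove_all(x, "[aeiou]")
```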
These functions are naturally paired with `mutate()` when you're cleaning up the columns of a data frame.
If a replacement starts to get complicated, it's often worth pulling it out into its own function and collecting a handful of positive and negative test cases; you don't need formal unit tests, but a small set of strings that should (and shouldn't) change makes it much easier to check your work.
### Advanced replacements
You can also perform multiple replacements by supplying a named vector.
The name gives a regular expression to match, and the value gives the replacement.
```{r}
x <- c("1 house", "1 person has 2 cars", "3 people")
str_replace_all(x, c("1" = "one", "2" = "two", "3" = "three"))
```
Alternatively, you can provide a replacement function: it's called with a vector of matches, and should return what to replace them with.
```{r}
x <- c("1 house", "1 person has 2 cars", "3 people")
str_replace_all(x, "[aeiou]+", str_to_upper)
```
## Applications
### Counting
The following example uses `str_count()` with character classes to count the number of vowels and consonants in each name.
```{r}
babynames |> 
  count(name) |> 
  mutate(
    vowels = str_count(name, "[aeiou]"),
    consonants = str_count(name, "[^aeiou]")
  )
```

If you look closely, you'll notice that there's something off with our calculations: names that start with a capital vowel appear to have one vowel too few.
That's because we've forgotten to tell you that regular expressions are case sensitive.
There are three ways we could fix this:
- Add the upper case vowels to the character class: `str_count(name, "[aeiouAEIOU]")`.
- Tell the regular expression to ignore case: `str_count(name, regex("[aeiou]", ignore_case = TRUE))`. We'll talk about this more in @sec-flags.
- Use `str_to_lower()` to convert the names to lower case: `str_count(str_to_lower(name), "[aeiou]")`. You learned about this function in @sec-other-languages.
This is pretty typical when working with strings --- there are often multiple ways to reach your goal, either by making your pattern more complicated or by doing some preprocessing on your string.
If you get stuck trying one approach, it can often be useful to switch gears and tackle the problem from a different perspective.
```{r}
babynames |>
count(name) |>
mutate(
name = str_to_lower(name),
vowels = str_count(name, "[aeiou]"),
consonants = str_count(name, "[^aeiou]")
)
```
### Extract variables
The last function comes from tidyr: `separate_regex_wider()`.
This works similarly to `separate_at_wider()` and `separate_by_wider()` but you give it a vector of regular expressions.
The named components become variables and the unnamed components are dropped.
### Exercises
1. Explain why each of these strings doesn't match a `\`: `"\"`, `"\\"`, `"\\\"`.
2. How would you match the sequence `"'\`?
3. What patterns will the regular expression `\..\..\..` match?
How would you represent it as a string?
4. What name has the most vowels?
What name has the highest proportion of vowels?
(Hint: what is the denominator?)
5. For each of the following challenges, try solving it by using both a single regular expression, and a combination of multiple `str_detect()` calls.
a. Find all words that start or end with `x`.
b. Find all words that start with a vowel and end with a consonant.
c. Are there any words that contain at least one of each different vowel?
6. Replace all forward slashes in a string with backslashes.
7. Implement a simple version of `str_to_lower()` using `str_replace_all()`.
8. Switch the first and last letters in `words`.
Which of those strings are still `words`?
## Pattern language
You learned the basics of the regular expression pattern language above, and now it's time to dig into more of the details.
First, we'll start with **escaping**, which allows you to match characters that the pattern language otherwise treats specially.
Next you'll learn about **anchors**, which allow you to match the start or end of the string.
Then you'll learn more about **character classes** and their shortcuts, which allow you to match any character from a set.
We'll finish up with the final details of **quantifiers**, which control how many times a pattern can match, along with **operator precedence** and **grouping**.
The terms we use here are the technical names for each component.
They're not always the most evocative of their purpose, but it's very helpful to know the correct terms if you later want to Google for more details.
We'll concentrate on showing how these patterns work with `str_view()` but remember that you can use them with any of the functions that you learned above.
### Escaping {#sec-regexp-escaping}
What if you want to match a literal `.`, or another character that normally has a special meaning in a regular expression?
You'll need to use an **escape**, which tells the regular expression you want it to match exactly, not use its special behavior.
Like strings, regexps use the backslash for escaping, so to match a `.`, you need the regexp `\.`.
Unfortunately this creates a problem.
We use strings to represent regular expressions, and `\` is also used as an escape symbol in strings, so to create the regular expression `\.` we need the string `"\\."`.
And to match a literal `\` you need to escape it twice over: the regular expression is `\\`, which you write as the string `"\\\\"`.
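For example, here's a quick check with a short string of our own that contains a single backslash:

```{r}
x <- "a\\b"         # the string a\b
str_view(x, "\\\\") # the regular expression \\
```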
Alternatively, you might find it easier to use the raw strings you learned about in @sec-strings.
That lets you avoid one layer of escaping:
```{r}
str_view(x, r"(\\)")
str_view(x, r"{\\}")
```
The full set of characters with special meanings that need to be escaped is `.^$\|*+?{}[]()`.
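The same escaping trick works for the other metacharacters; for example, here's a made-up vector where we match a literal `+` and a literal `$`:

```{r}
y <- c("1+1", "1$1", "1x1")
str_view(y, "1\\+1")
str_view(y, "1\\$1")
```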
### Anchors

By default, regular expressions will match any part of a string.
If you want to match at the start or the end, you need to **anchor** the regular expression: `^` matches the start of a string and `$` matches the end, so `^a` only matches strings that start with "a", and `a$` only matches strings that end with "a".
You can also match the boundary between words (i.e. the start or end of a word) with `\b`.
This is useful when, for example, you want to find uses of `sum()` without also matching `summarize()`, `summary()`, or `rowsum()`:

```{r}
x <- c("summary(x)", "summarize(df)", "rowsum(x)", "sum(x)")
str_view(x, "sum")
str_view(x, "\\bsum\\b")
```
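And here's a quick look at `^` and `$` on their own, using the `fruit` vector that comes with stringr:

```{r}
str_view(fruit, "^a") # fruits that start with "a"
str_view(fruit, "a$") # fruits that end with "a"
```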
When used alone, anchors will produce a zero-width match:
```{r}
str_view_all("abc", c("$", "^", "\\b"))
str_view("abc", c("$", "^", "\\b"))
```
### Character classes
A **character class**, or character **set**, allows you to match any character in a set.
As you saw above, you can construct your own sets with `[]`, where `[abc]` matches "a", "b", or "c".
Inside of `[]` only `-`, `^`, and `\` have special meanings:

- `^` at the start of a character class inverts the match, so `[^abc]` matches anything except "a", "b", or "c".
- `-` defines a range, e.g. `[a-z]` matches any lowercase letter and `[0-9]` matches any number.
- `\` escapes special characters, so `[\^\-\]]` matches `^`, `-`, or `]`.
```{r}
str_view_all("abcd12345-!@#%.", c("[abc]", "[a-z]", "[^a-z0-9]"))
str_view("abcd12345-!@#%.", c("[abc]", "[a-z]", "[^a-z0-9]"))
# You need an escape to match characters that are otherwise
# special inside of []
str_view_all("a-b-c", "[a\\-c]")
str_view("a-b-c", "[a\\-c]")
```
Remember that regular expressions are case sensitive, so if you want to match any letter or number regardless of case, you'd need to write `[a-zA-Z0-9]`.
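As a quick check with a toy vector, only the letters and the digit match:

```{r}
str_view(c("a", "A", "1", "?"), "[a-zA-Z0-9]")
```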
A few character classes are used so commonly that they get their own shortcuts.
You've already seen `.`, which matches any character apart from a newline.
There are three other particularly useful pairs:

- `\d` matches any digit; `\D` matches anything that isn't a digit.
- `\s` matches any whitespace (e.g. space, tab, newline); `\S` matches anything that isn't whitespace.
- `\w` matches any "word" character, i.e. letters and numbers; `\W` matches any "non-word" character.
Remember, to create a regular expression containing `\d` or `\s`, you'll need to escape the `\` for the string, so you'll type `"\\d"` or `"\\s"`.
The following code demonstrates the different shortcuts with a selection of letters, numbers, and punctuation characters.
```{r}
str_view_all("abcd12345!@#%. ", "\\d+")
str_view_all("abcd12345!@#%. ", "\\D+")
str_view_all("abcd12345!@#%. ", "\\w+")
str_view_all("abcd12345!@#%. ", "\\W+")
str_view_all("abcd12345!@#%. ", "\\s+")
str_view_all("abcd12345!@#%. ", "\\S+")
str_view("abcd 12345 !@#%.", "\\d+")
str_view("abcd 12345 !@#%.", "\\D+")
str_view("abcd 12345 !@#%.", "\\w+")
str_view("abcd 12345 !@#%.", "\\W+")
str_view("abcd 12345 !@#%.", "\\s+")
str_view("abcd 12345 !@#%.", "\\S+")
```
### Quantifiers
The **quantifiers** control how many times a pattern matches.
In @sec-reg-basics you learned about `?` (0 or 1 matches), `+` (1 or more matches), and `*` (0 or more matches).
For example, `colou?r` will match American or British spelling, `\d+` will match one or more digits, and `\s?` will optionally match a single whitespace.
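For example, with a couple of strings of our own:

```{r}
str_view(c("color", "colour"), "colou?r")
str_view("In 2014 there were 1000 visits", "\\d+")
```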
You can also specify the number of matches precisely:
- `{n}` matches exactly n times.
- `{n,}` matches at least n times.
- `{n,m}` matches between n and m times.

The following code shows how this works for a few simple examples, using `\b` to anchor the start of each match to a word boundary:
```{r}
x <- " x xx xxx xxxx"
str_view_all(x, "\\bx{2}")
str_view_all(x, "\\bx{2,}")
str_view_all(x, "\\bx{1,3}")
str_view_all(x, "\\bx{2,3}")
str_view(x, "\\bx{2}")
str_view(x, "\\bx{2,}")
str_view(x, "\\bx{1,3}")
str_view(x, "\\bx{2,3}")
```
### Operator precedence and parentheses
What does `ab+` match?
Does it match "a" followed by one or more "b"s, or does it match "ab" repeated any number of times?
What does `^a|b$` match?
Does it match the complete string "a" or the complete string "b", or does it match a string starting with "a" or a string ending with "b"?

The answer to these questions is determined by operator precedence, similar to the order-of-operations rules you learned for arithmetic in school.
You already know that `a + b * c` is equivalent to `a + (b * c)` not `(a + b) * c` because `*` has high precedence and `+` has lower precedence: you compute `*` before `+`.
In regular expressions, quantifiers have high precedence and alternation has low precedence.
That means `ab+` is equivalent to `a(b+)`, and `^a|b$` is equivalent to `(^a)|(b$)`.
Just like with algebra, you can use parentheses to override the usual order.
Unlike algebra you're unlikely to remember the precedence rules for regexes, so feel free to use parentheses liberally.
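To see the difference parentheses make, here's a small comparison of our own between a quantifier applied with and without grouping:

```{r}
str_view(c("abbb", "ababab"), "ab+")   # "a" followed by one or more "b"s
str_view(c("abbb", "ababab"), "(ab)+") # "ab" repeated one or more times
```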
Technically the escape, character classes, and parentheses are all operators that also have precedence.
But these tend to be less likely to cause confusion because they mostly behave how you expect: it's unlikely that you'd think that `\(s|d)` would mean `(\s)|(\d)`.
### Grouping and capturing
Parentheses are an important tool for controlling the order in which pattern operations are applied, but they also have an important additional effect: they create **capturing groups** that allow you to use sub-components of the match.
You can refer back to previously matched text inside parentheses with a **back reference**: `\1` refers to the match contained in the first set of parentheses, `\2` to the match in the second set, and so on.
For example, the following pattern finds all fruits that have a repeated pair of letters:
```{r}
str_view(fruit, "(..)\\1")
```
And this one finds all words that start and end with the same pair of letters:
```{r}
str_view(words, "^(..).*\\1$")
```
You can also use back references in `str_replace()`; for example, the following code switches the order of the second and third words in each sentence:
```{r}
sentences |>
str_replace("(\\w+) (\\w+) (\\w+)", "\\1 \\3 \\2") |>
head(5)
```
If you want to extract the matches for each group you can use `str_match()`, but it returns a matrix, so it isn't especially easy to work with:
```{r}
sentences |>
str_match("the (\\w+) (\\w+)") |>
head()
```
You could convert to a tibble and name the columns:
```{r}
sentences |>
str_match("the (\\w+) (\\w+)") |>
as_tibble(.name_repair = "minimal") |>
set_names("match", "word1", "word2")
```
But then you've basically recreated your own simple version of `separate_regex_wider()`.
Indeed, behind the scenes `separate_regex_wider()` converts your vector of patterns to a single regexp that uses grouping to capture only the named components.
Occasionally, you'll want to use parentheses without creating matching groups.
You can create a non-capturing group with `(?:)`.
```{r}
x <- c("a gray cat", "a grey dog")
str_match(x, "(gr(e|a)y)")
str_match(x, "(gr(?:e|a)y)")
```
### Exercises
1. How would you match the literal string `"$^$"`?
2. Given the corpus of common words in `stringr::words`, create regular expressions that find all words that:

a. Start with "y".
b. Don't start with "y".
c. End with "x".
d. Are exactly three letters long. (Don't cheat by using `str_length()`!)
e. Have seven letters or more.
Since `words` is long, you might want to use the `match` argument to `str_view()` to show only the matching or non-matching words.
3. Create regular expressions that match the British or American spellings of the following words: grey/gray, modelling/modeling, summarize/summarise, aluminium/aluminum, defence/defense, analog/analogue, center/centre, sceptic/skeptic, aeroplane/airplane, arse/ass, doughnut/donut.
4. What strings will `$a` match?
5. Create a regular expression that will match telephone numbers as commonly written in your country.
6. Write the equivalents of `?`, `+`, `*` in `{m,n}` form.
7. Describe in words what these regular expressions match. (Read carefully to see whether each entry is a regular expression or a string that defines a regular expression.)
a. `^.*$`
b. `"\\{.+\\}"`
c. `\d{4}-\d{2}-\d{2}`
d. `"\\\\{4}"`
8. Describe, in words, what these expressions will match:
a. `(.)\1\1`
b. `"(.)(.)\\2\\1"`
c. `(..)\1`
d. `"(.).\\1.\\1"`
e. `"(.)(.)(.).*\\3\\2\\1"`
9. Construct regular expressions to match words that:
a. Have the same first and last letter, and the same second and second-to-last letter.
b. Contain one letter repeated in at least three places (e.g. "eleven" contains three "e"s.)
10. Solve the beginner regexp crosswords at <https://regexcrossword.com/challenges/beginner>.
## Pattern control
Now that you've learned about regular expressions, you might be worried about them working when you don't want them to.
### Fixed matches
You can opt-out of the regular expression rules by using `fixed()`:
```{r}
str_view(c("", "a", "."), fixed("."))
```
Fixed matches are case sensitive by default, but you can opt out by setting `ignore_case = TRUE`:
```{r}
str_view("x X xy", "X")
str_view("x X xy", fixed("X", ignore_case = TRUE))
```
### Regex flags {#sec-flags}
There are a number of settings, often called **flags** in other programming languages, that you can use to control some of the details of the regex.
In stringr, you can use these by wrapping the pattern in a call to `regex()`:
```{r}
#| eval: false
# The regular call:
str_view(fruit, "nana")
# is shorthand for
str_view(fruit, regex("nana"))
```
The most useful flag is probably `ignore_case = TRUE` because it allows characters to match either their uppercase or lowercase forms:
```{r}
bananas <- c("banana", "Banana", "BANANA")
str_view(bananas, "banana")
str_view(bananas, regex("banana", ignore_case = TRUE))
```
If you're doing a lot of work with multiline strings (i.e. strings that contain `\n`), `multiline` and `dotall` can also be useful.
`dotall = TRUE` allows `.` to match everything, including `\n`:
```{r}
x <- "Line 1\nLine 2\nLine 3"
str_view(x, ".L")
str_view(x, regex(".L", dotall = TRUE))
```
And `multiline = TRUE` allows `^` and `$` to match the start and end of each line rather than the start and end of the complete string:
```{r}
x <- "Line 1\nLine 2\nLine 3"
str_view(x, "^Line")
str_view(x, regex("^Line", multiline = TRUE))
```
Finally, if you're writing a complicated regular expression and you're worried you might not understand it in the future, `comments = TRUE` can be extremely useful.
It allows you to use comments and whitespace to make complex regular expressions more understandable.
Spaces and new lines are ignored, as is everything after `#`.
(Note that we use a raw string here to minimize the number of escapes needed.)
```{r}
phone <- regex(r"(
\(? # optional opening parens
(\d{3}) # area code
[)\ -]? # optional closing parens, space, or dash
(\d{3}) # another three numbers
[\ -]? # optional space or dash
(\d{4}) # four more numbers
)", comments = TRUE)
str_match("514-791-8141", phone)
```
If you're using comments and want to match a space, newline, or `#`, you'll need to escape it:
```{r}
str_view("x x #", regex("x #", comments = TRUE))
str_view("x x #", regex(r"(x\ \#)", comments = TRUE))
```
## Practice
To put these ideas into practice, we'll solve a few semi-authentic problems next.

### Check your work

First, let's find all sentences that start with "The".
Using the `^` anchor alone is not enough:
```{r}
str_view(sentences, "^The", match = TRUE)
str_view(sentences, "^The")
```
That's because `^The` also matches sentences starting with words like "They" or "Those".
We need to make sure that the "e" is the last letter in the word, which we can do by adding a word boundary:
```{r}
str_view(sentences, "^The\\b", match = TRUE)
str_view(sentences, "^The\\b")
```
What about finding all sentences that begin with a pronoun?
```{r}
str_view(sentences, "^She|He|It|They\\b", match = TRUE)
str_view(sentences, "^She|He|It|They\\b")
```
A quick inspection of the results shows that we're getting some spurious matches.
That's because we've forgotten to use parentheses:
```{r}
str_view(sentences, "^(She|He|It|They)\\b", match = TRUE)
str_view(sentences, "^(She|He|It|They)\\b")
```
You might wonder how you might spot such a mistake if it didn't occur in the first few matches.
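One way to protect yourself is to build a small set of positive and negative examples of your own and check that the pattern behaves as expected on both:

```{r}
pos <- c("He is only a boy", "It is hard to say")       # should match
neg <- c("Shells come from the sea", "Their dog barks") # should not match

pattern <- "^(She|He|It|They)\\b"
str_detect(pos, pattern)
str_detect(neg, pattern)
```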
### Boolean operations

Imagine we want to find words that only contain consonants.
One technique is to create a character class that contains all letters except for the vowels (`[^aeiou]`), then allow that to match any number of letters (`[^aeiou]+`), then force it to match the whole string by anchoring to the beginning and the end (`^[^aeiou]+$`):
```{r}
str_view(words, "^[^aeiou]+$", match = TRUE)
str_view(words, "^[^aeiou]+$")
```
But we can make this problem a bit easier by flipping the problem around.
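For example, instead of looking for words that only contain consonants, we can look for words that don't contain any vowels (a quick sketch using the `words` vector that comes with stringr):

```{r}
words[!str_detect(words, "[aeiou]")]
```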
This way of thinking pays off whenever you need to combine multiple conditions.
For example, what if we want to find all words that contain each of the five vowels?
If we did it with patterns we'd need to generate 5! (120) different patterns:
```{r}
#| results: false
words[str_detect(words, "a.*e.*i.*o.*u")]
# ...
words[str_detect(words, "u.*o.*i.*e.*a")]
@ -639,7 +713,7 @@ What if we wanted to find all `sentences` that mention a color?
The basic idea is simple: we just combine alternation with word boundaries.
```{r}
str_view(sentences, "\\b(red|green|blue)\\b", match = TRUE)
str_view(sentences, "\\b(red|green|blue)\\b")
```
But it would be tedious to construct this pattern by hand.
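One alternative is to store the colors in a character vector and build the pattern from it with `str_c()` and `str_flatten()`; here's a quick sketch:

```{r}
rgb <- c("red", "green", "blue")
str_c("\\b(", str_flatten(rgb, "|"), ")\\b")
```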
We could make this pattern more comprehensive if we had a good list of colors.
One place we could start from is the list of built-in colours that R can use for plots:
```{r}
str_view(colors()[1:27])
```
But first, let's eliminate the numbered variants.
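One way to do that is to keep only the color names that don't contain a digit (a quick sketch):

```{r}
cols <- colors()
cols <- cols[!str_detect(cols, "\\d")]
```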
Then we can turn this into one giant pattern:
```{r}
pattern <- str_c("\\b(", str_flatten(cols, "|"), ")\\b")
str_view(sentences, pattern)
```
In this example `cols` only contains numbers and letters so you don't need to worry about metacharacters.
But in general, when creating patterns from existing strings, it's good practice to escape any characters that have a special meaning in regular expressions (e.g. with `str_escape()`) so that they match literally.
### Exercises
1. Construct patterns to find evidence for and against the rule "i before e except after c".
2. `colors()` contains a number of modifiers like "lightgray" and "darkblue". How could you automatically identify these modifiers? (Think about how you might detect and then remove the colors that are being modified.)
3. Create a regular expression that finds any base R dataset. You can get a list of these datasets via a special use of the `data()` function: `data(package = "datasets")$results[, "Item"]`. Note that a number of old datasets are individual vectors; these contain the name of the grouping "data frame" in parentheses, so you'll need to strip those off.
## Elsewhere
There are a bunch of other places you can use regular expressions outside of stringr.
### stringr
- `str_locate()` and `str_locate_all()` give you the start and end positions of the first match (or of every match).
- `str_split()` and friends break a string up at each match.
- `str_extract()` and `str_extract_all()` return the text of the first match (or of every match).
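Here's a quick illustration of each with a made-up string:

```{r}
x <- "apples, pears, and bananas"
str_extract(x, "[a-z]+")     # the first match
str_extract_all(x, "[a-z]+") # every match, as a list
str_split(x, ", (and )?")    # split the string at each match
str_locate(x, "pears")       # start and end position of the first match
```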
### tidyverse
- `matches()`: a "tidyselect" function that you can use anywhere in the tidyverse when selecting variables (e.g. `dplyr::select()`, `rename_with()`, `across()`, ...).
- `names_pattern` in `pivot_longer()`
- `sep` in `separate_by_longer()` and `separate_by_wider()`.
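For example, here's a small sketch with toy data of our own; the `matches()` call selects the `mtcars` columns whose names start with "d" or "w":

```{r}
mtcars |> 
  dplyr::select(matches("^[dw]")) |> 
  head(3)

# names_pattern uses a regex with capture groups to split the column names
df <- tibble(x_1 = 1, x_2 = 2, y_1 = 3)
df |> 
  pivot_longer(
    everything(),
    names_to = c("letter", "number"),
    names_pattern = "(.)_(.)"
  )
```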
### Base R
The regular expressions used by stringr are very slightly different to those of base R.
That's because stringr is built on top of the [stringi package](https://stringi.gagolewski.com), which is in turn built on top of the [ICU engine](https://unicode-org.github.io/icu/userguide/strings/regexp.html), whereas base R functions (like `gsub()` and `grepl()`) use either the [TRE engine](https://github.com/laurikari/tre) or the [PCRE engine](https://www.pcre.org).
Fortunately, the basics of regular expressions are so well established that you'll encounter few variations when working with the patterns you'll learn in this book (and we'll point them out where important).
You only need to be aware of the difference when you start to rely on advanced features like complex Unicode character ranges or special features that use the `(?…)` syntax.
You can learn more about these advanced features in `vignette("regular-expressions", package = "stringr")`.
- `apropos()` searches all objects available from the global environment that match a given pattern. This is useful if you can't quite remember the name of a function:
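For example, searching for anything with "replace" in its name (the results will depend on which packages you have loaded):

```{r}
apropos("replace")
```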
- `dir()` lists all the files in a directory. The `pattern` argument takes a regular expression and only returns file names that match it. For example, you can find all the R Markdown files in the current directory with:

```{r}
head(dir(pattern = "\\.Rmd$"))
```
(If you're more comfortable with "globs" like `*.Rmd`, you can convert them to regular expressions with `glob2rx()`).
## Summary
If you want to learn more about regular expressions, a good place to start is `vignette("regular-expressions", package = "stringr")`, which documents the full set of syntax supported by stringr.
Another useful reference is [https://www.regular-expressions.info/](https://www.regular-expressions.info/tutorial.html).
It's not R specific, but it covers the most advanced features and explains how regular expressions work under the hood.