You will generally not find base R's `Encoding()` to be useful, because it only distinguishes three encodings (and interpreting what they mean is non-trivial), and it only tells you the encoding that R has declared for a string, not the encoding the string actually uses.
And typically the problem is precisely that the declared encoding is wrong.
The tidyverse follows best practices[^prog-strings-1] of using UTF-8 everywhere, so any string you create with the tidyverse will use UTF-8.
It's still possible to have problems, but they'll typically arise during data import.
Once you've diagnosed an encoding problem, you should fix it during data import (i.e. by using the `encoding` argument to `readr::locale()`).
[^prog-strings-1]: <http://utf8everywhere.org>
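If you do hit a problem, `readr::guess_encoding()` can help with the diagnosis, and `locale()` lets you declare the encoding explicitly. A minimal sketch (the file name is hypothetical):

```r
library(readr)

# Guess the encoding from the raw bytes of the file
# ("my-file.csv" is a made-up name; substitute your own)
guess_encoding("my-file.csv")

# Then declare the encoding explicitly when importing
read_csv("my-file.csv", locale = locale(encoding = "Latin1"))
```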
### Length and subsetting
Computing the length of a string seems straightforward if you're only familiar with English, but things get complex quickly when you're working with other languages.
The four most common are Latin, Chinese, Arabic, and Devanagari, each representing a different writing system:
- Latin uses an alphabet, where each consonant and vowel gets its own letter.
-   Chinese uses logograms, where a single symbol represents a whole word or concept. There's also a difference in width: English letters are roughly twice as high as they are wide (half width), while Chinese characters are roughly square (full width); see the sketch below this list.
-   Arabic is an abjad: only the consonants are written, and vowels are optionally indicated with diacritics.
Additionally, it's written from right-to-left, so the first letter is the letter on the far right.
-   Devanagari is an abugida, where each symbol represents a consonant-vowel pair and the vowel notation is secondary.
> For instance, 'ch' is two letters in English and Latin, but considered to be one letter in Czech and Slovak.
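stringr reflects the half-width/full-width distinction directly: `str_length()` counts codepoints, while `str_width()` estimates how much horizontal space a string occupies when printed in a fixed-width font. A quick sketch:

```r
library(stringr)

# Both strings contain two characters...
str_length("ab")    #> 2
str_length("中文")  #> 2

# ...but the full-width Chinese characters take twice the space
str_width("ab")     #> 2
str_width("中文")   #> 4
```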
Counting letters is tricky even with Latin alphabets, because many languages use **diacritics**, glyphs added to the basic letters.
Diacritics cause problems because Unicode provides two ways of representing an accented character: many common characters have a dedicated codepoint, but the same characters can also be built up from a base letter plus combining components.
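You can see both representations of the same accented character, and one way to reconcile them via Unicode normalization (using stringi, which stringr is built on):

```r
library(stringr)

# "é" as a single codepoint (U+00E9) vs. "e" plus a
# combining acute accent (U+0301)
single   <- "\u00e9"
combined <- "e\u0301"

single == combined    #> FALSE: the underlying codepoints differ
str_length(single)    #> 1
str_length(combined)  #> 2

# Normalizing both to the same form (here NFC) makes them comparable
stringi::stri_trans_nfc(combined) == single  #> TRUE
```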
### Exercises
1. From the Harvard sentences data, extract:
1. The first word from each sentence.
2. All words ending in `ing`.
3. All plurals.
## Grouped matches
Earlier in this chapter we talked about the use of parentheses for clarifying precedence and for backreferences when matching.
You can also use parentheses to extract parts of a complex match.
For example, imagine we want to extract nouns from the sentences.
As a heuristic, we'll look for any word that comes after "a" or "the".
Defining a "word" in a regular expression is a little tricky, so here I use a simple approximation: a sequence of at least one character that isn't a space.
`str_locate()` and `str_locate_all()` give you the starting and ending positions of each match.
These are particularly useful when none of the other functions does exactly what you want.
You can use `str_locate()` to find the position of a matching pattern, and then `str_sub()` to extract or modify it.
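A small sketch (the vector is made up for illustration):

```r
library(stringr)

x <- c("banana", "mandarin", "orange")

# A matrix with one row per string and "start"/"end" columns
loc <- str_locate(x, "an")

# Extract the located match with str_sub()...
str_sub(x, loc[, "start"], loc[, "end"])  #> "an" "an" "an"

# ...or modify it in place
str_sub(x, loc[, "start"], loc[, "end"]) <- "AN"
x  #> "bANana" "mANdarin" "orANge"
```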
## stringi
stringr is built on top of the **stringi** package.
stringr is useful when you're learning because it exposes a minimal set of functions, carefully picked to handle the most common string manipulation tasks.
stringi, on the other hand, is designed to be comprehensive.
It contains almost every function you might ever need: stringi has `r length(getNamespaceExports("stringi"))` functions to stringr's `r length(getNamespaceExports("stringr"))`.
If you find yourself struggling to do something in stringr, it's worth taking a look at stringi.
The packages work very similarly, so you should be able to translate your stringr knowledge in a natural way.
The main difference is the prefix: `str_` vs. `stri_`.
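For example, these calls are near-equivalents; note also that stringi encodes the pattern interpretation in the function name (`_regex`, `_fixed`, ...), where stringr defaults to regular expressions:

```r
library(stringr)
library(stringi)

str_length("hello")               #> 5
stri_length("hello")              #> 5

str_detect("hello", "l+")         #> TRUE
stri_detect_regex("hello", "l+")  #> TRUE
```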
### Exercises
1. Find the stringi functions that:
a. Count the number of words.
b. Find duplicated strings.
c. Generate random text.
2. How do you control the language that `stri_sort()` uses for sorting?
### Exercises
1. What do the `extra` and `fill` arguments do in `separate()`?
Experiment with the various options for the following two toy datasets.
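For instance, with illustrative toy data where one dataset has a row with too many pieces and the other a row with too few:

```r
library(tibble)
library(tidyr)

df1 <- tibble(x = c("a,b,c", "d,e,f,g", "h,i,j"))
df2 <- tibble(x = c("a,b,c", "d,e", "f,g,i"))

# Row 2 of df1 has four pieces for three columns: see `extra`
df1 |> separate(x, into = c("one", "two", "three"))

# Row 2 of df2 has only two pieces: see `fill`
df2 |> separate(x, into = c("one", "two", "three"))
```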