Fix minor typos in import-databases (#1040)

* Fix typos

* dbplyr sql() to lower case
Maria Paula Caldas 2022-06-04 03:28:45 +02:00 committed by GitHub
parent 325925fbf1
commit 6408e00d93
1 changed file with 7 additions and 8 deletions


@@ -186,7 +186,7 @@ We won't discuss it further here, but if you're dealing with very large datasets
### Other functions
-There are lots of other functions in DBI that you might find useful if you're managing your own data (like `dbWriteTable()` which we used in @sec-load-data), but we're going to skip past them in the interests of staying focused on working with data that already lives in a database.
+There are lots of other functions in DBI that you might find useful if you're managing your own data (like `dbWriteTable()` which we used in @sec-load-data), but we're going to skip past them in the interest of staying focused on working with data that already lives in a database.
## dbplyr basics
@@ -196,7 +196,7 @@ In this, dbplyr translates to SQL; other backends include [dtplyr](https://dtply
To use dbplyr, you must first use `tbl()` to create an object that represents a database table[^import-databases-4]:
-[^import-databases-4]: If you want to mix SQL and dbplyr, you can also create a tbl from a SQL query with `tbl(con, SQL("SELECT * FROM foo")).`
+[^import-databases-4]: If you want to mix SQL and dbplyr, you can also create a tbl from a SQL query with `tbl(con, sql("SELECT * FROM foo")).`
```{r}
diamonds_db <- tbl(con, "diamonds")
@@ -575,7 +575,7 @@ mutate_query <- function(df, ...) {
```
Let's dive in with some summaries!
-Looking at the code below you'll notice that some summary functions, like `mean()`, have a relatively simple translation while others, like `median()`, are much complex.
+Looking at the code below you'll notice that some summary functions, like `mean()`, have a relatively simple translation while others, like `median()`, are much more complex.
The complexity is typically higher for operations that are common in statistics but less common in databases.
```{r}
@@ -598,9 +598,9 @@ flights |>
)
```
-In SQL, the `GROUP BY` clause is used exclusively for summary so here you can seeing that the grouping has moved to the `PARTITION BY` argument to `OVER`.
+In SQL, the `GROUP BY` clause is used exclusively for summary so here you can see that the grouping has moved to the `PARTITION BY` argument to `OVER`.
-Window functions includes all functions that look forward or backwards, like `lead()` and `lag()`:
+Window functions include all functions that look forward or backwards, like `lead()` and `lag()`:
```{r}
flights |>
@@ -650,7 +650,7 @@ flights |>
```
dbplyr also translates common string and date-time manipulation functions, which you can learn about in `vignette("translation-function", package = "dbplyr")`.
-dbplyr's translation are certainly not perfect, and there are many R functions that aren't translated yet, but dbplyr does a surprisingly good job covering the functions that you'll use most of the time.
+dbplyr's translations are certainly not perfect, and there are many R functions that aren't translated yet, but dbplyr does a surprisingly good job covering the functions that you'll use most of the time.
### Learning more
@@ -658,5 +658,4 @@ If you've finished this chapter and would like to learn more about SQL.
I have two recommendations:
- [*SQL for Data Scientists*](https://sqlfordatascientists.com) by Renée M. P. Teate is an introduction to SQL designed specifically for the needs of data scientists, and includes examples of the sort of highly interconnected data you're likely to encounter in real organisations.
-- [*Practical SQL*](https://www.practicalsql.com) by Anthony DeBarros is written from the perspective of a data journalist ( a data scientist specialized in telling compelling stories) and goes into more detail about getting your data into a database and running your own DBMS.
+- [*Practical SQL*](https://www.practicalsql.com) by Anthony DeBarros is written from the perspective of a data journalist (a data scientist specialized in telling compelling stories) and goes into more detail about getting your data into a database and running your own DBMS.
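The footnote change in this commit swaps DBI's uppercase `SQL()` for dbplyr's lowercase `sql()`. A minimal sketch of the pattern the footnote describes, assuming the RSQLite and ggplot2 packages are installed (the in-memory database and the `price > 1000` query are illustrative, not from the chapter):

```{r}
library(DBI)
library(dplyr)
library(dbplyr)

# Illustrative setup: an in-memory SQLite database holding ggplot2's diamonds data
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "diamonds", ggplot2::diamonds)

# dbplyr's sql() marks the string as a raw SQL query, so tbl() treats it
# as a query to wrap rather than as the name of an existing table
expensive_db <- tbl(con, sql("SELECT * FROM diamonds WHERE price > 1000"))

# The resulting tbl can then be used with ordinary dplyr verbs
expensive_db |>
  count(cut) |>
  show_query()

dbDisconnect(con)
```

Both spellings exist: `SQL()` comes from DBI, while `sql()` is dbplyr's equivalent; the commit standardizes on the lowercase form since the footnote is about mixing SQL with dbplyr.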