21 Iteration
============
21\.1 Introduction
------------------
The **microbenchmark** package is used for timing code.
```
library("tidyverse")
library("stringr")
library("microbenchmark")
```
The `map()` function appears in both the purrr and maps packages. See the
“Prerequisites” section of the [Introduction](https://r4ds.had.co.nz/data-visualisation.html#introduction-1).
If you see errors like the following, you are using the wrong `map()` function.
```
> map(c(TRUE, FALSE, TRUE), ~ !.)
Error: $ operator is invalid for atomic vectors
> map(-2:2, rnorm, n = 5)
Error in map(-2:2, rnorm, n = 5) :
argument 3 matches multiple formal arguments
```
You can check the package in which a function is defined using the `environment()` function:
```
environment(map)
#> <environment: namespace:purrr>
```
The result should include `namespace:purrr` if `map()` is coming from the purrr package.
To call a function from a specific package explicitly, use the double\-colon operator `::`.
For example,
```
purrr::map(c(TRUE, FALSE, TRUE), ~ !.)
#> [[1]]
#> [1] FALSE
#>
#> [[2]]
#> [1] TRUE
#>
#> [[3]]
#> [1] FALSE
```
21\.2 For loops
---------------
### Exercise 21\.2\.1
Write for\-loops to:
1. Compute the mean of every column in `mtcars`.
2. Determine the type of each column in `nycflights13::flights`.
3. Compute the number of unique values in each column of `iris`.
4. Generate 10 random normals for each of \\(\\mu\\) \= \-10, 0, 10, and 100\.
The answers for each part are below.
1. To compute the mean of every column in `mtcars`.
```
output <- vector("double", ncol(mtcars))
names(output) <- names(mtcars)
for (i in names(mtcars)) {
output[i] <- mean(mtcars[[i]])
}
output
#> mpg cyl disp hp drat wt qsec vs am gear
#> 20.091 6.188 230.722 146.688 3.597 3.217 17.849 0.438 0.406 3.688
#> carb
#> 2.812
```
2. Determine the type of each column in `nycflights13::flights`.
```
output <- vector("list", ncol(nycflights13::flights))
names(output) <- names(nycflights13::flights)
for (i in names(nycflights13::flights)) {
output[[i]] <- class(nycflights13::flights[[i]])
}
output
#> $year
#> [1] "integer"
#>
#> $month
#> [1] "integer"
#>
#> $day
#> [1] "integer"
#>
#> $dep_time
#> [1] "integer"
#>
#> $sched_dep_time
#> [1] "integer"
#>
#> $dep_delay
#> [1] "numeric"
#>
#> $arr_time
#> [1] "integer"
#>
#> $sched_arr_time
#> [1] "integer"
#>
#> $arr_delay
#> [1] "numeric"
#>
#> $carrier
#> [1] "character"
#>
#> $flight
#> [1] "integer"
#>
#> $tailnum
#> [1] "character"
#>
#> $origin
#> [1] "character"
#>
#> $dest
#> [1] "character"
#>
#> $air_time
#> [1] "numeric"
#>
#> $distance
#> [1] "numeric"
#>
#> $hour
#> [1] "numeric"
#>
#> $minute
#> [1] "numeric"
#>
#> $time_hour
#> [1] "POSIXct" "POSIXt"
```
I used a `list`, not a character vector, since the class of an object can have multiple values.
For example, the class of the `time_hour` column is POSIXct, POSIXt.
3. To compute the number of unique values in each column of the `iris` dataset.
```
data("iris")
iris_uniq <- vector("double", ncol(iris))
names(iris_uniq) <- names(iris)
for (i in names(iris)) {
iris_uniq[i] <- n_distinct(iris[[i]])
}
iris_uniq
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
4. To generate 10 random normals for each of \\(\\mu\\) \= \-10, 0, 10, and 100\.
```
# number to draw
n <- 10
# values of the mean
mu <- c(-10, 0, 10, 100)
normals <- vector("list", length(mu))
for (i in seq_along(normals)) {
normals[[i]] <- rnorm(n, mean = mu[i])
}
normals
#> [[1]]
#> [1] -11.40 -9.74 -12.44 -10.01 -9.38 -8.85 -11.82 -10.25 -10.24 -10.28
#>
#> [[2]]
#> [1] -0.5537 0.6290 2.0650 -1.6310 0.5124 -1.8630 -0.5220 -0.0526 0.5430
#> [10] -0.9141
#>
#> [[3]]
#> [1] 10.47 10.36 8.70 10.74 11.89 9.90 9.06 9.98 9.17 8.49
#>
#> [[4]]
#> [1] 100.9 100.2 100.2 101.6 100.1 99.9 98.1 99.7 99.7 101.1
```
However, we don’t need a for loop for this, since `rnorm()` recycles the `mean` argument.
Because `mu` is recycled across the `n * length(mu)` draws and `matrix()` fills column by column, each row of the result corresponds to one value of `mu`.
```
matrix(rnorm(n * length(mu), mean = mu), ncol = n)
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] -9.930 -9.56 -9.88 -10.2061 -12.27 -8.926 -11.178 -9.51 -8.663 -9.39
#> [2,] -0.639 2.76 -1.91 0.0192 2.68 -0.665 -0.976 -1.70 0.237 -0.11
#> [3,] 9.950 10.05 10.86 10.0296 9.64 11.114 11.065 8.53 11.318 10.17
#> [4,] 99.749 100.58 99.76 100.5498 100.21 99.754 100.132 100.28 100.524 99.91
```
### Exercise 21\.2\.2
Eliminate the for loop in each of the following examples by taking advantage of an existing function that works with vectors:
```
out <- ""
for (x in letters) {
out <- str_c(out, x)
}
out
#> [1] "abcdefghijklmnopqrstuvwxyz"
```
Since `str_c()` already works with vectors, use `str_c()` with the `collapse` argument to return a single string.
```
str_c(letters, collapse = "")
#> [1] "abcdefghijklmnopqrstuvwxyz"
```
For this I’m going to rename the variable `sd` to something different because `sd` is the name of the function we want to use.
```
x <- sample(100)
sd. <- 0
for (i in seq_along(x)) {
sd. <- sd. + (x[i] - mean(x))^2
}
sd. <- sqrt(sd. / (length(x) - 1))
sd.
#> [1] 29
```
We could simply use the `sd` function.
```
sd(x)
#> [1] 29
```
Or, if there were a need to use the equation (e.g., for pedagogical reasons), the
functions `mean()` and `sum()` already work with vectors:
```
sqrt(sum((x - mean(x))^2) / (length(x) - 1))
#> [1] 29
```
```
x <- runif(100)
out <- vector("numeric", length(x))
out[1] <- x[1]
for (i in 2:length(x)) {
out[i] <- out[i - 1] + x[i]
}
out
#> [1] 0.854 1.268 2.019 2.738 3.253 4.228 4.589 4.759 5.542 5.573
#> [11] 6.363 6.529 6.558 7.344 8.169 9.134 9.513 9.687 10.291 11.097
#> [21] 11.133 11.866 12.082 12.098 12.226 12.912 13.554 13.882 14.269 14.976
#> [31] 15.674 16.600 17.059 17.655 17.820 18.387 19.285 19.879 20.711 21.304
#> [41] 22.083 22.481 23.331 24.073 24.391 24.502 24.603 25.403 25.783 25.836
#> [51] 26.823 27.427 27.576 28.114 28.240 29.203 29.250 29.412 30.348 31.319
#> [61] 32.029 32.914 33.891 33.926 34.365 35.009 36.004 36.319 37.175 37.715
#> [71] 38.588 39.104 39.973 40.830 41.176 41.176 41.381 42.326 42.607 43.488
#> [81] 44.449 44.454 45.006 45.226 45.872 46.600 47.473 47.855 48.747 49.591
#> [91] 50.321 50.359 50.693 51.443 52.356 52.560 53.032 53.417 53.810 54.028
```
The code above is calculating a cumulative sum. Use the function `cumsum()`:
```
all.equal(cumsum(x), out)
#> [1] TRUE
```
### Exercise 21\.2\.3
Combine your function writing and for loop skills:
1. Write a for loop that `prints()` the lyrics to the children’s song “Alice the camel”.
2. Convert the nursery rhyme “ten in the bed” to a function.
Generalize it to any number of people in any sleeping structure.
3. Convert the song “99 bottles of beer on the wall” to a function.
Generalize to any number of any vessel containing any liquid on any surface.
The answers to each part follow.
1. The lyrics for [Alice the Camel](https://www.kididdles.com/lyrics/a012.html) are:
> Alice the camel has five humps.
>
> Alice the camel has five humps.
>
> Alice the camel has five humps.
>
> So go, Alice, go.
This verse is repeated, each time with one fewer hump,
until there are no humps.
The last verse, with no humps, is:
> Alice the camel has no humps.
>
> Alice the camel has no humps.
>
> Alice the camel has no humps.
>
> Now Alice is a horse.
We’ll iterate from five to no humps, and print out a different last line if there are no humps.
```
humps <- c("five", "four", "three", "two", "one", "no")
for (i in humps) {
cat(str_c("Alice the camel has ", rep(i, 3), " humps.",
collapse = "\n"
), "\n")
if (i == "no") {
cat("Now Alice is a horse.\n")
} else {
cat("So go, Alice, go.\n")
}
cat("\n")
}
#> Alice the camel has five humps.
#> Alice the camel has five humps.
#> Alice the camel has five humps.
#> So go, Alice, go.
#>
#> Alice the camel has four humps.
#> Alice the camel has four humps.
#> Alice the camel has four humps.
#> So go, Alice, go.
#>
#> Alice the camel has three humps.
#> Alice the camel has three humps.
#> Alice the camel has three humps.
#> So go, Alice, go.
#>
#> Alice the camel has two humps.
#> Alice the camel has two humps.
#> Alice the camel has two humps.
#> So go, Alice, go.
#>
#> Alice the camel has one humps.
#> Alice the camel has one humps.
#> Alice the camel has one humps.
#> So go, Alice, go.
#>
#> Alice the camel has no humps.
#> Alice the camel has no humps.
#> Alice the camel has no humps.
#> Now Alice is a horse.
```
2. The lyrics for [Ten in the Bed](https://www.kididdles.com/lyrics/t003.html) are:
> Here we go!
>
> There were ten in the bed
>
> and the little one said,
>
> “Roll over, roll over.”
>
> So they all rolled over and one fell out.
This verse is repeated, each time with one fewer in the bed, until there is one left.
That last verse is:
> One!
> There was one in the bed
>
> and the little one said,
>
> “I’m lonely…”
```
numbers <- c(
"ten", "nine", "eight", "seven", "six", "five",
"four", "three", "two", "one"
)
for (i in numbers) {
cat(str_c("There were ", i, " in the bed\n"))
cat("and the little one said\n")
if (i == "one") {
cat("I'm lonely...")
} else {
cat("Roll over, roll over\n")
cat("So they all rolled over and one fell out.\n")
}
cat("\n")
}
#> There were ten in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were nine in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were eight in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were seven in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were six in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were five in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were four in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were three in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were two in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were one in the bed
#> and the little one said
#> I'm lonely...
```
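The exercise also asks to turn this into a function generalized to any number of people in any sleeping structure. A minimal sketch; the function name and arguments (`in_the_bed()`, `total`, `structure`) are my own invention:
```
# Hypothetical generalization: `total` sleepers in any `structure`.
in_the_bed <- function(total, structure = "bed") {
  for (n in seq(total, 1)) {
    cat(str_c("There were ", n, " in the ", structure, "\n"))
    cat("and the little one said\n")
    if (n == 1) {
      cat("I'm lonely...\n")
    } else {
      cat("Roll over, roll over\n")
      cat("So they all rolled over and one fell out.\n")
    }
    cat("\n")
  }
}
in_the_bed(3, "hammock")
```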
3. The lyrics of [Ninety\-Nine Bottles of Beer on the Wall](https://en.wikipedia.org/wiki/99_Bottles_of_Beer) are
> 99 bottles of beer on the wall, 99 bottles of beer.
>
> Take one down, pass it around, 98 bottles of beer on the wall
This verse is repeated, each time with one fewer bottle, until
there are no more bottles of beer. The last verse is
> No more bottles of beer on the wall, no more bottles of beer.
>
> We’ve taken them down and passed them around; now we’re drunk and passed out!
For the bottles of beer, I define a helper function to correctly print the number of bottles.
```
bottles <- function(n) {
if (n > 1) {
str_c(n, " bottles")
} else if (n == 1) {
"1 bottle"
} else {
"no more bottles"
}
}
beer_bottles <- function(total_bottles) {
# print each lyric
for (current_bottles in seq(total_bottles, 0)) {
# first line
cat(str_to_sentence(str_c(
bottles(current_bottles), " of beer on the wall, ",
bottles(current_bottles), " of beer.\n"
)))
# second line
if (current_bottles > 0) {
cat(str_c(
"Take one down and pass it around, ", bottles(current_bottles - 1),
" of beer on the wall.\n"
))
} else {
cat(str_c("Go to the store and buy some more, ", bottles(total_bottles), " of beer on the wall.\n")) }
cat("\n")
}
}
beer_bottles(3)
#> 3 Bottles of beer on the wall, 3 bottles of beer.
#> Take one down and pass it around, 2 bottles of beer on the wall.
#>
#> 2 Bottles of beer on the wall, 2 bottles of beer.
#> Take one down and pass it around, 1 bottle of beer on the wall.
#>
#> 1 Bottle of beer on the wall, 1 bottle of beer.
#> Take one down and pass it around, no more bottles of beer on the wall.
#>
#> No more bottles of beer on the wall, no more bottles of beer.
#> Go to the store and buy some more, 3 bottles of beer on the wall.
```
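Part 3 also asks for a generalization to any number of any vessel containing any liquid on any surface. A minimal sketch, reusing the structure above; the function names and arguments (`vessels()`, `drink_song()`) are my own invention:
```
# Hypothetical generalization of bottles()/beer_bottles();
# pluralization is naive (just appends "s").
vessels <- function(n, vessel) {
  if (n > 1) {
    str_c(n, " ", vessel, "s")
  } else if (n == 1) {
    str_c("1 ", vessel)
  } else {
    str_c("no more ", vessel, "s")
  }
}

drink_song <- function(total, vessel = "bottle", liquid = "beer", surface = "wall") {
  for (n in seq(total, 0)) {
    cat(str_to_sentence(str_c(
      vessels(n, vessel), " of ", liquid, " on the ", surface, ", ",
      vessels(n, vessel), " of ", liquid, ".\n"
    )))
    if (n > 0) {
      cat(str_c(
        "Take one down and pass it around, ",
        vessels(n - 1, vessel), " of ", liquid, " on the ", surface, ".\n"
      ))
    } else {
      cat(str_c(
        "Go to the store and buy some more, ",
        vessels(total, vessel), " of ", liquid, " on the ", surface, ".\n"
      ))
    }
    cat("\n")
  }
}
drink_song(2, vessel = "jug", liquid = "cider", surface = "fence")
```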
### Exercise 21\.2\.4
It’s common to see for loops that don’t preallocate the output and instead increase the length of a vector at each step:
```
output <- vector("integer", 0)
for (i in seq_along(x)) {
output <- c(output, lengths(x[[i]]))
}
output
```
How does this affect performance?
Design and execute an experiment.
To compare these two approaches, I’ll define two functions:
`add_to_vector()`, which appends to a vector like the example in the question,
and `add_to_vector_2()`, which pre\-allocates a vector.
```
add_to_vector <- function(n) {
output <- vector("integer", 0)
for (i in seq_len(n)) {
output <- c(output, i)
}
output
}
```
```
add_to_vector_2 <- function(n) {
output <- vector("integer", n)
for (i in seq_len(n)) {
output[[i]] <- i
}
output
}
```
I’ll use the microbenchmark package to run these functions several times and compare their timings.
The microbenchmark package contains utilities for benchmarking R expressions.
In particular, the `microbenchmark()` function runs an R expression a number of times and records how long each run takes.
```
timings <- microbenchmark(add_to_vector(10000), add_to_vector_2(10000), times = 10)
timings
#> Unit: microseconds
#> expr min lq mean median uq max neval
#> add_to_vector(10000) 111658 113151 119034 117233 120429 143037 10
#> add_to_vector_2(10000) 337 348 1400 360 486 6264 10
```
In this example, appending to a vector takes about 325 times longer than pre\-allocating the vector.
The gap arises because `c()` copies the entire vector on every iteration, so appending does quadratic total work while pre\-allocation does linear work.
You may get different timings, but the longer the vector and the larger the objects, the more pre\-allocation will outperform appending.
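To see this growth directly, we could time both functions at several input sizes (a sketch; exact timings are machine\-dependent):
```
# The appending version should slow roughly quadratically with n,
# the pre-allocating version roughly linearly.
for (n in c(1000, 5000, 10000)) {
  print(microbenchmark(
    append = add_to_vector(n),
    prealloc = add_to_vector_2(n),
    times = 5
  ))
}
```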
21\.3 For loop variations
-------------------------
### Exercise 21\.3\.1
Imagine you have a directory full of CSV files that you want to read in.
You have their paths in a vector,
`files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)`, and now
want to read each one with `read_csv()`. Write the for loop that will
load them into a single data frame.
```
files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)
files
#> [1] "data//file1.csv" "data//file2.csv" "data//file3.csv"
```
Since the number of files is known, pre\-allocate a list with a length equal to the number of files.
```
df_list <- vector("list", length(files))
```
Then, read each file into a data frame, and assign it to an element in that list.
The result is a list of data frames.
```
for (i in seq_along(files)) {
df_list[[i]] <- read_csv(files[[i]])
}
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
```
```
print(df_list)
#> [[1]]
#> # A tibble: 2 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 1 a
#> 2 2 b
#>
#> [[2]]
#> # A tibble: 2 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 3 c
#> 2 4 d
#>
#> [[3]]
#> # A tibble: 2 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 5 e
#> 2 6 f
```
Finally, use `bind_rows()` to combine the list of data frames into a single data frame.
```
df <- bind_rows(df_list)
```
```
print(df)
#> # A tibble: 6 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 1 a
#> 2 2 b
#> 3 3 c
#> 4 4 d
#> 5 5 e
#> 6 6 f
```
Alternatively, I could have pre\-allocated a list with the names of the files.
```
df2_list <- vector("list", length(files))
names(df2_list) <- files
for (fname in files) {
df2_list[[fname]] <- read_csv(fname)
}
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
df2 <- bind_rows(df2_list)
```
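For comparison, the loop can be replaced entirely once purrr is introduced later in this chapter; `map_dfr()` maps `read_csv()` over the files and row\-binds the results (a one\-line sketch):
```
# read each file and bind the rows in a single step
df3 <- map_dfr(files, read_csv)
```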
### Exercise 21\.3\.2
What happens if you use `for (nm in names(x))` and `x` has no names?
What if only some of the elements are named?
What if the names are not unique?
Let’s try it out and see what happens.
When the vector has no names, `names(x)` returns `NULL`, so the loop body never runs.
In other words, the loop executes zero iterations.
```
x <- c(11, 12, 13)
print(names(x))
#> NULL
for (nm in names(x)) {
print(nm)
print(x[[nm]])
}
```
Note that the length of `NULL` is zero:
```
length(NULL)
#> [1] 0
```
If only some of the elements are named, we get an error when trying to access an element without a name.
```
x <- c(a = 11, 12, c = 13)
names(x)
#> [1] "a" "" "c"
```
```
for (nm in names(x)) {
print(nm)
print(x[[nm]])
}
#> [1] "a"
#> [1] 11
#> [1] ""
#> Error in x[[nm]]: subscript out of bounds
```
Finally, if the vector contains duplicate names, then `x[[nm]]` returns the *first* element with that name.
```
x <- c(a = 11, a = 12, c = 13)
names(x)
#> [1] "a" "a" "c"
```
```
for (nm in names(x)) {
print(nm)
print(x[[nm]])
}
#> [1] "a"
#> [1] 11
#> [1] "a"
#> [1] 11
#> [1] "c"
#> [1] 13
```
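A pattern that sidesteps all three problems is to loop over positions with `seq_along()` and look the names up, rather than indexing by name (a sketch):
```
# Indexing by position is unaffected by missing or duplicated names.
x <- c(a = 11, a = 12, c = 13)
for (i in seq_along(x)) {
  print(names(x)[[i]])
  print(x[[i]])
}
#> [1] "a"
#> [1] 11
#> [1] "a"
#> [1] 12
#> [1] "c"
#> [1] 13
```
Note that the duplicated name now yields both values, 11 and 12, rather than the first element twice.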
### Exercise 21\.3\.3
Write a function that prints the mean of each numeric column in a data frame, along with its name.
For example, `show_mean(iris)` would print:
```
show_mean(iris)
# > Sepal.Length: 5.84
# > Sepal.Width: 3.06
# > Petal.Length: 3.76
# > Petal.Width: 1.20
```
Extra challenge: what function did I use to make sure that the numbers lined up nicely, even though the variable names had different lengths?
There may be other functions to do this, but I’ll use `str_pad()` and `str_length()` to ensure that the space given to the variable names is the same.
I messed around with the options to `format()` until I got two digits.
```
show_mean <- function(df, digits = 2) {
# Get max length of all variable names in the dataset
maxstr <- max(str_length(names(df)))
for (nm in names(df)) {
if (is.numeric(df[[nm]])) {
cat(
str_c(str_pad(str_c(nm, ":"), maxstr + 1L, side = "right"),
format(mean(df[[nm]]), digits = digits, nsmall = digits),
sep = " "
),
"\n"
)
}
}
}
show_mean(iris)
#> Sepal.Length: 5.84
#> Sepal.Width: 3.06
#> Petal.Length: 3.76
#> Petal.Width: 1.20
```
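As an aside, base R’s `format()` could stand in for `str_pad()` here, since by default it left\-justifies character vectors and pads them to a common width (a sketch):
```
# format() pads every name to the width of the longest one
format(str_c(names(iris), ":"))
```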
### Exercise 21\.3\.4
What does this code do?
How does it work?
```
trans <- list(
disp = function(x) x * 0.0163871,
am = function(x) {
factor(x, labels = c("auto", "manual"))
}
)
```
```
for (var in names(trans)) {
mtcars[[var]] <- trans[[var]](mtcars[[var]])
}
```
This code mutates the `disp` and `am` columns:
* `disp` is multiplied by 0\.0163871
* `am` is replaced by a factor variable.
The code works by looping over a named list of functions.
It calls the named function in the list on the column of `mtcars` with the same name, and replaces the values of that column.
This is a function.
```
trans[["disp"]]
```
This applies that function to the column of `mtcars` with the same name:
```
trans[["disp"]](mtcars[["disp"]])
```
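The same update could be written without a loop; for example, with purrr’s `map2()` we can apply each function to the matching column and assign the results back (a sketch):
```
# apply each function in trans to the mtcars column of the same name
mtcars[names(trans)] <- map2(trans, mtcars[names(trans)], ~ .x(.y))
```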
21\.4 For loops vs. functionals
-------------------------------
### Exercise 21\.4\.1
Read the documentation for `apply()`.
In the 2nd case, what two for\-loops does it generalize?
For an object with two dimensions, such as a matrix or data frame, `apply()` replaces looping over the rows or columns of that object.
The `apply()` function is used like `apply(X, MARGIN, FUN, ...)`, where `X` is a matrix or array, `FUN` is a function to apply, and `...` are additional arguments passed to `FUN`.
When `MARGIN = 1`, the function is applied to each row.
For example, the following calculates the row means of a matrix.
```
X <- matrix(rnorm(15), nrow = 5)
X
#> [,1] [,2] [,3]
#> [1,] -1.4523 0.124 0.709
#> [2,] 0.9412 -0.998 -1.529
#> [3,] -0.3389 1.233 0.237
#> [4,] -0.0756 0.340 -1.313
#> [5,] 0.0402 -0.473 0.747
```
```
apply(X, 1, mean)
#> [1] -0.206 -0.529 0.377 -0.349 0.105
```
That is equivalent to this for\-loop.
```
X_row_means <- vector("numeric", length = nrow(X))
for (i in seq_len(nrow(X))) {
X_row_means[[i]] <- mean(X[i, ])
}
X_row_means
#> [1] -0.206 -0.529 0.377 -0.349 0.105
```
```
X <- matrix(rnorm(15), nrow = 5)
X
#> [,1] [,2] [,3]
#> [1,] -1.5625 1.153 1.20377
#> [2,] 0.0711 -1.687 -1.43127
#> [3,] -0.6395 -0.903 1.38291
#> [4,] -0.8452 1.318 0.00313
#> [5,] 0.6752 1.100 -0.07789
```
When `MARGIN = 2`, `apply()` is equivalent to a for\-loop looping over columns.
```
apply(X, 2, mean)
#> [1] -0.460 0.196 0.216
```
```
X_col_means <- vector("numeric", length = ncol(X))
for (i in seq_len(ncol(X))) {
X_col_means[[i]] <- mean(X[, i])
}
X_col_means
#> [1] -0.460 0.196 0.216
```
### Exercise 21\.4\.2
Adapt `col_summary()` so that it only applies to numeric columns.
You might want to start with an `is_numeric()` function that returns a logical vector that has a `TRUE` corresponding to each numeric column.
The original `col_summary()` function is
```
col_summary <- function(df, fun) {
out <- vector("double", length(df))
for (i in seq_along(df)) {
out[i] <- fun(df[[i]])
}
out
}
```
The adapted version adds extra logic to only apply the function to numeric columns.
```
col_summary2 <- function(df, fun) {
# create an empty vector which will store whether each
# column is numeric
numeric_cols <- vector("logical", length(df))
# test whether each column is numeric
for (i in seq_along(df)) {
numeric_cols[[i]] <- is.numeric(df[[i]])
}
# find the indexes of the numeric columns
idxs <- which(numeric_cols)
# find the number of numeric columns
n <- sum(numeric_cols)
# create a vector to hold the results
out <- vector("double", n)
# apply the function only to numeric vectors
for (i in seq_along(idxs)) {
out[[i]] <- fun(df[[idxs[[i]]]])
}
# name the vector
names(out) <- names(df)[idxs]
out
}
```
Let’s test that `col_summary2()` works by creating a small data frame with
some numeric and non\-numeric columns.
```
df <- tibble(
X1 = c(1, 2, 3),
X2 = c("A", "B", "C"),
X3 = c(0, -1, 5),
X4 = c(TRUE, FALSE, TRUE)
)
col_summary2(df, mean)
#> X1 X3
#> 2.00 1.33
```
As expected, it only calculates the mean of the numeric columns, `X1` and `X3`.
Let’s test that it works with another function.
```
col_summary2(df, median)
#> X1 X3
#> 2 0
```
21\.5 The map functions
-----------------------
### Exercise 21\.5\.1
Write code that uses one of the map functions to:
1. Compute the mean of every column in `mtcars`.
2. Determine the type of each column in `nycflights13::flights`.
3. Compute the number of unique values in each column of `iris`.
4. Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\).
1. To calculate the mean of every column in `mtcars`, apply the function
`mean()` to each column, and use `map_dbl()`, since the results are numeric.
```
map_dbl(mtcars, mean)
#> mpg cyl disp hp drat wt qsec vs am gear
#> 20.091 6.188 230.722 146.688 3.597 3.217 17.849 0.438 0.406 3.688
#> carb
#> 2.812
```
2. To calculate the type of every column in `nycflights13::flights`, apply
the function `typeof()`, discussed in the section on [Vector basics](https://r4ds.had.co.nz/vectors.html#vector-basics),
and use `map_chr()`, since the results are character.
```
map_chr(nycflights13::flights, typeof)
#> year month day dep_time sched_dep_time
#> "integer" "integer" "integer" "integer" "integer"
#> dep_delay arr_time sched_arr_time arr_delay carrier
#> "double" "integer" "integer" "double" "character"
#> flight tailnum origin dest air_time
#> "integer" "character" "character" "character" "double"
#> distance hour minute time_hour
#> "double" "double" "double" "double"
```
3. The function `n_distinct()` calculates the number of unique values
in a vector.
```
map_int(iris, n_distinct)
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
The `map_int()` function is used since `n_distinct()` returns an integer.
However, the `map_dbl()` function will also work.
```
map_dbl(iris, n_distinct)
```
An alternative to the `n_distinct()` function is the expression, `length(unique(...))`.
The `n_distinct()` function is more concise and faster, but `length(unique(...))` provides an example of using anonymous functions with map functions.
An anonymous function can be written using the standard R syntax for a function:
```
map_int(iris, function(x) length(unique(x)))
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
Additionally, map functions accept one\-sided formulas as a more concise alternative to specify an anonymous function:
```
map_int(iris, ~length(unique(.x)))
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
In this case, the anonymous function accepts one argument, which is referenced by `.x` in the expression `length(unique(.x))`.
4. To generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\):
The result is a list of numeric vectors.
```
map(c(-10, 0, 10, 100), ~rnorm(n = 10, mean = .))
#> [[1]]
#> [1] -9.56 -9.87 -10.83 -10.50 -11.19 -10.75 -8.54 -10.83 -9.71 -10.48
#>
#> [[2]]
#> [1] -0.6048 1.4601 0.1497 -1.4333 -0.0103 -0.2122 -0.9063 -2.1022 1.8934
#> [10] -0.9681
#>
#> [[3]]
#> [1] 9.90 10.24 10.06 7.82 9.88 10.11 10.01 11.88 12.16 10.71
#>
#> [[4]]
#> [1] 100.8 99.7 101.0 99.1 100.6 100.3 100.4 101.1 99.1 100.2
```
Since a single call of `rnorm()` returns a numeric vector with a length greater
than one, we cannot use `map_dbl()`, which requires the function to return a numeric
vector of length one (see [Exercise 21\.5\.4](iteration.html#exercise-21.5.4)).
The map functions pass any additional arguments to the function being called.
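For example, the draws above could equivalently be written by passing `n = 10` through `map()`; each element of the input is then matched to `rnorm()`’s `mean` argument:
```
# -10, 0, 10, 100 are matched to `mean` because `n` is already named
map(c(-10, 0, 10, 100), rnorm, n = 10)
```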
### Exercise 21\.5\.2
How can you create a single vector that for each column in a data frame indicates whether or not it’s a factor?
The function `is.factor()` indicates whether a vector is a factor.
```
is.factor(diamonds$color)
#> [1] TRUE
```
Checking all columns in a data frame is a job for a `map_*()` function.
Since the result of `is.factor()` is logical, we will use `map_lgl()` to apply `is.factor()` to the columns of the data frame.
```
map_lgl(diamonds, is.factor)
#> carat cut color clarity depth table price x y z
#> FALSE TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
```
### Exercise 21\.5\.3
What happens when you use the map functions on vectors that aren’t lists?
What does `map(1:5, runif)` do?
Why?
Map functions work with any vectors, not just lists.
As with lists, the map functions will apply the function to each element of the vector.
In the following examples, the inputs to `map()` are atomic vectors (logical, character, integer, double).
```
map(c(TRUE, FALSE, TRUE), ~ !.)
#> [[1]]
#> [1] FALSE
#>
#> [[2]]
#> [1] TRUE
#>
#> [[3]]
#> [1] FALSE
map(c("Hello", "World"), str_to_upper)
#> [[1]]
#> [1] "HELLO"
#>
#> [[2]]
#> [1] "WORLD"
map(1:5, ~ rnorm(.))
#> [[1]]
#> [1] 1.42
#>
#> [[2]]
#> [1] -0.384 -0.174
#>
#> [[3]]
#> [1] -0.222 -1.010 0.481
#>
#> [[4]]
#> [1] 1.604 -1.515 -1.416 0.877
#>
#> [[5]]
#> [1] 0.624 2.112 -0.356 -1.064 1.077
map(c(-0.5, 0, 1), ~ rnorm(1, mean = .))
#> [[1]]
#> [1] 0.682
#>
#> [[2]]
#> [1] 0.198
#>
#> [[3]]
#> [1] 0.6
```
It is important to be aware that while the input of `map()` can be any vector, the output is always a list.
```
map(1:5, runif)
#> [[1]]
#> [1] 0.731
#>
#> [[2]]
#> [1] 0.852 0.976
#>
#> [[3]]
#> [1] 0.113 0.970 0.648
#>
#> [[4]]
#> [1] 0.0561 0.4731 0.2946 0.6103
#>
#> [[5]]
#> [1] 0.1211 0.6294 0.7120 0.6121 0.0344
```
This expression is equivalent to running the following.
```
list(
runif(1),
runif(2),
runif(3),
runif(4),
runif(5)
)
#> [[1]]
#> [1] 0.666
#>
#> [[2]]
#> [1] 0.653 0.452
#>
#> [[3]]
#> [1] 0.517 0.677 0.881
#>
#> [[4]]
#> [1] 0.731 0.399 0.431 0.145
#>
#> [[5]]
#> [1] 0.4511 0.5788 0.0704 0.7423 0.5492
```
The `map()` function loops through the numbers 1 to 5\.
For each value, it calls `runif()` with that number as the first argument, which is the number of samples to draw.
The result is a list of length five containing numeric vectors of lengths one through five, each with random draws from a uniform distribution.
Note that although input to `map()` was an integer vector, the return value was a list.
### Exercise 21\.5\.4
What does `map(-2:2, rnorm, n = 5)` do?
Why?
What does `map_dbl(-2:2, rnorm, n = 5)` do?
Why?
Consider the first expression.
```
map(-2:2, rnorm, n = 5)
#> [[1]]
#> [1] -1.656 -0.522 -1.928 0.126 -3.476
#>
#> [[2]]
#> [1] -0.5921 0.3940 -0.6397 -0.3454 0.0522
#>
#> [[3]]
#> [1] -1.980 1.208 -0.169 0.295 1.266
#>
#> [[4]]
#> [1] -0.135 -0.131 1.110 1.853 0.766
#>
#> [[5]]
#> [1] 4.087 1.889 0.607 0.858 3.705
```
This expression takes samples of size five from five normal distributions, with means of \-2, \-1, 0, 1, and 2, but the same standard deviation (1\).
It returns a list in which each element is a numeric vector of length 5\.
However, if instead, we use `map_dbl()`, the expression raises an error.
```
map_dbl(-2:2, rnorm, n = 5)
#> Error: Result 1 must be a single double, not a double vector of length 5
```
This is because the `map_dbl()` function requires the function it applies to each element to return a numeric vector of length one.
If the function returns either a non\-numeric vector or a numeric vector with a length greater than one, `map_dbl()` will raise an error.
The reason for this strictness is that `map_dbl()` guarantees that it will return a numeric vector of the *same length* as its input vector.
This concept applies to the other `map_*()` functions.
The function `map_chr()` requires that the function always return a *character* vector of length one;
`map_int()` requires that the function always return an *integer* vector of length one;
`map_lgl()` requires that the function always return a *logical* vector of length one.
Use the `map()` function if the function will return values of varying types or lengths.
To return a numeric vector, use `flatten_dbl()` to coerce the list returned by `map()` to a numeric vector.
```
map(-2:2, rnorm, n = 5) %>%
flatten_dbl()
#> [1] -2.145 -1.474 -0.266 -0.551 -0.482 -1.384 0.827 -1.551 -1.866 -1.344
#> [11] 1.063 0.813 1.803 -0.105 0.982 -0.713 0.168 2.100 0.826 1.179
#> [21] 1.302 1.040 1.025 1.661 3.152
```
### Exercise 21\.5\.5
Rewrite `map(x, function(df) lm(mpg ~ wt, data = df))` to eliminate the anonymous function.
The code in this question does not run as written, so I will use the following code.
```
x <- split(mtcars, mtcars$cyl)
map(x, function(df) lm(mpg ~ wt, data = df))
#> $`4`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 39.57 -5.65
#>
#>
#> $`6`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 28.41 -2.78
#>
#>
#> $`8`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 23.87 -2.19
```
We can eliminate the use of an anonymous function using the `~` shortcut.
```
map(x, ~ lm(mpg ~ wt, data = .))
#> $`4`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = .)
#>
#> Coefficients:
#> (Intercept) wt
#> 39.57 -5.65
#>
#>
#> $`6`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = .)
#>
#> Coefficients:
#> (Intercept) wt
#> 28.41 -2.78
#>
#>
#> $`8`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = .)
#>
#> Coefficients:
#> (Intercept) wt
#> 23.87 -2.19
```
Though not the intent of this question, the other way to eliminate an anonymous function is to create a named one.
```
run_reg <- function(df) {
lm(mpg ~ wt, data = df)
}
map(x, run_reg)
#> $`4`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 39.57 -5.65
#>
#>
#> $`6`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 28.41 -2.78
#>
#>
#> $`8`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 23.87 -2.19
```
21\.6 Dealing with failure
--------------------------
No exercises
21\.7 Mapping over multiple arguments
-------------------------------------
No exercises
21\.8 Walk
----------
No exercises
21\.9 Other patterns of for loops
---------------------------------
### Exercise 21\.9\.1
Implement your own version of `every()` using a for loop.
Compare it with `purrr::every()`.
What does purrr’s version do that your version doesn’t?
```
# Use ... to pass arguments to the function
every2 <- function(.x, .p, ...) {
for (i in .x) {
if (!.p(i, ...)) {
# if any element is FALSE, we know not all of them were TRUE
return(FALSE)
}
}
# if no element was FALSE, then all of them were TRUE
TRUE
}
every2(1:3, function(x) {
x > 1
})
#> [1] FALSE
every2(1:3, function(x) {
x > 0
})
#> [1] TRUE
```
The function `purrr::every()` is more flexible in its predicate argument `.p`: it also accepts one\-sided formulas, and character or integer vectors that extract an element by name or position when the elements of `.x` are lists.
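A short sketch of both behaviors, assuming purrr’s `as_mapper()` conversion rules:
```
# formula predicates work like anonymous functions
every(1:3, ~ . > 0)
#> [1] TRUE
# a character .p extracts that element from each list and tests it
every(list(list(a = TRUE), list(a = FALSE)), "a")
#> [1] FALSE
```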
### Exercise 21\.9\.2
Create an enhanced `col_summary()` that applies a summary function to every numeric column in a data frame.
I will use `map()` to apply the function to all the columns, and `keep()` to select only the numeric columns.
```
col_sum2 <- function(df, f, ...) {
map(keep(df, is.numeric), f, ...)
}
```
```
col_sum2(iris, mean)
#> $Sepal.Length
#> [1] 5.84
#>
#> $Sepal.Width
#> [1] 3.06
#>
#> $Petal.Length
#> [1] 3.76
#>
#> $Petal.Width
#> [1] 1.2
```
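Because of the `...` pass\-through, callers can forward extra arguments to `f`; for example, requesting a trimmed mean:
```
# extra arguments such as `trim` are forwarded to the summary function
col_sum2(iris, mean, trim = 0.1)
```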
### Exercise 21\.9\.3
A possible base R equivalent of `col_summary()` is:
```
col_sum3 <- function(df, f) {
is_num <- sapply(df, is.numeric)
df_num <- df[, is_num]
sapply(df_num, f)
}
```
But it has a number of bugs as illustrated with the following inputs:
```
df <- tibble(
x = 1:3,
y = 3:1,
z = c("a", "b", "c")
)
# OK
col_sum3(df, mean)
# Has problems: don't always return numeric vector
col_sum3(df[1:2], mean)
col_sum3(df[1], mean)
col_sum3(df[0], mean)
```
What causes these bugs?
The cause of these bugs is the behavior of `sapply()`.
The `sapply()` function does not guarantee the type of vector it returns; it returns different types of vectors depending on its inputs.
If no columns are selected, instead of returning an empty numeric vector, it returns an empty list.
This causes an error, since a list cannot be used to subset columns with `[`.
```
sapply(df[0], is.numeric)
#> named list()
```
```
sapply(df[1], is.numeric)
#>    x
#> TRUE
```
```
sapply(df[1:2], is.numeric)
#>    x    y
#> TRUE TRUE
```
The `sapply()` function tries to be helpful by simplifying the results, but this behavior can be counterproductive.
It is okay to use the `sapply()` function interactively, but avoid programming with it.
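A type\-stable base R alternative is `vapply()`, which requires declaring the expected result type and errors rather than silently simplifying. A minimal sketch (the name `col_sum4` is my own):
```
col_sum4 <- function(df, f) {
  # vapply() always returns a vector matching the FUN.VALUE template
  is_num <- vapply(df, is.numeric, logical(1))
  vapply(df[is_num], f, numeric(1))
}
col_sum4(df[0], mean)
#> numeric(0)
```
Unlike `col_sum3()`, this returns an empty numeric vector for zero columns instead of raising an error.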
21\.1 Introduction
------------------
The **microbenchmark** package is used for timing code.
```
library("tidyverse")
library("stringr")
library("microbenchmark")
```
The `map()` function appears in both the purrr and maps packages. See the
“Prerequisites” section of the [Introduction](https://r4ds.had.co.nz/data-visualisation.html#introduction-1).
If you see errors like the following, you are using the wrong `map()` function.
```
> map(c(TRUE, FALSE, TRUE), ~ !.)
Error: $ operator is invalid for atomic vectors
> map(-2:2, rnorm, n = 5)
Error in map(-2:2, rnorm, n = 5) :
argument 3 matches multiple formal arguments
```
You can check the package in which a function is defined using the `environment()` function:
```
environment(map)
#> <environment: namespace:purrr>
```
The result should include `namespace:purrr` if `map()` is coming from the purrr package.
To explicitly reference the package to get a function from, use the colon operator `::`.
For example,
```
purrr::map(c(TRUE, FALSE, TRUE), ~ !.)
#> [[1]]
#> [1] FALSE
#>
#> [[2]]
#> [1] TRUE
#>
#> [[3]]
#> [1] FALSE
```
21\.2 For loops
---------------
### Exercise 21\.2\.1
Write for\-loops to:
1. Compute the mean of every column in `mtcars`.
2. Determine the type of each column in `nycflights13::flights`.
3. Compute the number of unique values in each column of `iris`.
4. Generate 10 random normals for each of \\(\\mu\\) \= \-10, 0, 10, and 100\.
The answers for each part are below.
1. To compute the mean of every column in `mtcars`.
```
output <- vector("double", ncol(mtcars))
names(output) <- names(mtcars)
for (i in names(mtcars)) {
output[i] <- mean(mtcars[[i]])
}
output
#> mpg cyl disp hp drat wt qsec vs am gear
#> 20.091 6.188 230.722 146.688 3.597 3.217 17.849 0.438 0.406 3.688
#> carb
#> 2.812
```
2. Determine the type of each column in `nycflights13::flights`.
```
output <- vector("list", ncol(nycflights13::flights))
names(output) <- names(nycflights13::flights)
for (i in names(nycflights13::flights)) {
output[[i]] <- class(nycflights13::flights[[i]])
}
output
#> $year
#> [1] "integer"
#>
#> $month
#> [1] "integer"
#>
#> $day
#> [1] "integer"
#>
#> $dep_time
#> [1] "integer"
#>
#> $sched_dep_time
#> [1] "integer"
#>
#> $dep_delay
#> [1] "numeric"
#>
#> $arr_time
#> [1] "integer"
#>
#> $sched_arr_time
#> [1] "integer"
#>
#> $arr_delay
#> [1] "numeric"
#>
#> $carrier
#> [1] "character"
#>
#> $flight
#> [1] "integer"
#>
#> $tailnum
#> [1] "character"
#>
#> $origin
#> [1] "character"
#>
#> $dest
#> [1] "character"
#>
#> $air_time
#> [1] "numeric"
#>
#> $distance
#> [1] "numeric"
#>
#> $hour
#> [1] "numeric"
#>
#> $minute
#> [1] "numeric"
#>
#> $time_hour
#> [1] "POSIXct" "POSIXt"
```
I used a `list`, not a character vector, since the class of an object can have multiple values.
For example, the class of the `time_hour` column is POSIXct, POSIXt.
3. To compute the number of unique values in each column of the `iris` dataset.
```
data("iris")
iris_uniq <- vector("double", ncol(iris))
names(iris_uniq) <- names(iris)
for (i in names(iris)) {
iris_uniq[i] <- n_distinct(iris[[i]])
}
iris_uniq
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
4. To generate 10 random normals for each of \\(\\mu\\) \= \-10, 0, 10, and 100\.
```
# number to draw
n <- 10
# values of the mean
mu <- c(-10, 0, 10, 100)
normals <- vector("list", length(mu))
for (i in seq_along(normals)) {
normals[[i]] <- rnorm(n, mean = mu[i])
}
normals
#> [[1]]
#> [1] -11.40 -9.74 -12.44 -10.01 -9.38 -8.85 -11.82 -10.25 -10.24 -10.28
#>
#> [[2]]
#> [1] -0.5537 0.6290 2.0650 -1.6310 0.5124 -1.8630 -0.5220 -0.0526 0.5430
#> [10] -0.9141
#>
#> [[3]]
#> [1] 10.47 10.36 8.70 10.74 11.89 9.90 9.06 9.98 9.17 8.49
#>
#> [[4]]
#> [1] 100.9 100.2 100.2 101.6 100.1 99.9 98.1 99.7 99.7 101.1
```
However, we don’t need a for loop for this since `rnorm()` recycle the `mean` argument.
```
matrix(rnorm(n * length(mu), mean = mu), ncol = n)
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] -9.930 -9.56 -9.88 -10.2061 -12.27 -8.926 -11.178 -9.51 -8.663 -9.39
#> [2,] -0.639 2.76 -1.91 0.0192 2.68 -0.665 -0.976 -1.70 0.237 -0.11
#> [3,] 9.950 10.05 10.86 10.0296 9.64 11.114 11.065 8.53 11.318 10.17
#> [4,] 99.749 100.58 99.76 100.5498 100.21 99.754 100.132 100.28 100.524 99.91
```
### Exercise 21\.2\.2
Eliminate the for loop in each of the following examples by taking advantage of an existing function that works with vectors:
```
out <- ""
for (x in letters) {
out <- str_c(out, x)
}
out
#> [1] "abcdefghijklmnopqrstuvwxyz"
```
Since `str_c()` already works with vectors, use `str_c()` with the `collapse` argument to return a single string.
```
str_c(letters, collapse = "")
#> [1] "abcdefghijklmnopqrstuvwxyz"
```
For this I’m going to rename the variable `sd` to something different because `sd` is the name of the function we want to use.
```
x <- sample(100)
sd. <- 0
for (i in seq_along(x)) {
sd. <- sd. + (x[i] - mean(x))^2
}
sd. <- sqrt(sd. / (length(x) - 1))
sd.
#> [1] 29
```
We could simply use the `sd` function.
```
sd(x)
#> [1] 29
```
Or if there was a need to use the equation (e.g. for pedagogical reasons), then
the functions `mean()` and `sum()` already work with vectors:
```
sqrt(sum((x - mean(x))^2) / (length(x) - 1))
#> [1] 29
```
```
x <- runif(100)
out <- vector("numeric", length(x))
out[1] <- x[1]
for (i in 2:length(x)) {
out[i] <- out[i - 1] + x[i]
}
out
#> [1] 0.854 1.268 2.019 2.738 3.253 4.228 4.589 4.759 5.542 5.573
#> [11] 6.363 6.529 6.558 7.344 8.169 9.134 9.513 9.687 10.291 11.097
#> [21] 11.133 11.866 12.082 12.098 12.226 12.912 13.554 13.882 14.269 14.976
#> [31] 15.674 16.600 17.059 17.655 17.820 18.387 19.285 19.879 20.711 21.304
#> [41] 22.083 22.481 23.331 24.073 24.391 24.502 24.603 25.403 25.783 25.836
#> [51] 26.823 27.427 27.576 28.114 28.240 29.203 29.250 29.412 30.348 31.319
#> [61] 32.029 32.914 33.891 33.926 34.365 35.009 36.004 36.319 37.175 37.715
#> [71] 38.588 39.104 39.973 40.830 41.176 41.176 41.381 42.326 42.607 43.488
#> [81] 44.449 44.454 45.006 45.226 45.872 46.600 47.473 47.855 48.747 49.591
#> [91] 50.321 50.359 50.693 51.443 52.356 52.560 53.032 53.417 53.810 54.028
```
The code above is calculating a cumulative sum. Use the function `cumsum()`
```
all.equal(cumsum(x), out)
#> [1] TRUE
```
### Exercise 21\.2\.3
Combine your function writing and for loop skills:
1. Write a for loop that `prints()` the lyrics to the children’s song “Alice the camel”.
2. Convert the nursery rhyme “ten in the bed” to a function.
Generalize it to any number of people in any sleeping structure.
3. Convert the song “99 bottles of beer on the wall” to a function.
Generalize to any number of any vessel containing any liquid on surface.
The answers to each part follow.
1. The lyrics for [Alice the Camel](https://www.kididdles.com/lyrics/a012.html) are:
> Alice the camel has five humps.
>
> Alice the camel has five humps.
>
> Alice the camel has five humps.
>
> So go, Alice, go.
This verse is repeated, each time with one fewer hump,
until there are no humps.
The last verse, with no humps, is:
> Alice the camel has no humps.
>
> Alice the camel has no humps.
>
> Alice the camel has no humps.
>
> Now Alice is a horse.
We’ll iterate from five to no humps, and print out a different last line if there are no humps.
```
humps <- c("five", "four", "three", "two", "one", "no")
for (i in humps) {
cat(str_c("Alice the camel has ", rep(i, 3), " humps.",
collapse = "\n"
), "\n")
if (i == "no") {
cat("Now Alice is a horse.\n")
} else {
cat("So go, Alice, go.\n")
}
cat("\n")
}
#> Alice the camel has five humps.
#> Alice the camel has five humps.
#> Alice the camel has five humps.
#> So go, Alice, go.
#>
#> Alice the camel has four humps.
#> Alice the camel has four humps.
#> Alice the camel has four humps.
#> So go, Alice, go.
#>
#> Alice the camel has three humps.
#> Alice the camel has three humps.
#> Alice the camel has three humps.
#> So go, Alice, go.
#>
#> Alice the camel has two humps.
#> Alice the camel has two humps.
#> Alice the camel has two humps.
#> So go, Alice, go.
#>
#> Alice the camel has one humps.
#> Alice the camel has one humps.
#> Alice the camel has one humps.
#> So go, Alice, go.
#>
#> Alice the camel has no humps.
#> Alice the camel has no humps.
#> Alice the camel has no humps.
#> Now Alice is a horse.
```
2. The lyrics for [Ten in the Bed](https://www.kididdles.com/lyrics/t003.html) are:
> Here we go!
>
> There were ten in the bed
>
> and the little one said,
>
> “Roll over, roll over.”
>
> So they all rolled over and one fell out.
This verse is repeated, each time with one fewer in the bed, until there is one left.
That last verse is:
> One!
> There was one in the bed
>
> and the little one said,
>
> “I’m lonely…”
```
numbers <- c(
"ten", "nine", "eight", "seven", "six", "five",
"four", "three", "two", "one"
)
for (i in numbers) {
cat(str_c("There were ", i, " in the bed\n"))
cat("and the little one said\n")
if (i == "one") {
cat("I'm lonely...")
} else {
cat("Roll over, roll over\n")
cat("So they all rolled over and one fell out.\n")
}
cat("\n")
}
#> There were ten in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were nine in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were eight in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were seven in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were six in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were five in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were four in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were three in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were two in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were one in the bed
#> and the little one said
#> I'm lonely...
```
3. The lyrics of [Ninety\-Nine Bottles of Beer on the Wall](https://en.wikipedia.org/wiki/99_Bottles_of_Beer) are
> 99 bottles of beer on the wall, 99 bottles of beer.
>
> Take one down, pass it around, 98 bottles of beer on the wall
This verse is repeated, each time with one few bottle, until
there are no more bottles of beer. The last verse is
> No more bottles of beer on the wall, no more bottles of beer.
>
> We’ve taken them down and passed them around; now we’re drunk and passed out!
For the bottles of beer, I define a helper function to correctly print the number of bottles.
```
bottles <- function(n) {
if (n > 1) {
str_c(n, " bottles")
} else if (n == 1) {
"1 bottle"
} else {
"no more bottles"
}
}
beer_bottles <- function(total_bottles) {
# print each lyric
for (current_bottles in seq(total_bottles, 0)) {
# first line
cat(str_to_sentence(str_c(bottles(current_bottles), " of beer on the wall, ", bottles(current_bottles), " of beer.\n")))
# second line
if (current_bottles > 0) {
cat(str_c(
"Take one down and pass it around, ", bottles(current_bottles - 1),
" of beer on the wall.\n"
))
} else {
cat(str_c("Go to the store and buy some more, ", bottles(total_bottles), " of beer on the wall.\n")) }
cat("\n")
}
}
beer_bottles(3)
#> 3 Bottles of beer on the wall, 3 bottles of beer.
#> Take one down and pass it around, 2 bottles of beer on the wall.
#>
#> 2 Bottles of beer on the wall, 2 bottles of beer.
#> Take one down and pass it around, 1 bottle of beer on the wall.
#>
#> 1 Bottle of beer on the wall, 1 bottle of beer.
#> Take one down and pass it around, no more bottles of beer on the wall.
#>
#> No more bottles of beer on the wall, no more bottles of beer.
#> Go to the store and buy some more, 3 bottles of beer on the wall.
```
#### Exercise 21\.2\.4
It’s common to see for loops that don’t preallocate the output and instead increase the length of a vector at each step:
```
output <- vector("integer", 0)
for (i in seq_along(x)) {
output <- c(output, lengths(x[[i]]))
}
output
```
How does this affect performance?
Design and execute an experiment.
In order to compare these two approaches, I’ll define two functions:
`add_to_vector` will append to a vector, like the example in the question,
and `add_to_vector_2` which pre\-allocates a vector.
```
add_to_vector <- function(n) {
output <- vector("integer", 0)
for (i in seq_len(n)) {
output <- c(output, i)
}
output
}
```
```
add_to_vector_2 <- function(n) {
output <- vector("integer", n)
for (i in seq_len(n)) {
output[[i]] <- i
}
output
}
```
I’ll use the package microbenchmark to run these functions several times and compare the time it takes.
The package microbenchmark contains utilities for benchmarking R expressions.
In particular, the `microbenchmark()` function will run an R expression a number of times and time it.
```
timings <- microbenchmark(add_to_vector(10000), add_to_vector_2(10000), times = 10)
timings
#> Unit: microseconds
#> expr min lq mean median uq max neval
#> add_to_vector(10000) 111658 113151 119034 117233 120429 143037 10
#> add_to_vector_2(10000) 337 348 1400 360 486 6264 10
```
In this example, appending to a vector takes 325 times longer than pre\-allocating the vector.
You may get different answers, but the longer the vector and the larger the objects, the more that pre\-allocation will outperform appending.
### Exercise 21\.2\.1
Write for\-loops to:
1. Compute the mean of every column in `mtcars`.
2. Determine the type of each column in `nycflights13::flights`.
3. Compute the number of unique values in each column of `iris`.
4. Generate 10 random normals for each of \\(\\mu\\) \= \-10, 0, 10, and 100\.
The answers for each part are below.
1. To compute the mean of every column in `mtcars`.
```
output <- vector("double", ncol(mtcars))
names(output) <- names(mtcars)
for (i in names(mtcars)) {
output[i] <- mean(mtcars[[i]])
}
output
#> mpg cyl disp hp drat wt qsec vs am gear
#> 20.091 6.188 230.722 146.688 3.597 3.217 17.849 0.438 0.406 3.688
#> carb
#> 2.812
```
2. Determine the type of each column in `nycflights13::flights`.
```
output <- vector("list", ncol(nycflights13::flights))
names(output) <- names(nycflights13::flights)
for (i in names(nycflights13::flights)) {
output[[i]] <- class(nycflights13::flights[[i]])
}
output
#> $year
#> [1] "integer"
#>
#> $month
#> [1] "integer"
#>
#> $day
#> [1] "integer"
#>
#> $dep_time
#> [1] "integer"
#>
#> $sched_dep_time
#> [1] "integer"
#>
#> $dep_delay
#> [1] "numeric"
#>
#> $arr_time
#> [1] "integer"
#>
#> $sched_arr_time
#> [1] "integer"
#>
#> $arr_delay
#> [1] "numeric"
#>
#> $carrier
#> [1] "character"
#>
#> $flight
#> [1] "integer"
#>
#> $tailnum
#> [1] "character"
#>
#> $origin
#> [1] "character"
#>
#> $dest
#> [1] "character"
#>
#> $air_time
#> [1] "numeric"
#>
#> $distance
#> [1] "numeric"
#>
#> $hour
#> [1] "numeric"
#>
#> $minute
#> [1] "numeric"
#>
#> $time_hour
#> [1] "POSIXct" "POSIXt"
```
I used a `list`, not a character vector, since the class of an object can have multiple values.
For example, the class of the `time_hour` column is POSIXct, POSIXt.
3. To compute the number of unique values in each column of the `iris` dataset.
```
data("iris")
iris_uniq <- vector("double", ncol(iris))
names(iris_uniq) <- names(iris)
for (i in names(iris)) {
iris_uniq[i] <- n_distinct(iris[[i]])
}
iris_uniq
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
4. To generate 10 random normals for each of \\(\\mu\\) \= \-10, 0, 10, and 100\.
```
# number to draw
n <- 10
# values of the mean
mu <- c(-10, 0, 10, 100)
normals <- vector("list", length(mu))
for (i in seq_along(normals)) {
normals[[i]] <- rnorm(n, mean = mu[i])
}
normals
#> [[1]]
#> [1] -11.40 -9.74 -12.44 -10.01 -9.38 -8.85 -11.82 -10.25 -10.24 -10.28
#>
#> [[2]]
#> [1] -0.5537 0.6290 2.0650 -1.6310 0.5124 -1.8630 -0.5220 -0.0526 0.5430
#> [10] -0.9141
#>
#> [[3]]
#> [1] 10.47 10.36 8.70 10.74 11.89 9.90 9.06 9.98 9.17 8.49
#>
#> [[4]]
#> [1] 100.9 100.2 100.2 101.6 100.1 99.9 98.1 99.7 99.7 101.1
```
However, we don’t need a for loop for this since `rnorm()` recycle the `mean` argument.
```
matrix(rnorm(n * length(mu), mean = mu), ncol = n)
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] -9.930 -9.56 -9.88 -10.2061 -12.27 -8.926 -11.178 -9.51 -8.663 -9.39
#> [2,] -0.639 2.76 -1.91 0.0192 2.68 -0.665 -0.976 -1.70 0.237 -0.11
#> [3,] 9.950 10.05 10.86 10.0296 9.64 11.114 11.065 8.53 11.318 10.17
#> [4,] 99.749 100.58 99.76 100.5498 100.21 99.754 100.132 100.28 100.524 99.91
```
### Exercise 21\.2\.2
Eliminate the for loop in each of the following examples by taking advantage of an existing function that works with vectors:
```
out <- ""
for (x in letters) {
out <- str_c(out, x)
}
out
#> [1] "abcdefghijklmnopqrstuvwxyz"
```
Since `str_c()` already works with vectors, use `str_c()` with the `collapse` argument to return a single string.
```
str_c(letters, collapse = "")
#> [1] "abcdefghijklmnopqrstuvwxyz"
```
For this I’m going to rename the variable `sd` to something different because `sd` is the name of the function we want to use.
```
x <- sample(100)
sd. <- 0
for (i in seq_along(x)) {
sd. <- sd. + (x[i] - mean(x))^2
}
sd. <- sqrt(sd. / (length(x) - 1))
sd.
#> [1] 29
```
We could simply use the `sd` function.
```
sd(x)
#> [1] 29
```
Or if there was a need to use the equation (e.g. for pedagogical reasons), then
the functions `mean()` and `sum()` already work with vectors:
```
sqrt(sum((x - mean(x))^2) / (length(x) - 1))
#> [1] 29
```
```
x <- runif(100)
out <- vector("numeric", length(x))
out[1] <- x[1]
for (i in 2:length(x)) {
out[i] <- out[i - 1] + x[i]
}
out
#> [1] 0.854 1.268 2.019 2.738 3.253 4.228 4.589 4.759 5.542 5.573
#> [11] 6.363 6.529 6.558 7.344 8.169 9.134 9.513 9.687 10.291 11.097
#> [21] 11.133 11.866 12.082 12.098 12.226 12.912 13.554 13.882 14.269 14.976
#> [31] 15.674 16.600 17.059 17.655 17.820 18.387 19.285 19.879 20.711 21.304
#> [41] 22.083 22.481 23.331 24.073 24.391 24.502 24.603 25.403 25.783 25.836
#> [51] 26.823 27.427 27.576 28.114 28.240 29.203 29.250 29.412 30.348 31.319
#> [61] 32.029 32.914 33.891 33.926 34.365 35.009 36.004 36.319 37.175 37.715
#> [71] 38.588 39.104 39.973 40.830 41.176 41.176 41.381 42.326 42.607 43.488
#> [81] 44.449 44.454 45.006 45.226 45.872 46.600 47.473 47.855 48.747 49.591
#> [91] 50.321 50.359 50.693 51.443 52.356 52.560 53.032 53.417 53.810 54.028
```
The code above is calculating a cumulative sum. Use the function `cumsum()`
```
all.equal(cumsum(x), out)
#> [1] TRUE
```
### Exercise 21\.2\.3
Combine your function writing and for loop skills:
1. Write a for loop that `prints()` the lyrics to the children’s song “Alice the camel”.
2. Convert the nursery rhyme “ten in the bed” to a function.
Generalize it to any number of people in any sleeping structure.
3. Convert the song “99 bottles of beer on the wall” to a function.
Generalize to any number of any vessel containing any liquid on surface.
The answers to each part follow.
1. The lyrics for [Alice the Camel](https://www.kididdles.com/lyrics/a012.html) are:
> Alice the camel has five humps.
>
> Alice the camel has five humps.
>
> Alice the camel has five humps.
>
> So go, Alice, go.
This verse is repeated, each time with one fewer hump,
until there are no humps.
The last verse, with no humps, is:
> Alice the camel has no humps.
>
> Alice the camel has no humps.
>
> Alice the camel has no humps.
>
> Now Alice is a horse.
We’ll iterate from five to no humps, and print out a different last line if there are no humps.
```
humps <- c("five", "four", "three", "two", "one", "no")
for (i in humps) {
cat(str_c("Alice the camel has ", rep(i, 3), " humps.",
collapse = "\n"
), "\n")
if (i == "no") {
cat("Now Alice is a horse.\n")
} else {
cat("So go, Alice, go.\n")
}
cat("\n")
}
#> Alice the camel has five humps.
#> Alice the camel has five humps.
#> Alice the camel has five humps.
#> So go, Alice, go.
#>
#> Alice the camel has four humps.
#> Alice the camel has four humps.
#> Alice the camel has four humps.
#> So go, Alice, go.
#>
#> Alice the camel has three humps.
#> Alice the camel has three humps.
#> Alice the camel has three humps.
#> So go, Alice, go.
#>
#> Alice the camel has two humps.
#> Alice the camel has two humps.
#> Alice the camel has two humps.
#> So go, Alice, go.
#>
#> Alice the camel has one humps.
#> Alice the camel has one humps.
#> Alice the camel has one humps.
#> So go, Alice, go.
#>
#> Alice the camel has no humps.
#> Alice the camel has no humps.
#> Alice the camel has no humps.
#> Now Alice is a horse.
```
2. The lyrics for [Ten in the Bed](https://www.kididdles.com/lyrics/t003.html) are:
> Here we go!
>
> There were ten in the bed
>
> and the little one said,
>
> “Roll over, roll over.”
>
> So they all rolled over and one fell out.
This verse is repeated, each time with one fewer in the bed, until there is one left.
That last verse is:
> One!
> There was one in the bed
>
> and the little one said,
>
> “I’m lonely…”
```
numbers <- c(
"ten", "nine", "eight", "seven", "six", "five",
"four", "three", "two", "one"
)
for (i in numbers) {
cat(str_c("There were ", i, " in the bed\n"))
cat("and the little one said\n")
if (i == "one") {
cat("I'm lonely...")
} else {
cat("Roll over, roll over\n")
cat("So they all rolled over and one fell out.\n")
}
cat("\n")
}
#> There were ten in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were nine in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were eight in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were seven in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were six in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were five in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were four in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were three in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were two in the bed
#> and the little one said
#> Roll over, roll over
#> So they all rolled over and one fell out.
#>
#> There were one in the bed
#> and the little one said
#> I'm lonely...
```
3. The lyrics of [Ninety\-Nine Bottles of Beer on the Wall](https://en.wikipedia.org/wiki/99_Bottles_of_Beer) are
> 99 bottles of beer on the wall, 99 bottles of beer.
>
> Take one down, pass it around, 98 bottles of beer on the wall
This verse is repeated, each time with one fewer bottle, until
there are no more bottles of beer. The last verse is
> No more bottles of beer on the wall, no more bottles of beer.
>
> We’ve taken them down and passed them around; now we’re drunk and passed out!
For the bottles of beer, I define a helper function to correctly print the number of bottles; a generalization to other vessels, liquids, and surfaces is sketched after this list.
```
bottles <- function(n) {
if (n > 1) {
str_c(n, " bottles")
} else if (n == 1) {
"1 bottle"
} else {
"no more bottles"
}
}
beer_bottles <- function(total_bottles) {
# print each lyric
for (current_bottles in seq(total_bottles, 0)) {
# first line
cat(str_to_sentence(str_c(
  bottles(current_bottles), " of beer on the wall, ",
  bottles(current_bottles), " of beer.\n"
)))
# second line
if (current_bottles > 0) {
cat(str_c(
"Take one down and pass it around, ", bottles(current_bottles - 1),
" of beer on the wall.\n"
))
} else {
cat(str_c(
  "Go to the store and buy some more, ", bottles(total_bottles),
  " of beer on the wall.\n"
))
}
cat("\n")
}
}
beer_bottles(3)
#> 3 Bottles of beer on the wall, 3 bottles of beer.
#> Take one down and pass it around, 2 bottles of beer on the wall.
#>
#> 2 Bottles of beer on the wall, 2 bottles of beer.
#> Take one down and pass it around, 1 bottle of beer on the wall.
#>
#> 1 Bottle of beer on the wall, 1 bottle of beer.
#> Take one down and pass it around, no more bottles of beer on the wall.
#>
#> No more bottles of beer on the wall, no more bottles of beer.
#> Go to the store and buy some more, 3 bottles of beer on the wall.
```
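Two refinements, sketched here as my own additions rather than part of the original answers. First, the Alice output above prints “one humps”; a small helper (the name `hump_word()` is mine) fixes the plural:
```
humps <- c("five", "four", "three", "two", "one", "no")
# hypothetical helper: singular "hump" only for the "one" verse
hump_word <- function(n) if (n == "one") "hump" else "humps"
for (i in humps) {
  cat(str_c("Alice the camel has ", rep(i, 3), " ", hump_word(i), ".",
    collapse = "\n"
  ), "\n")
  if (i == "no") {
    cat("Now Alice is a horse.\n")
  } else {
    cat("So go, Alice, go.\n")
  }
  cat("\n")
}
```
Second, the generalization of the beer song to any vessel, liquid, and surface. The argument names and the naive singularization with `str_remove()` (dropping a trailing “s”) are my own choices:
```
beer_bottles2 <- function(total, vessel = "bottles", liquid = "beer",
                          surface = "wall") {
  # count the vessels, dropping a trailing "s" for the singular
  # (a naive rule that happens to work for "bottles", "cans", ...)
  vessels <- function(n) {
    if (n > 1) {
      str_c(n, " ", vessel)
    } else if (n == 1) {
      str_c("1 ", str_remove(vessel, "s$"))
    } else {
      str_c("no more ", vessel)
    }
  }
  for (n in seq(total, 0)) {
    cat(str_to_sentence(str_c(
      vessels(n), " of ", liquid, " on the ", surface, ", ",
      vessels(n), " of ", liquid, ".\n"
    )))
    if (n > 0) {
      cat(str_c(
        "Take one down and pass it around, ", vessels(n - 1),
        " of ", liquid, " on the ", surface, ".\n"
      ))
    } else {
      cat(str_c(
        "Go to the store and buy some more, ", vessels(total),
        " of ", liquid, " on the ", surface, ".\n"
      ))
    }
    cat("\n")
  }
}
beer_bottles2(2, vessel = "cans", liquid = "soda", surface = "fence")
```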
### Exercise 21\.2\.4
It’s common to see for loops that don’t preallocate the output and instead increase the length of a vector at each step:
```
output <- vector("integer", 0)
for (i in seq_along(x)) {
output <- c(output, lengths(x[[i]]))
}
output
```
How does this affect performance?
Design and execute an experiment.
In order to compare these two approaches, I’ll define two functions:
`add_to_vector` will append to a vector, like the example in the question,
and `add_to_vector_2`, which pre\-allocates a vector.
```
add_to_vector <- function(n) {
output <- vector("integer", 0)
for (i in seq_len(n)) {
output <- c(output, i)
}
output
}
```
```
add_to_vector_2 <- function(n) {
output <- vector("integer", n)
for (i in seq_len(n)) {
output[[i]] <- i
}
output
}
```
I’ll use the package microbenchmark to run these functions several times and compare the time it takes.
The package microbenchmark contains utilities for benchmarking R expressions.
In particular, the `microbenchmark()` function will run an R expression a number of times and time it.
```
timings <- microbenchmark(add_to_vector(10000), add_to_vector_2(10000), times = 10)
timings
#> Unit: microseconds
#> expr min lq mean median uq max neval
#> add_to_vector(10000) 111658 113151 119034 117233 120429 143037 10
#> add_to_vector_2(10000) 337 348 1400 360 486 6264 10
```
In this example, appending to a vector takes roughly 325 times longer (comparing medians) than pre\-allocating the vector.
The gap arises because `c(output, i)` copies the entire vector on every iteration, so the total work grows quadratically with the length, while pre\-allocation writes each element in place.
You may get different timings, but the longer the vector and the larger the objects, the more that pre\-allocation will outperform appending.
21\.3 For loop variations
-------------------------
### Exercise 21\.3\.1
Imagine you have a directory full of CSV files that you want to read in.
You have their paths in a vector,
`files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)`, and now
want to read each one with `read_csv()`. Write the for loop that will
load them into a single data frame.
```
files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)
files
#> [1] "data//file1.csv" "data//file2.csv" "data//file3.csv"
```
Since the number of files is known, pre\-allocate a list with a length equal to the number of files.
```
df_list <- vector("list", length(files))
```
Then, read each file into a data frame, and assign it to an element in that list.
The result is a list of data frames.
```
for (i in seq_along(files)) {
df_list[[i]] <- read_csv(files[[i]])
}
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
```
```
print(df_list)
#> [[1]]
#> # A tibble: 2 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 1 a
#> 2 2 b
#>
#> [[2]]
#> # A tibble: 2 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 3 c
#> 2 4 d
#>
#> [[3]]
#> # A tibble: 2 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 5 e
#> 2 6 f
```
Finally, use `bind_rows()` to combine the list of data frames into a single data frame.
```
df <- bind_rows(df_list)
```
```
print(df)
#> # A tibble: 6 x 2
#> X1 X2
#> <dbl> <chr>
#> 1 1 a
#> 2 2 b
#> 3 3 c
#> 4 4 d
#> 5 5 e
#> 6 6 f
```
Alternatively, I could have pre\-allocated a list with the names of the files.
```
df2_list <- vector("list", length(files))
names(df2_list) <- files
for (fname in files) {
df2_list[[fname]] <- read_csv(fname)
}
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
#> Parsed with column specification:
#> cols(
#> X1 = col_double(),
#> X2 = col_character()
#> )
df2 <- bind_rows(df2_list)
```
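Looking ahead to the map functions later in this chapter, the loop and `bind_rows()` can be combined into one call; a sketch, assuming the same `files` vector:
```
df3 <- map_dfr(files, read_csv)
```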
### Exercise 21\.3\.2
What happens if you use `for (nm in names(x))` and `x` has no names?
What if only some of the elements are named?
What if the names are not unique?
Let’s try it out and see what happens.
When there are no names for the vector, it does not run the code in the loop.
In other words, it runs zero iterations of the loop.
```
x <- c(11, 12, 13)
print(names(x))
#> NULL
for (nm in names(x)) {
print(nm)
print(x[[nm]])
}
```
Note that the length of `NULL` is zero:
```
length(NULL)
#> [1] 0
```
If only some of the elements are named, then we get an error when trying to access an element without a name.
```
x <- c(a = 11, 12, c = 13)
names(x)
#> [1] "a" "" "c"
```
```
for (nm in names(x)) {
print(nm)
print(x[[nm]])
}
#> [1] "a"
#> [1] 11
#> [1] ""
#> Error in x[[nm]]: subscript out of bounds
```
Finally, if the vector contains duplicate names, then `x[[nm]]` returns the *first* element with that name.
```
x <- c(a = 11, a = 12, c = 13)
names(x)
#> [1] "a" "a" "c"
```
```
for (nm in names(x)) {
print(nm)
print(x[[nm]])
}
#> [1] "a"
#> [1] 11
#> [1] "a"
#> [1] 11
#> [1] "c"
#> [1] 13
```
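One defensive pattern for partially named vectors (my own sketch, not part of the original answer) is to loop over positions and look names up by index:
```
x <- c(a = 11, 12, c = 13)
for (i in seq_along(x)) {
  nm <- names(x)[i]
  # names(x)[i] is NULL for an unnamed vector and "" for an unnamed element
  if (is.null(nm) || nm == "") {
    cat("element", i, "is unnamed; value:", x[[i]], "\n")
  } else {
    cat(nm, "=", x[[i]], "\n")
  }
}
```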
### Exercise 21\.3\.3
Write a function that prints the mean of each numeric column in a data frame, along with its name.
For example, `show_mean(iris)` would print:
```
show_mean(iris)
# > Sepal.Length: 5.84
# > Sepal.Width: 3.06
# > Petal.Length: 3.76
# > Petal.Width: 1.20
```
Extra challenge: what function did I use to make sure that the numbers lined up nicely, even though the variable names had different lengths?
There may be other functions to do this, but I’ll use `str_pad()` and `str_length()` to ensure that the space given to the variable names is the same.
I messed around with the options to `format()` until I got two digits.
```
show_mean <- function(df, digits = 2) {
# Get max length of all variable names in the dataset
maxstr <- max(str_length(names(df)))
for (nm in names(df)) {
if (is.numeric(df[[nm]])) {
cat(
str_c(str_pad(str_c(nm, ":"), maxstr + 1L, side = "right"),
format(mean(df[[nm]]), digits = digits, nsmall = digits),
sep = " "
),
"\n"
)
}
}
}
show_mean(iris)
#> Sepal.Length: 5.84
#> Sepal.Width: 3.06
#> Petal.Length: 3.76
#> Petal.Width: 1.20
```
### Exercise 21\.3\.4
What does this code do?
How does it work?
```
trans <- list(
disp = function(x) x * 0.0163871,
am = function(x) {
factor(x, labels = c("auto", "manual"))
}
)
```
```
for (var in names(trans)) {
mtcars[[var]] <- trans[[var]](mtcars[[var]])
}
```
This code mutates the `disp` and `am` columns:
* `disp` is multiplied by 0\.0163871
* `am` is replaced by a factor variable.
The code works by looping over a named list of functions.
It calls the named function in the list on the column of `mtcars` with the same name, and replaces the values of that column.
This is a function.
```
trans[["disp"]]
```
This call applies that function to the column of `mtcars` with the same name:
```
trans[["disp"]](mtcars[["disp"]])
```
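The same loop can also be written with purrr’s `map2()`; a sketch of my own, assuming a fresh (untransformed) copy of `mtcars`:
```
# call each function in trans on the mtcars column of the same name
mtcars[names(trans)] <- map2(trans, mtcars[names(trans)], ~ .x(.y))
```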
21\.4 For loops vs. functionals
-------------------------------
### Exercise 21\.4\.1
Read the documentation for `apply()`.
In the 2nd case, what two for\-loops does it generalize?
For a two\-dimensional object, such as a matrix or data frame, `apply()` replaces looping over its rows or columns.
The `apply()` function is used like `apply(X, MARGIN, FUN, ...)`, where `X` is a matrix or array, `FUN` is a function to apply, and `...` are additional arguments passed to `FUN`.
When `MARGIN = 1`, then the function is applied to each row.
For example, the following example calculates the row means of a matrix.
```
X <- matrix(rnorm(15), nrow = 5)
X
#> [,1] [,2] [,3]
#> [1,] -1.4523 0.124 0.709
#> [2,] 0.9412 -0.998 -1.529
#> [3,] -0.3389 1.233 0.237
#> [4,] -0.0756 0.340 -1.313
#> [5,] 0.0402 -0.473 0.747
```
```
apply(X, 1, mean)
#> [1] -0.206 -0.529 0.377 -0.349 0.105
```
That is equivalent to this for\-loop.
```
X_row_means <- vector("numeric", length = nrow(X))
for (i in seq_len(nrow(X))) {
X_row_means[[i]] <- mean(X[i, ])
}
X_row_means
#> [1] -0.206 -0.529 0.377 -0.349 0.105
```
```
X <- matrix(rnorm(15), nrow = 5)
X
#> [,1] [,2] [,3]
#> [1,] -1.5625 1.153 1.20377
#> [2,] 0.0711 -1.687 -1.43127
#> [3,] -0.6395 -0.903 1.38291
#> [4,] -0.8452 1.318 0.00313
#> [5,] 0.6752 1.100 -0.07789
```
When `MARGIN = 2`, `apply()` is equivalent to a for\-loop looping over columns.
```
apply(X, 2, mean)
#> [1] -0.460 0.196 0.216
```
```
X_col_means <- vector("numeric", length = ncol(X))
for (i in seq_len(ncol(X))) {
X_col_means[[i]] <- mean(X[, i])
}
X_col_means
#> [1] -0.460 0.196 0.216
```
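For these two particular loops, base R also provides the optimized shortcuts `rowMeans()` and `colMeans()`:
```
rowMeans(X)
colMeans(X)
```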
### Exercise 21\.4\.2
Adapt `col_summary()` so that it only applies to numeric columns.
You might want to start with an `is_numeric()` function that returns a logical vector that has a `TRUE` corresponding to each numeric column.
The original `col_summary()` function is
```
col_summary <- function(df, fun) {
out <- vector("double", length(df))
for (i in seq_along(df)) {
out[i] <- fun(df[[i]])
}
out
}
```
The adapted version adds extra logic to only apply the function to numeric columns.
```
col_summary2 <- function(df, fun) {
# create an empty vector which will store whether each
# column is numeric
numeric_cols <- vector("logical", length(df))
# test whether each column is numeric
for (i in seq_along(df)) {
numeric_cols[[i]] <- is.numeric(df[[i]])
}
# find the indexes of the numeric columns
idxs <- which(numeric_cols)
# find the number of numeric columns
n <- sum(numeric_cols)
# create a vector to hold the results
out <- vector("double", n)
# apply the function only to numeric vectors
for (i in seq_along(idxs)) {
out[[i]] <- fun(df[[idxs[[i]]]])
}
# name the vector
names(out) <- names(df)[idxs]
out
}
```
Let’s test that `col_summary2()` works by creating a small data frame with
some numeric and non\-numeric columns.
```
df <- tibble(
X1 = c(1, 2, 3),
X2 = c("A", "B", "C"),
X3 = c(0, -1, 5),
X4 = c(TRUE, FALSE, TRUE)
)
col_summary2(df, mean)
#> X1 X3
#> 2.00 1.33
```
As expected, it only calculates the mean of the numeric columns, `X1` and `X3`.
Let’s test that it works with another function.
```
col_summary2(df, median)
#> X1 X3
#> 2 0
```
21\.5 The map functions
-----------------------
### Exercise 21\.5\.1
Write code that uses one of the map functions to:
1. Compute the mean of every column in `mtcars`.
2. Determine the type of each column in `nycflights13::flights`.
3. Compute the number of unique values in each column of `iris`.
4. Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\).
1. To calculate the mean of every column in `mtcars`, apply the function
`mean()` to each column, and use `map_dbl`, since the results are numeric.
```
map_dbl(mtcars, mean)
#> mpg cyl disp hp drat wt qsec vs am gear
#> 20.091 6.188 230.722 146.688 3.597 3.217 17.849 0.438 0.406 3.688
#> carb
#> 2.812
```
2. To calculate the type of every column in `nycflights13::flights` apply
the function `typeof()`, discussed in the section on [Vector basics](https://r4ds.had.co.nz/vectors.html#vector-basics),
and use `map_chr()`, since the results are character.
```
map_chr(nycflights13::flights, typeof)
#> year month day dep_time sched_dep_time
#> "integer" "integer" "integer" "integer" "integer"
#> dep_delay arr_time sched_arr_time arr_delay carrier
#> "double" "integer" "integer" "double" "character"
#> flight tailnum origin dest air_time
#> "integer" "character" "character" "character" "double"
#> distance hour minute time_hour
#> "double" "double" "double" "double"
```
3. The function `n_distinct()` calculates the number of unique values
in a vector.
```
map_int(iris, n_distinct)
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
The `map_int()` function is used since `n_distinct()` returns an integer.
However, the `map_dbl()` function will also work.
```
map_dbl(iris, n_distinct)
```
An alternative to the `n_distinct()` function is the expression `length(unique(...))`.
The `n_distinct()` function is more concise and faster, but `length(unique(...))` provides an example of using anonymous functions with map functions.
An anonymous function can be written using the standard R syntax for a function:
```
map_int(iris, function(x) length(unique(x)))
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
Additionally, map functions accept one\-sided formulas as a more concise alternative to specify an anonymous function:
```
map_int(iris, ~length(unique(.x)))
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 35 23 43 22 3
```
In this case, the anonymous function accepts one argument, which is referenced by `.x` in the expression `length(unique(.x))`.
4. To generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\):
The result is a list of numeric vectors.
```
map(c(-10, 0, 10, 100), ~rnorm(n = 10, mean = .))
#> [[1]]
#> [1] -9.56 -9.87 -10.83 -10.50 -11.19 -10.75 -8.54 -10.83 -9.71 -10.48
#>
#> [[2]]
#> [1] -0.6048 1.4601 0.1497 -1.4333 -0.0103 -0.2122 -0.9063 -2.1022 1.8934
#> [10] -0.9681
#>
#> [[3]]
#> [1] 9.90 10.24 10.06 7.82 9.88 10.11 10.01 11.88 12.16 10.71
#>
#> [[4]]
#> [1] 100.8 99.7 101.0 99.1 100.6 100.3 100.4 101.1 99.1 100.2
```
Since a single call of `rnorm()` returns a numeric vector with a length greater than one, we cannot use `map_dbl()`, which requires the function to return a numeric vector of length one (see [Exercise 21\.5\.4](iteration.html#exercise-21.5.4)).
The map functions pass any additional arguments to the function being called.
### Exercise 21\.5\.2
How can you create a single vector that for each column in a data frame indicates whether or not it’s a factor?
The function `is.factor()` indicates whether a vector is a factor.
```
is.factor(diamonds$color)
#> [1] TRUE
```
Checking all columns in a data frame is a job for a `map_*()` function.
Since the result of `is.factor()` is logical, we will use `map_lgl()` to apply `is.factor()` to the columns of the data frame.
```
map_lgl(diamonds, is.factor)
#> carat cut color clarity depth table price x y z
#> FALSE TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
```
### Exercise 21\.5\.3
What happens when you use the map functions on vectors that aren’t lists?
What does `map(1:5, runif)` do?
Why?
Map functions work with any vectors, not just lists.
As with lists, the map functions will apply the function to each element of the vector.
In the following examples, the inputs to `map()` are atomic vectors (logical, character, integer, double).
```
map(c(TRUE, FALSE, TRUE), ~ !.)
#> [[1]]
#> [1] FALSE
#>
#> [[2]]
#> [1] TRUE
#>
#> [[3]]
#> [1] FALSE
map(c("Hello", "World"), str_to_upper)
#> [[1]]
#> [1] "HELLO"
#>
#> [[2]]
#> [1] "WORLD"
map(1:5, ~ rnorm(.))
#> [[1]]
#> [1] 1.42
#>
#> [[2]]
#> [1] -0.384 -0.174
#>
#> [[3]]
#> [1] -0.222 -1.010 0.481
#>
#> [[4]]
#> [1] 1.604 -1.515 -1.416 0.877
#>
#> [[5]]
#> [1] 0.624 2.112 -0.356 -1.064 1.077
map(c(-0.5, 0, 1), ~ rnorm(1, mean = .))
#> [[1]]
#> [1] 0.682
#>
#> [[2]]
#> [1] 0.198
#>
#> [[3]]
#> [1] 0.6
```
It is important to be aware that while the input of `map()` can be any vector, the output is always a list.
```
map(1:5, runif)
#> [[1]]
#> [1] 0.731
#>
#> [[2]]
#> [1] 0.852 0.976
#>
#> [[3]]
#> [1] 0.113 0.970 0.648
#>
#> [[4]]
#> [1] 0.0561 0.4731 0.2946 0.6103
#>
#> [[5]]
#> [1] 0.1211 0.6294 0.7120 0.6121 0.0344
```
This expression is equivalent to running the following.
```
list(
runif(1),
runif(2),
runif(3),
runif(4),
runif(5)
)
#> [[1]]
#> [1] 0.666
#>
#> [[2]]
#> [1] 0.653 0.452
#>
#> [[3]]
#> [1] 0.517 0.677 0.881
#>
#> [[4]]
#> [1] 0.731 0.399 0.431 0.145
#>
#> [[5]]
#> [1] 0.4511 0.5788 0.0704 0.7423 0.5492
```
The `map()` function loops through the numbers 1 to 5\.
For each value, it calls `runif()` with that number as the first argument, which is the number of samples to draw.
The result is a length five list with numeric vectors of sizes one through five, each with random samples from a uniform distribution.
Note that although input to `map()` was an integer vector, the return value was a list.
### Exercise 21\.5\.4
What does `map(-2:2, rnorm, n = 5)` do?
Why?
What does `map_dbl(-2:2, rnorm, n = 5)` do?
Why?
Consider the first expression.
```
map(-2:2, rnorm, n = 5)
#> [[1]]
#> [1] -1.656 -0.522 -1.928 0.126 -3.476
#>
#> [[2]]
#> [1] -0.5921 0.3940 -0.6397 -0.3454 0.0522
#>
#> [[3]]
#> [1] -1.980 1.208 -0.169 0.295 1.266
#>
#> [[4]]
#> [1] -0.135 -0.131 1.110 1.853 0.766
#>
#> [[5]]
#> [1] 4.087 1.889 0.607 0.858 3.705
```
This expression takes samples of size five from five normal distributions, with means of (\-2, \-1, 0, 1, and 2\), but the same standard deviation (1\).
It returns a list in which each element is a numeric vector of length 5\.
However, if we instead use `map_dbl()`, the expression raises an error.
```
map_dbl(-2:2, rnorm, n = 5)
#> Error: Result 1 must be a single double, not a double vector of length 5
```
This is because the `map_dbl()` function requires the function it applies to each element to return a numeric vector of length one.
If the function returns either a non\-numeric vector or a numeric vector with a length greater than one, `map_dbl()` will raise an error.
The reason for this strictness is that `map_dbl()` guarantees that it will return a numeric vector of the *same length* as its input vector.
This concept applies to the other `map_*()` functions.
The function `map_chr()` requires that the function always return a *character* vector of length one;
`map_int()` requires that the function always return an *integer* vector of length one;
`map_lgl()` requires that the function always return a *logical* vector of length one.
Use the `map()` function if the function will return values of varying types or lengths.
To return a numeric vector, use `flatten_dbl()` to coerce the list returned by `map()` to a numeric vector.
```
map(-2:2, rnorm, n = 5) %>%
flatten_dbl()
#> [1] -2.145 -1.474 -0.266 -0.551 -0.482 -1.384 0.827 -1.551 -1.866 -1.344
#> [11] 1.063 0.813 1.803 -0.105 0.982 -0.713 0.168 2.100 0.826 1.179
#> [21] 1.302 1.040 1.025 1.661 3.152
```
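Alternatively, `map_dbl()` works fine when each call returns a single value; for example, drawing one sample from each distribution:
```
map_dbl(-2:2, rnorm, n = 1)
```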
### Exercise 21\.5\.5
Rewrite `map(x, function(df) lm(mpg ~ wt, data = df))` to eliminate the anonymous function.
The code in this question does not run as given, so I will use the following code.
```
x <- split(mtcars, mtcars$cyl)
map(x, function(df) lm(mpg ~ wt, data = df))
#> $`4`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 39.57 -5.65
#>
#>
#> $`6`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 28.41 -2.78
#>
#>
#> $`8`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 23.87 -2.19
```
We can eliminate the use of an anonymous function using the `~` shortcut.
```
map(x, ~ lm(mpg ~ wt, data = .))
#> $`4`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = .)
#>
#> Coefficients:
#> (Intercept) wt
#> 39.57 -5.65
#>
#>
#> $`6`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = .)
#>
#> Coefficients:
#> (Intercept) wt
#> 28.41 -2.78
#>
#>
#> $`8`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = .)
#>
#> Coefficients:
#> (Intercept) wt
#> 23.87 -2.19
```
Though not the intent of this question, another way to eliminate an anonymous function is to create a named one.
```
run_reg <- function(df) {
lm(mpg ~ wt, data = df)
}
map(x, run_reg)
#> $`4`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 39.57 -5.65
#>
#>
#> $`6`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 28.41 -2.78
#>
#>
#> $`8`
#>
#> Call:
#> lm(formula = mpg ~ wt, data = df)
#>
#> Coefficients:
#> (Intercept) wt
#> 23.87 -2.19
```
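A third option (my own addition) exploits the fact that `map()` passes each element to the first unmatched argument of the function: by naming `formula`, each data frame in `x` is matched to `lm()`’s `data` argument.
```
map(x, lm, formula = mpg ~ wt)
```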
21\.6 Dealing with failure
--------------------------
No exercises
21\.7 Mapping over multiple arguments
-------------------------------------
No exercises
21\.8 Walk
----------
No exercises
21\.9 Other patterns of for loops
---------------------------------
### Exercise 21\.9\.1
Implement your own version of `every()` using a for loop.
Compare it with `purrr::every()`.
What does purrr’s version do that your version doesn’t?
```
# Use ... to pass arguments to the function
every2 <- function(.x, .p, ...) {
for (i in .x) {
if (!.p(i, ...)) {
# if any element is FALSE, we know not all of them were TRUE
return(FALSE)
}
}
# if nothing was FALSE, then it is TRUE
TRUE
}
every2(1:3, function(x) {
x > 1
})
#> [1] FALSE
every2(1:3, function(x) {
x > 0
})
#> [1] TRUE
```
The function `purrr::every()` is more flexible in how the predicate `.p` can be specified: like other purrr functions, it accepts one\-sided formulas and extractor shorthands (a name or position) in addition to plain functions.
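For example, the predicate can be supplied as a one\-sided formula:
```
every(1:3, ~ . > 0)
#> [1] TRUE
```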
### Exercise 21\.9\.2
Create an enhanced `col_summary()` that applies a summary function to every numeric column in a data frame.
I will use `map()` to apply the function to all the columns and `keep()` to select only the numeric columns.
```
col_sum2 <- function(df, f, ...) {
map(keep(df, is.numeric), f, ...)
}
```
```
col_sum2(iris, mean)
#> $Sepal.Length
#> [1] 5.84
#>
#> $Sepal.Width
#> [1] 3.06
#>
#> $Petal.Length
#> [1] 3.76
#>
#> $Petal.Width
#> [1] 1.2
```
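To return a named numeric vector instead of a list, `map_dbl()` can be swapped in (a sketch; the name `col_sum2_dbl()` is my own):
```
col_sum2_dbl <- function(df, f, ...) {
  map_dbl(keep(df, is.numeric), f, ...)
}
col_sum2_dbl(iris, mean)
```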
### Exercise 21\.9\.3
A possible base R equivalent of `col_summary()` is:
```
col_sum3 <- function(df, f) {
is_num <- sapply(df, is.numeric)
df_num <- df[, is_num]
sapply(df_num, f)
}
```
But it has a number of bugs as illustrated with the following inputs:
```
df <- tibble(
x = 1:3,
y = 3:1,
z = c("a", "b", "c")
)
# OK
col_sum3(df, mean)
# Has problems: don't always return numeric vector
col_sum3(df[1:2], mean)
col_sum3(df[1], mean)
col_sum3(df[0], mean)
```
What causes these bugs?
The cause of these bugs is the behavior of `sapply()`.
The `sapply()` function does not guarantee the type of vector it returns, and will return different types of vectors depending on its inputs.
If no columns are selected, instead of returning an empty numeric vector, it returns an empty list.
This causes an error since we can’t use a list with `[`.
```
sapply(df[0], is.numeric)
#> named list()
```
```
sapply(df[1], is.numeric)
#>    x
#> TRUE
```
```
sapply(df[1:2], is.numeric)
#>    x    y
#> TRUE TRUE
```
The `sapply()` function tries to be helpful by simplifying the results, but this behavior can be counterproductive.
It is okay to use the `sapply()` function interactively, but avoid programming with it.
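A more robust base R alternative is `vapply()`, which requires the output type to be declared and raises an error rather than silently changing its return type; a minimal sketch (the name `col_sum4()` is my own):
```
col_sum4 <- function(df, f) {
  # vapply() always returns a vector of the declared type
  is_num <- vapply(df, is.numeric, logical(1))
  vapply(df[is_num], f, numeric(1))
}
# returns numeric(0) instead of erroring when no columns are selected
col_sum4(df[0], mean)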
23 Model basics
===============
23\.1 Introduction
------------------
```
library("tidyverse")
library("modelr")
```
The option `na.action` determines how missing values are handled.
It is a function.
Setting it to `na.warn` (from modelr) issues a warning if there are any missing values.
If it is not set (the default), R will silently drop them.
```
options(na.action = na.warn)
```
23\.2 A simple model
--------------------
### Exercise 23\.2\.1
One downside of the linear model is that it is sensitive to unusual values because the distance incorporates a squared term. Fit a linear model to the simulated data below, and visualize the results. Rerun a few times to generate different simulated datasets. What do you notice about the model?
```
sim1a <- tibble(
x = rep(1:10, each = 3),
y = x * 1.5 + 6 + rt(length(x), df = 2)
)
```
Let’s run it once and plot the results:
```
ggplot(sim1a, aes(x = x, y = y)) +
geom_point() +
geom_smooth(method = "lm", se = FALSE)
#> `geom_smooth()` using formula 'y ~ x'
```
We can also do this more systematically by generating several simulations
and plotting the line.
```
simt <- function(i) {
tibble(
x = rep(1:10, each = 3),
y = x * 1.5 + 6 + rt(length(x), df = 2),
.id = i
)
}
sims <- map_df(1:12, simt)
ggplot(sims, aes(x = x, y = y)) +
geom_point() +
geom_smooth(method = "lm", colour = "red") +
facet_wrap(~.id, ncol = 4)
#> `geom_smooth()` using formula 'y ~ x'
```
What if we did the same things with normal distributions?
```
sim_norm <- function(i) {
tibble(
x = rep(1:10, each = 3),
y = x * 1.5 + 6 + rnorm(length(x)),
.id = i
)
}
simdf_norm <- map_df(1:12, sim_norm)
ggplot(simdf_norm, aes(x = x, y = y)) +
geom_point() +
geom_smooth(method = "lm", colour = "red") +
facet_wrap(~.id, ncol = 4)
#> `geom_smooth()` using formula 'y ~ x'
```
There are not large outliers, and the slopes are more similar.
The reason for this is that the Student’s \\(t\\)\-distribution, from which we sample with `rt()`, has heavier tails than the normal distribution (`rnorm()`). This means that the Student’s \\(t\\)\-distribution
assigns a larger probability to values farther from the center of the distribution.
```
tibble(
x = seq(-5, 5, length.out = 100),
normal = dnorm(x),
student_t = dt(x, df = 2)
) %>%
pivot_longer(-x, names_to="distribution", values_to="density") %>%
ggplot(aes(x = x, y = density, colour = distribution)) +
geom_line()
```
For a normal distribution with mean zero and standard deviation one, the probability of being greater than 2 is,
```
pnorm(2, lower.tail = FALSE)
#> [1] 0.0228
```
For a Student’s \\(t\\) distribution with two degrees of freedom, it is about four times higher,
```
pt(2, df = 2, lower.tail = FALSE)
#> [1] 0.0918
```
### Exercise 23\.2\.2
One way to make linear models more robust is to use a different distance measure. For example, instead of root\-mean\-squared distance, you could use mean\-absolute distance:
```
measure_distance <- function(mod, data) {
diff <- data$y - make_prediction(mod, data)
mean(abs(diff))
}
```
For the above function to work, we need to define a function, `make_prediction()`, that
takes a numeric vector of length two (the intercept and slope) and returns the predictions,
```
make_prediction <- function(mod, data) {
mod[1] + mod[2] * data$x
}
```
Using the `sim1a` data, the best parameters of the least absolute deviation are:
```
best <- optim(c(0, 0), measure_distance, data = sim1a)
best$par
#> [1] 5.25 1.66
```
For comparison, the parameters that minimize the least squares objective function on the `sim1a` data are:
```
measure_distance_ls <- function(mod, data) {
diff <- data$y - (mod[1] + mod[2] * data$x)
sqrt(mean(diff^2))
}
best <- optim(c(0, 0), measure_distance_ls, data = sim1a)
best$par
#> [1] 5.87 1.56
```
In practice, I suggest not using `optim()` to fit this model, and instead using an existing implementation.
The `rlm()` and `lqs()` functions in the [MASS](https://CRAN.R-project.org/package=MASS) package fit robust and resistant linear models.
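For example (a sketch; not run in the original text):
```
MASS::rlm(y ~ x, data = sim1a)
```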
### Exercise 23\.2\.3
One challenge with performing numerical optimization is that it’s only guaranteed to find a local optimum. What’s the problem with optimizing a three parameter model like this?
```
model3 <- function(a, data) {
a[1] + data$x * a[2] + a[3]
}
```
The problem is that for any values `a[1] = a1` and `a[3] = a3`, any other values of `a[1]` and `a[3]` where `a[1] + a[3] == (a1 + a3)` will produce the same fit, because only the sum `a[1] + a[3]` is identified.
```
measure_distance_3 <- function(a, data) {
diff <- data$y - model3(a, data)
sqrt(mean(diff^2))
}
```
Depending on our starting points, we can find different optimal values:
```
best3a <- optim(c(0, 0, 0), measure_distance_3, data = sim1)
best3a$par
#> [1] 3.367 2.052 0.853
```
```
best3b <- optim(c(0, 0, 1), measure_distance_3, data = sim1)
best3b$par
#> [1] -3.47 2.05 7.69
```
```
best3c <- optim(c(0, 0, 5), measure_distance_3, data = sim1)
best3c$par
#> [1] -1.12 2.05 5.35
```
In fact there are an infinite number of optimal values for this model.
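We can check this directly: shifting weight between `a[1]` and `a[3]` leaves the distance unchanged. A sketch of my own, using `dplyr::near()` to allow for floating\-point error:
```
a <- best3a$par
a_shifted <- c(a[1] + 1, a[2], a[3] - 1)
near(measure_distance_3(a, sim1), measure_distance_3(a_shifted, sim1))
#> [1] TRUE
```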
23\.3 Visualising models
------------------------
### Exercise 23\.3\.1
Instead of using `lm()` to fit a straight line, you can use `loess()` to fit a smooth curve. Repeat the process of model fitting, grid generation, predictions, and visualization on `sim1` using `loess()` instead of `lm()`. How does the result compare to `geom_smooth()`?
I’ll use `add_predictions()` and `add_residuals()` to add the predictions and residuals from a loess regression to the `sim1` data.
```
sim1_loess <- loess(y ~ x, data = sim1)
sim1_lm <- lm(y ~ x, data = sim1)
grid_loess <- sim1 %>%
add_predictions(sim1_loess)
sim1 <- sim1 %>%
add_residuals(sim1_lm) %>%
add_predictions(sim1_lm) %>%
add_residuals(sim1_loess, var = "resid_loess") %>%
add_predictions(sim1_loess, var = "pred_loess")
```
This plots the loess predictions.
The loess produces a nonlinear, smooth line through the data.
```
plot_sim1_loess <-
ggplot(sim1, aes(x = x, y = y)) +
geom_point() +
geom_line(aes(x = x, y = pred), data = grid_loess, colour = "red")
plot_sim1_loess
```
The predictions of loess are the same as the default method for `geom_smooth()` because `geom_smooth()` uses `loess()` by default; the message even tells us that.
```
plot_sim1_loess +
geom_smooth(method = "loess", colour = "blue", se = FALSE, alpha = 0.20)
#> `geom_smooth()` using formula 'y ~ x'
```
We can plot the residuals (red), and compare them to the residuals from `lm()` (black).
In general, the loess model has smaller residuals within the sample (out of sample is a different issue, and we haven’t considered the uncertainty of these estimates).
```
ggplot(sim1, aes(x = x)) +
geom_ref_line(h = 0) +
geom_point(aes(y = resid)) +
geom_point(aes(y = resid_loess), colour = "red")
```
### Exercise 23\.3\.2
`add_predictions()` is paired with `gather_predictions()` and `spread_predictions()`.
How do these three functions differ?
The functions `gather_predictions()` and `spread_predictions()` allow for adding predictions from multiple models at once.
Taking the `sim1_mod` example,
```
sim1_mod <- lm(y ~ x, data = sim1)
grid <- sim1 %>%
data_grid(x)
```
The function `add_predictions()` adds only a single model at a time.
To add two models:
```
grid %>%
add_predictions(sim1_mod, var = "pred_lm") %>%
add_predictions(sim1_loess, var = "pred_loess")
#> # A tibble: 10 x 3
#> x pred_lm pred_loess
#> <int> <dbl> <dbl>
#> 1 1 6.27 5.34
#> 2 2 8.32 8.27
#> 3 3 10.4 10.8
#> 4 4 12.4 12.8
#> 5 5 14.5 14.6
#> 6 6 16.5 16.6
#> # … with 4 more rows
```
The function `gather_predictions()` adds predictions from multiple models by
stacking the results and adding a column with the model name,
```
grid %>%
gather_predictions(sim1_mod, sim1_loess)
#> # A tibble: 20 x 3
#> model x pred
#> <chr> <int> <dbl>
#> 1 sim1_mod 1 6.27
#> 2 sim1_mod 2 8.32
#> 3 sim1_mod 3 10.4
#> 4 sim1_mod 4 12.4
#> 5 sim1_mod 5 14.5
#> 6 sim1_mod 6 16.5
#> # … with 14 more rows
```
The function `spread_predictions()` adds predictions from multiple models by
adding multiple columns (postfixed with the model name) with predictions from each model.
```
grid %>%
spread_predictions(sim1_mod, sim1_loess)
#> # A tibble: 10 x 3
#> x sim1_mod sim1_loess
#> <int> <dbl> <dbl>
#> 1 1 6.27 5.34
#> 2 2 8.32 8.27
#> 3 3 10.4 10.8
#> 4 4 12.4 12.8
#> 5 5 14.5 14.6
#> 6 6 16.5 16.6
#> # … with 4 more rows
```
The function `spread_predictions()` is similar to the example which runs `add_predictions()` for each model, and is equivalent to running `spread()` after
running `gather_predictions()`:
```
grid %>%
gather_predictions(sim1_mod, sim1_loess) %>%
spread(model, pred)
#> # A tibble: 10 x 3
#> x sim1_loess sim1_mod
#> <int> <dbl> <dbl>
#> 1 1 5.34 6.27
#> 2 2 8.27 8.32
#> 3 3 10.8 10.4
#> 4 4 12.8 12.4
#> 5 5 14.6 14.5
#> 6 6 16.6 16.5
#> # … with 4 more rows
```
### Exercise 23\.3\.3
What does `geom_ref_line()` do? What package does it come from?
Why is displaying a reference line in plots showing residuals useful and important?
The geom `geom_ref_line()` adds a reference line to a plot.
It is equivalent to running `geom_hline()` or `geom_vline()` with default settings that are useful for visualizing models.
Putting a reference line at zero for residuals is important because good models (generally) should have residuals centered at zero, with approximately the same variance (or distribution) over the support of x, and no correlation.
A zero reference line makes it easier to judge these characteristics visually.
### Exercise 23\.3\.4
Why might you want to look at a frequency polygon of absolute residuals?
What are the pros and cons compared to looking at the raw residuals?
Showing the absolute values of the residuals makes it easier to view the spread of the residuals.
The model assumes that the residuals have mean zero, and using the absolute values of the residuals effectively doubles the number of residuals.
```
sim1_mod <- lm(y ~ x, data = sim1)
sim1 <- sim1 %>%
add_residuals(sim1_mod)
ggplot(sim1, aes(x = abs(resid))) +
geom_freqpoly(binwidth = 0.5)
```
However, using the absolute values of the residuals throws away information about the sign, so the frequency polygon cannot show whether the model systematically over\- or under\-predicts.
23\.4 Formulas and model families
---------------------------------
### Exercise 23\.4\.1
What happens if you repeat the analysis of `sim2` using a model without an intercept. What happens to the model equation?
What happens to the predictions?
To run a model without an intercept, add `- 1` or `+ 0` to the right\-hand side of the formula:
```
mod2a <- lm(y ~ x - 1, data = sim2)
```
```
mod2 <- lm(y ~ x, data = sim2)
```
The predictions are exactly the same in the models with and without an intercept:
```
grid <- sim2 %>%
data_grid(x) %>%
spread_predictions(mod2, mod2a)
grid
#> # A tibble: 4 x 3
#> x mod2 mod2a
#> <chr> <dbl> <dbl>
#> 1 a 1.15 1.15
#> 2 b 8.12 8.12
#> 3 c 6.13 6.13
#> 4 d 1.91 1.91
```
### Exercise 23\.4\.2
Use `model_matrix()` to explore the equations generated for the models I fit to `sim3` and `sim4`.
Why is `*` a good shorthand for interaction?
When `x2` is a categorical variable, `x1 * x2` produces the indicator variables `x2b`, `x2c`, and `x2d`, along with the
variables `x1:x2b`, `x1:x2c`, and `x1:x2d`, which are the products of `x1` and the `x2` indicator variables:
```
x3 <- model_matrix(y ~ x1 * x2, data = sim3)
x3
#> # A tibble: 120 x 8
#> `(Intercept)` x1 x2b x2c x2d `x1:x2b` `x1:x2c` `x1:x2d`
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1 0 0 0 0 0 0
#> 2 1 1 0 0 0 0 0 0
#> 3 1 1 0 0 0 0 0 0
#> 4 1 1 1 0 0 1 0 0
#> 5 1 1 1 0 0 1 0 0
#> 6 1 1 1 0 0 1 0 0
#> # … with 114 more rows
```
We can confirm that the variable `x1:x2b` is the product of `x1` and `x2b`,
```
all(x3[["x1:x2b"]] == (x3[["x1"]] * x3[["x2b"]]))
#> [1] TRUE
```
and similarly that `x1:x2c` is the product of `x1` and `x2c`, and `x1:x2d` the product of `x1` and `x2d`:
```
all(x3[["x1:x2c"]] == (x3[["x1"]] * x3[["x2c"]]))
#> [1] TRUE
all(x3[["x1:x2d"]] == (x3[["x1"]] * x3[["x2d"]]))
#> [1] TRUE
```
For `x1 * x2` where both `x1` and `x2` are continuous variables, `model_matrix()` creates variables
`x1`, `x2`, and `x1:x2`:
```
x4 <- model_matrix(y ~ x1 * x2, data = sim4)
x4
#> # A tibble: 300 x 4
#> `(Intercept)` x1 x2 `x1:x2`
#> <dbl> <dbl> <dbl> <dbl>
#> 1 1 -1 -1 1
#> 2 1 -1 -1 1
#> 3 1 -1 -1 1
#> 4 1 -1 -0.778 0.778
#> 5 1 -1 -0.778 0.778
#> 6 1 -1 -0.778 0.778
#> # … with 294 more rows
```
Confirm that `x1:x2` is the product of `x1` and `x2`:
```
all(x4[["x1"]] * x4[["x2"]] == x4[["x1:x2"]])
#> [1] TRUE
```
The asterisk `*` is good shorthand for an interaction since an interaction between `x1` and `x2` includes
terms for `x1`, `x2`, and the product of `x1` and `x2`.
### Exercise 23\.4\.3
Using the basic principles, convert the formulas in the following two models into functions.
(Hint: start by converting the categorical variable into 0\-1 variables.)
```
mod1 <- lm(y ~ x1 + x2, data = sim3)
mod2 <- lm(y ~ x1 * x2, data = sim3)
```
The problem is to convert the formulas in the models into functions.
I will assume that each function only handles converting the right\-hand side of the formula into a model matrix.
The functions will take one argument, a data frame with `x1` and `x2` columns,
and will return a data frame.
In other words, the functions will be special cases of the `model_matrix()` function.
Consider the right\-hand side of the first formula, `~ x1 + x2`.
In the `sim3` data frame, the column `x1` is an integer, and the variable `x2` is a factor with four levels.
```
levels(sim3$x2)
#> [1] "a" "b" "c" "d"
```
Since `x1` is numeric it is unchanged.
Since `x2` is a factor it is replaced with columns of indicator variables for all but one of its levels.
I will first consider the special case in which `x2` takes only the levels observed in `sim3`.
In this case, “a” is considered the reference level and omitted, and new columns are made for “b”, “c”, and “d”.
```
model_matrix_mod1 <- function(.data) {
mutate(.data,
x2b = as.numeric(x2 == "b"),
x2c = as.numeric(x2 == "c"),
x2d = as.numeric(x2 == "d"),
`(Intercept)` = 1
) %>%
select(`(Intercept)`, x1, x2b, x2c, x2d)
}
```
```
model_matrix_mod1(sim3)
#> # A tibble: 120 x 5
#> `(Intercept)` x1 x2b x2c x2d
#> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 1 1 0 0 0
#> 2 1 1 0 0 0
#> 3 1 1 0 0 0
#> 4 1 1 1 0 0
#> 5 1 1 1 0 0
#> 6 1 1 1 0 0
#> # … with 114 more rows
```
A more general function for `~ x1 + x2` would not hard\-code the specific levels in `x2`.
```
model_matrix_mod1b <- function(.data) {
# the levels of x2
lvls <- levels(.data$x2)
# drop the first level
# this assumes that there are at least two levels
lvls <- lvls[2:length(lvls)]
# create an indicator variable for each level of x2
for (lvl in lvls) {
# new column name x2 + level name
varname <- str_c("x2", lvl)
# add indicator variable for lvl
.data[[varname]] <- as.numeric(.data$x2 == lvl)
}
# generate the list of variables to keep
x2_variables <- str_c("x2", lvls)
# Add an intercept
.data[["(Intercept)"]] <- 1
# keep x1 and x2 indicator variables
select(.data, `(Intercept)`, x1, all_of(x2_variables))
}
```
```
model_matrix_mod1b(sim3)
#> # A tibble: 120 x 5
#> `(Intercept)` x1 x2b x2c x2d
#> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 1 1 0 0 0
#> 2 1 1 0 0 0
#> 3 1 1 0 0 0
#> 4 1 1 1 0 0
#> 5 1 1 1 0 0
#> 6 1 1 1 0 0
#> # … with 114 more rows
```
Consider the right\-hand side of the second formula, `~ x1 * x2`.
The output data frame will consist of `x1`, columns with indicator variables for each level (except the reference level) of `x2`,
and columns with the `x2` indicator variables multiplied by `x1`.
As with the previous formula, first I’ll write a function that hard\-codes the levels of `x2`.
```
model_matrix_mod2 <- function(.data) {
mutate(.data,
`(Intercept)` = 1,
x2b = as.numeric(x2 == "b"),
x2c = as.numeric(x2 == "c"),
x2d = as.numeric(x2 == "d"),
`x1:x2b` = x1 * x2b,
`x1:x2c` = x1 * x2c,
`x1:x2d` = x1 * x2d
) %>%
select(`(Intercept)`, x1, x2b, x2c, x2d, `x1:x2b`, `x1:x2c`, `x1:x2d`)
}
```
```
model_matrix_mod2(sim3)
#> # A tibble: 120 x 8
#> `(Intercept)` x1 x2b x2c x2d `x1:x2b` `x1:x2c` `x1:x2d`
#> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1 0 0 0 0 0 0
#> 2 1 1 0 0 0 0 0 0
#> 3 1 1 0 0 0 0 0 0
#> 4 1 1 1 0 0 1 0 0
#> 5 1 1 1 0 0 1 0 0
#> 6 1 1 1 0 0 1 0 0
#> # … with 114 more rows
```
For a more general function which will handle arbitrary levels in `x2`, I will
extend the `model_matrix_mod1b()` function that I wrote earlier.
```
model_matrix_mod2b <- function(.data) {
# get dataset with x1 and x2 indicator variables
out <- model_matrix_mod1b(.data)
# get names of the x2 indicator columns
x2cols <- str_subset(colnames(out), "^x2")
# create interactions between x1 and the x2 indicator columns
for (varname in x2cols) {
# name of the interaction variable
newvar <- str_c("x1:", varname)
out[[newvar]] <- out$x1 * out[[varname]]
}
out
}
```
```
model_matrix_mod2b(sim3)
#> # A tibble: 120 x 8
#> `(Intercept)` x1 x2b x2c x2d `x1:x2b` `x1:x2c` `x1:x2d`
#> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1 0 0 0 0 0 0
#> 2 1 1 0 0 0 0 0 0
#> 3 1 1 0 0 0 0 0 0
#> 4 1 1 1 0 0 1 0 0
#> 5 1 1 1 0 0 1 0 0
#> 6 1 1 1 0 0 1 0 0
#> # … with 114 more rows
```
These functions could be further generalized to allow `x1` and `x2` to
be either numeric or factors. However, if we generalized much further, we
would soon end up reimplementing the `model_matrix()` function itself.
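As a sanity check, the hand\-built matrix should agree with `model_matrix()`; a sketch (this assumes the columns come back in the same order):
```
# element-wise comparison of the hand-rolled matrix with modelr's version
all(model_matrix_mod2b(sim3) == model_matrix(y ~ x1 * x2, data = sim3))
```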
### Exercise 23\.4\.4
For `sim4`, which of `mod1` and `mod2` is better?
I think `mod2` does a slightly better job at removing patterns, but it’s pretty subtle.
Can you come up with a plot to support my claim?
Estimate models `mod1` and `mod2` on `sim4`,
```
mod1 <- lm(y ~ x1 + x2, data = sim4)
mod2 <- lm(y ~ x1 * x2, data = sim4)
```
and add the residuals from these models to the `sim4` data,
```
sim4_mods <- gather_residuals(sim4, mod1, mod2)
```
Frequency plots of both the residuals,
```
ggplot(sim4_mods, aes(x = resid, colour = model)) +
geom_freqpoly(binwidth = 0.5) +
geom_rug()
```
and the absolute values of the residuals,
```
ggplot(sim4_mods, aes(x = abs(resid), colour = model)) +
geom_freqpoly(binwidth = 0.5) +
geom_rug()
```
do not show much difference in the residuals between the models.
However, `mod2` appears to have fewer residuals in the tails of the distribution between 2\.5 and 5 (although the most extreme residuals come from `mod2`).
This is confirmed by checking the standard deviation of the residuals of these models,
```
sim4_mods %>%
group_by(model) %>%
summarise(resid = sd(resid))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 2 x 2
#> model resid
#> <chr> <dbl>
#> 1 mod1 2.10
#> 2 mod2 2.07
```
The standard deviation of the residuals of `mod2` is smaller than that of `mod1`.
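The residual standard errors of the fitted models, which adjust for the number of parameters, point the same way; a quick check:
```
# residual standard error of each model
sigma(mod1)
sigma(mod2)
```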
23\.5 Missing values
--------------------
No exercises
23\.6 Other model families
--------------------------
No exercises
24 Model building
=================
24\.1 Introduction
------------------
The splines package is needed for the `ns()` function used in one of the
solutions.
```
library("tidyverse")
library("modelr")
library("lubridate")
library("broom")
library("nycflights13")
library("splines")
```
```
options(na.action = na.warn)
```
24\.2 Why are low quality diamonds more expensive?
--------------------------------------------------
This code appears in the section and is necessary for the exercises.
```
diamonds2 <- diamonds %>%
filter(carat <= 2.5) %>%
mutate(
lprice = log2(price),
lcarat = log2(carat)
)
mod_diamond2 <- lm(lprice ~ lcarat + color + cut + clarity, data = diamonds2)
diamonds2 <- add_residuals(diamonds2, mod_diamond2, "lresid2")
```
### Exercise 24\.2\.1
In the plot of `lcarat` vs. `lprice`, there are some bright vertical strips.
What do they represent?
The bright vertical strips correspond to round or otherwise human\-friendly carat values (common fractions such as 0\.5, 0\.7, 1, 1\.5, and 2), where the distribution of diamonds is concentrated.
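A quick way to see this (a sketch) is to plot the distribution of `carat` directly; the spikes line up with the bright strips:
```
# spikes appear at human-friendly carat values such as 0.5, 0.7, 1, 1.5, 2
ggplot(diamonds2, aes(x = carat)) +
  geom_histogram(binwidth = 0.01)
```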
### Exercise 24\.2\.2
If `log(price) = a_0 + a_1 * log(carat)`, what does that say about the relationship between `price` and `carat`?
Following the examples in the chapter, I use a base\-2 logarithm.
```
mod_log <- lm(log2(price) ~ log2(carat), data = diamonds)
mod_log
#>
#> Call:
#> lm(formula = log2(price) ~ log2(carat), data = diamonds)
#>
#> Coefficients:
#> (Intercept) log2(carat)
#> 12.19 1.68
```
The estimated relationship between `carat` and `price` looks like this.
```
tibble(carat = seq(0.25, 5, by = 0.25)) %>%
add_predictions(mod_log) %>%
ggplot(aes(x = carat, y = 2^pred)) +
geom_line() +
labs(x = "carat", y = "price")
```
The plot shows that the estimated relationship between `carat` and `price` is not linear.
The exact relationship in this model is: if \(x\) increases by a factor of \(r\), then \(y\) increases by a factor of \(r^{a_1}\).
For example, a twofold increase in `carat` is associated with the following multiplicative increase in `price`:
```
2^coef(mod_log)[2]
#> log2(carat)
#> 3.2
```
Let’s confirm this relationship by checking it for a few values of the `carat` variable.
Let’s increase `carat` from 1 to 2\.
```
2^(predict(mod_log, newdata = tibble(carat = 2)) -
predict(mod_log, newdata = tibble(carat = 1)))
#> 1
#> 3.2
```
Note that, since `predict()` returns predictions of `log2(price)` rather than `price`, 2 is raised to the power of the difference to convert it back into a ratio of prices.
Now let’s increase `carat` from 2 to 4\.
```
2^(predict(mod_log, newdata = tibble(carat = 4)) -
predict(mod_log, newdata = tibble(carat = 2)))
#> 1
#> 3.2
```
Finally, let’s increase `carat` from 0\.5 to 1\.
```
2^(predict(mod_log, newdata = tibble(carat = 1)) -
predict(mod_log, newdata = tibble(carat = 0.5)))
#> 1
#> 3.2
```
All of these examples return the same value, \\(2 ^ {a\_1} \= 3\.2\\).
So why is this?
Let’s ignore the names of the variables in this case and consider the equation:
\[
\log_b y = a_0 + a_1 \log_b x
\]
We want to understand how the difference in \(y\) is related to the difference in \(x\).
Now, consider this equation at two different values \(x_0\) and \(x_1\),
\[
\log_b y_0 = a_0 + a_1 \log_b x_0 \\
\log_b y_1 = a_0 + a_1 \log_b x_1
\]
What is the value of the difference, \(\log_b y_1 - \log_b y_0\)?
\[
\begin{aligned}[t]
\log_b(y_1) - \log_b(y_0) &= (a_0 + a_1 \log_b x_1) - (a_0 + a_1 \log_b x_0), \\
&= a_1 (\log_b x_1 - \log_b x_0), \\
\log_b \left(\frac{y_1}{y_0} \right) &= \log_b \left(\frac{x_1}{x_0} \right)^{a_1}, \\
\frac{y_1}{y_0} &= \left( \frac{x_1}{x_0} \right)^{a_1} .
\end{aligned}
\]
Let \\(s \= y\_1 / y\_0\\) and \\(r \= x\_1 / x\_0\\). Then,
\\\[
s \= r^{a\_1} \\text{.}
\\]
In other words, an \\(r\\) times increase in \\(x\\), is associated with a \\(r^{a\_1}\\) times increase in \\(y\\).
Note that this relationship does not depend on the base of the logarithm, \\(b\\).
There is another approximation that is commonly used when logarithms appear in regressions.
It relies on the first\-order Taylor expansion of \(\log(1 + x)\) at \(x = 0\), which is accurate when \(x\) is small (\(x \approx 0\)):
\[
\log (1 + x) \approx x
\]
Now consider the relationship between the percent change in \\(x\\) and the percent change in \\(y\\),
\\\[
\\begin{aligned}\[t]
\\log (y \+ \\Delta y) \- \\log y \&\= (\\alpha \+ \\beta \\log (x \+ \\Delta x)) \- (\\alpha \+ \\beta \\log x) \\\\
\\log \\left(\\frac{y \+ \\Delta y}{y} \\right) \&\= \\beta \\log\\left( \\frac{x \+ \\Delta x}{x} \\right) \\\\
\\log \\left(1 \+ \\frac{\\Delta y}{y} \\right) \&\= \\beta \\log\\left( 1 \+ \\frac{\\Delta x}{x} \\right) \\\\
\\frac{\\Delta y}{y} \&\\approx \\beta \\left(\\frac{\\Delta x}{x} \\right)
\\end{aligned}
\\]
Thus a 1% percentage change in \\(x\\) is associated with a \\(\\beta\\) percent change in \\(y\\).
This relationship can also be derived by taking the derivative of \\(\\log y\\) with respect to \\(x\\).
First, rewrite the equation in terms of \\(y\\),
\\\[
y \= \\exp(a\_0 \+ a\_1 \\log(x))
\\]
Then differentiate \\(y\\) with respect to \\(x\\),
\\\[
\\begin{aligned}\[t]
dy \&\= \\exp(a\_0 \+ a\_1 \\log x) \\left(\\frac{a\_1}{x}\\right) dx \\\\
\&\= a\_1 y \\left(\\frac{dx}{x} \\right) \\\\
(dy / y) \&\= a\_1 (dx / x) \\\\
\\%\\Delta y \&\= a\_1\\%\\Delta x
\\end{aligned}
\\]
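A quick numeric check of this approximation (a sketch reusing `mod_log` from above): the percent change in `price` implied by a 1% increase in `carat` should be close to the coefficient \(a_1\).
```
a1 <- coef(mod_log)[[2]]
# difference in predicted log2(price) for a 1% increase in carat
delta_log2 <- predict(mod_log, newdata = tibble(carat = 1.01)) -
  predict(mod_log, newdata = tibble(carat = 1))
# convert the log2 difference to an approximate percent change
100 * delta_log2 * log(2) # approximately 1.67
a1 # 1.68
```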
### Exercise 24\.2\.3
Extract the diamonds that have very high and very low residuals. Is there anything unusual about these diamonds? Are they particularly bad or good, or do you think these are pricing errors?
The answer to this question is provided in section [24\.2\.2](https://r4ds.had.co.nz/model-building.html#a-more-complicated-model).
```
diamonds2 %>%
filter(abs(lresid2) > 1) %>%
add_predictions(mod_diamond2) %>%
mutate(pred = round(2^pred)) %>%
select(price, pred, carat:table, x:z) %>%
arrange(price)
#> # A tibble: 16 x 11
#> price pred carat cut color clarity depth table x y z
#> <int> <dbl> <dbl> <ord> <ord> <ord> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1013 264 0.25 Fair F SI2 54.4 64 4.3 4.23 2.32
#> 2 1186 284 0.25 Premium G SI2 59 60 5.33 5.28 3.12
#> 3 1186 284 0.25 Premium G SI2 58.8 60 5.33 5.28 3.12
#> 4 1262 2644 1.03 Fair E I1 78.2 54 5.72 5.59 4.42
#> 5 1415 639 0.35 Fair G VS2 65.9 54 5.57 5.53 3.66
#> 6 1415 639 0.35 Fair G VS2 65.9 54 5.57 5.53 3.66
#> # … with 10 more rows
```
I did not see anything too unusual. Do you?
### Exercise 24\.2\.4
Does the final model, `mod_diamonds2`, do a good job of predicting diamond prices?
Would you trust it to tell you how much to spend if you were buying a diamond?
Section [24\.2\.2](https://r4ds.had.co.nz/model-building.html#a-more-complicated-model) already provides part of the answer to this question.
Plotting the residuals of the model shows that there are some large outliers for small carat sizes.
The largest of these residuals are a little over two, which means that the actual price was roughly four times the predicted price; see [Exercise 24\.2\.2](model-building.html#exercise-24.2.2).
Most of the mass of the residuals is between \-0\.5 and 0\.5, which corresponds to predictions within a factor of about 1\.4 (\(2^{0.5} \approx 1.41\)) of the actual price.
There seems to be a slight downward bias in the residuals as carat size increases.
```
ggplot(diamonds2, aes(lcarat, lresid2)) +
geom_hex(bins = 50)
```
```
lresid2_summary <- summarise(diamonds2,
rmse = sqrt(mean(lresid2^2)),
mae = mean(abs(lresid2)),
p025 = quantile(lresid2, 0.025),
p975 = quantile(lresid2, 0.975)
)
lresid2_summary
#> # A tibble: 1 x 4
#> rmse mae p025 p975
#> <dbl> <dbl> <dbl> <dbl>
#> 1 0.192 0.149 -0.369 0.384
```
While in some cases the model can be wrong, overall it seems to perform well.
The root mean squared error is 0\.19, meaning that the typical multiplicative error is a factor of about \(2^{0.19} \approx 1.14\), or 14%.
Another summary statistic of the errors is the mean absolute error (MAE), the
mean of the absolute values of the errors.
The MAE is 0\.15, which corresponds to about 11%.
Finally, 95% of the residuals are between \-0\.37 and
0\.38, which corresponds to predictions between roughly 23% below and 31% above the actual price.
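As a quick check, these log2\-scale summaries can be converted back into price ratios (a sketch using the values computed above):
```
# 2^rmse, 2^mae, and the 2.5% and 97.5% quantiles as price ratios
2^c(rmse = 0.192, mae = 0.149, p025 = -0.369, p975 = 0.384)
# approximately 1.14, 1.11, 0.77, 1.31
```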
Whether you think that this is a good model depends on factors outside the statistical model itself;
in particular, it will depend on how the model is being used.
I have no idea how to price diamonds, so this model would be useful to me for understanding a reasonable price range for a diamond, so that I don’t get ripped off.
However, if I were buying and selling diamonds as a business, I would probably require a better model.
24\.3 What affects the number of daily flights?
-----------------------------------------------
This code is copied from the book and needed for the exercises.
```
library("nycflights13")
daily <- flights %>%
mutate(date = make_date(year, month, day)) %>%
group_by(date) %>%
summarise(n = n())
#> `summarise()` ungrouping output (override with `.groups` argument)
daily
#> # A tibble: 365 x 2
#> date n
#> <date> <int>
#> 1 2013-01-01 842
#> 2 2013-01-02 943
#> 3 2013-01-03 914
#> 4 2013-01-04 915
#> 5 2013-01-05 720
#> 6 2013-01-06 832
#> # … with 359 more rows
daily <- daily %>%
mutate(wday = wday(date, label = TRUE))
term <- function(date) {
cut(date,
breaks = ymd(20130101, 20130605, 20130825, 20140101),
labels = c("spring", "summer", "fall")
)
}
daily <- daily %>%
mutate(term = term(date))
mod <- lm(n ~ wday, data = daily)
daily <- daily %>%
add_residuals(mod)
mod1 <- lm(n ~ wday, data = daily)
mod2 <- lm(n ~ wday * term, data = daily)
```
### Exercise 24\.3\.1
Use your Google sleuthing skills to brainstorm why there were fewer than expected flights on Jan 20, May 26, and Sep 1\.
(Hint: they all have the same explanation.)
How would these days generalize to another year?
These are the Sundays before Monday holidays Martin Luther King Jr. Day, Memorial Day, and Labor Day.
For other years, use the dates of the holidays for those years—the third Monday of January for Martin Luther King Jr. Day, the last Monday of May for Memorial Day, and the first Monday in September for Labor Day.
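For another year, the dates of these floating holidays can be computed rather than hard\-coded. A minimal sketch with lubridate, using Martin Luther King Jr. Day in 2014 as an example:
```
# third Monday of January 2014
jan_days <- seq(ymd("2014-01-01"), ymd("2014-01-31"), by = "day")
jan_days[wday(jan_days, label = TRUE) == "Mon"][3]
```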
### Exercise 24\.3\.2
What do the three days with high positive residuals represent?
How would these days generalize to another year?
The top three days correspond to the Saturday after Thanksgiving (November 30th),
the Sunday after Thanksgiving (December 1st), and the Saturday after Christmas (December 28th).
```
top_n(daily, 3, resid)
#> # A tibble: 3 x 5
#> date n wday term resid
#> <date> <int> <ord> <fct> <dbl>
#> 1 2013-11-30 857 Sat fall 112.
#> 2 2013-12-01 987 Sun fall 95.5
#> 3 2013-12-28 814 Sat fall 69.4
```
We could generalize these to other years by using the dates of those holidays in those
years.
### Exercise 24\.3\.3
Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e., it should have `Thurs`, `Fri`, but `Sat-summer`, `Sat-spring`, `Sat-fall`
How does this model compare with the model with every combination of `wday` and `term`?
I’ll use the function `case_when()` to do this, though there are other ways which it could be solved.
```
daily <- daily %>%
mutate(
wday2 =
case_when(
wday == "Sat" & term == "summer" ~ "Sat-summer",
wday == "Sat" & term == "fall" ~ "Sat-fall",
wday == "Sat" & term == "spring" ~ "Sat-spring",
TRUE ~ as.character(wday)
)
)
```
```
mod3 <- lm(n ~ wday2, data = daily)
daily %>%
gather_residuals(sat_term = mod3, all_interact = mod2) %>%
ggplot(aes(date, resid, colour = model)) +
geom_line(alpha = 0.75)
```
I think the overlapping plot is hard to understand.
If we are interested in the differences, it is better to plot the differences directly.
In this code, I use `spread_residuals()` to add one *column* per model, rather than `gather_residuals()` which creates a new row for each model.
```
daily %>%
spread_residuals(sat_term = mod3, all_interact = mod2) %>%
mutate(resid_diff = sat_term - all_interact) %>%
ggplot(aes(date, resid_diff)) +
geom_line(alpha = 0.75)
```
The model with terms × Saturday has higher residuals in the fall and lower residuals in the spring than the model with all interactions.
Comparing the models, `mod3` has a lower \(R^2\) and a higher regression standard error, \(\hat{\sigma}\), which is unsurprising since it uses fewer variables.
More importantly for prediction purposes, this model has a higher AIC, which is an estimate of the out\-of\-sample error.
```
glance(mod3) %>% select(r.squared, sigma, AIC, df)
#> # A tibble: 1 x 4
#> r.squared sigma AIC df
#> <dbl> <dbl> <dbl> <int>
#> 1 0.736 47.4 3863. 9
```
```
glance(mod2) %>% select(r.squared, sigma, AIC, df)
#> # A tibble: 1 x 4
#> r.squared sigma AIC df
#> <dbl> <dbl> <dbl> <int>
#> 1 0.757 46.2 3856. 21
```
### Exercise 24\.3\.4
Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays.
What do the residuals of that model look like?
The question is unclear how to handle public holidays. There are several questions to consider.
First, what are the public holidays? I include all [federal holidays in the United States](https://en.wikipedia.org/wiki/Federal_holidays_in_the_United_States) in 2013\.
Other holidays to consider would be Easter and Good Friday which is US stock market holiday and widely celebrated religious holiday, Mothers Day, Fathers Day,
and Patriots’ Day, which is a holiday in several states, and other state holidays.
```
holidays_2013 <-
tribble(
~holiday, ~date,
"New Year's Day", 20130101,
"Martin Luther King Jr. Day", 20130121,
"Washington's Birthday", 20130218,
"Memorial Day", 20130527,
"Independence Day", 20130704,
"Labor Day", 20130902,
"Columbus Day", 20131028,
"Veteran's Day", 20131111,
"Thanksgiving", 20131128,
"Christmas", 20131225
) %>%
mutate(date = lubridate::ymd(date))
```
The model could include a single dummy variable which indicates a day was a public holiday.
Alternatively, I could include a dummy variable for each public holiday.
I would expect that Veteran’s Day and Washington’s Birthday have a different effect on travel than Thanksgiving, Christmas, and New Year’s Day.
Another question is whether and how I should handle the days before and after holidays.
Travel could be lighter on the day of the holiday,
but heavier the day before or after.
```
daily <- daily %>%
mutate(
wday3 =
case_when(
date %in% (holidays_2013$date - 1L) ~ "day before holiday",
date %in% (holidays_2013$date + 1L) ~ "day after holiday",
date %in% holidays_2013$date ~ "holiday",
.$wday == "Sat" & .$term == "summer" ~ "Sat-summer",
.$wday == "Sat" & .$term == "fall" ~ "Sat-fall",
.$wday == "Sat" & .$term == "spring" ~ "Sat-spring",
TRUE ~ as.character(.$wday)
)
)
mod4 <- lm(n ~ wday3, data = daily)
daily %>%
spread_residuals(resid_sat_terms = mod3, resid_holidays = mod4) %>%
mutate(resid_diff = resid_holidays - resid_sat_terms) %>%
ggplot(aes(date, resid_diff)) +
geom_line(alpha = 0.75)
```
### Exercise 24\.3\.5
What happens if you fit a day of week effect that varies by month (i.e., `n ~ wday * month`)?
Why is this not very helpful?
```
daily <- mutate(daily, month = factor(lubridate::month(date)))
mod6 <- lm(n ~ wday * month, data = daily)
print(summary(mod6))
#>
#> Call:
#> lm(formula = n ~ wday * month, data = daily)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -269.2 -5.0 1.5 8.8 113.2
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 867.400 7.598 114.16 < 2e-16 ***
#> wday.L -64.074 20.874 -3.07 0.00235 **
#> wday.Q -165.600 20.156 -8.22 7.8e-15 ***
#> wday.C -68.259 20.312 -3.36 0.00089 ***
#> wday^4 -92.081 20.499 -4.49 1.0e-05 ***
#> wday^5 9.793 19.733 0.50 0.62011
#> wday^6 -20.438 18.992 -1.08 0.28280
#> month2 23.707 10.995 2.16 0.03191 *
#> month3 67.886 10.746 6.32 1.0e-09 ***
#> month4 74.593 10.829 6.89 3.7e-11 ***
#> month5 56.279 10.746 5.24 3.2e-07 ***
#> month6 80.307 10.829 7.42 1.4e-12 ***
#> month7 77.114 10.746 7.18 6.4e-12 ***
#> month8 81.636 10.746 7.60 4.5e-13 ***
#> month9 51.371 10.829 4.74 3.3e-06 ***
#> month10 60.136 10.746 5.60 5.2e-08 ***
#> month11 46.914 10.829 4.33 2.1e-05 ***
#> month12 38.779 10.746 3.61 0.00036 ***
#> wday.L:month2 -3.723 29.627 -0.13 0.90009
#> wday.Q:month2 -3.819 29.125 -0.13 0.89578
#> wday.C:month2 0.490 29.233 0.02 0.98664
#> wday^4:month2 4.569 29.364 0.16 0.87646
#> wday^5:month2 -4.255 28.835 -0.15 0.88278
#> wday^6:month2 12.057 28.332 0.43 0.67076
#> wday.L:month3 -14.571 28.430 -0.51 0.60870
#> wday.Q:month3 15.439 28.207 0.55 0.58458
#> wday.C:month3 8.226 28.467 0.29 0.77282
#> wday^4:month3 22.720 28.702 0.79 0.42926
#> wday^5:month3 -15.330 28.504 -0.54 0.59113
#> wday^6:month3 11.373 28.268 0.40 0.68776
#> wday.L:month4 -16.668 29.359 -0.57 0.57067
#> wday.Q:month4 10.725 28.962 0.37 0.71142
#> wday.C:month4 -0.245 28.725 -0.01 0.99320
#> wday^4:month4 23.288 28.871 0.81 0.42056
#> wday^5:month4 -17.872 28.076 -0.64 0.52494
#> wday^6:month4 5.352 27.888 0.19 0.84794
#> wday.L:month5 3.666 29.359 0.12 0.90071
#> wday.Q:month5 -20.665 28.670 -0.72 0.47163
#> wday.C:month5 4.634 28.725 0.16 0.87196
#> wday^4:month5 5.999 28.511 0.21 0.83349
#> wday^5:month5 -16.912 28.076 -0.60 0.54742
#> wday^6:month5 12.764 27.194 0.47 0.63916
#> wday.L:month6 -4.526 28.651 -0.16 0.87459
#> wday.Q:month6 23.813 28.207 0.84 0.39927
#> wday.C:month6 13.758 28.725 0.48 0.63234
#> wday^4:month6 24.118 29.187 0.83 0.40932
#> wday^5:month6 -17.648 28.798 -0.61 0.54048
#> wday^6:month6 10.526 28.329 0.37 0.71051
#> wday.L:month7 -28.791 29.359 -0.98 0.32760
#> wday.Q:month7 49.585 28.670 1.73 0.08482 .
#> wday.C:month7 54.501 28.725 1.90 0.05881 .
#> wday^4:month7 50.847 28.511 1.78 0.07559 .
#> wday^5:month7 -33.698 28.076 -1.20 0.23106
#> wday^6:month7 -13.894 27.194 -0.51 0.60979
#> wday.L:month8 -20.448 28.871 -0.71 0.47938
#> wday.Q:month8 6.765 28.504 0.24 0.81258
#> wday.C:month8 6.001 28.467 0.21 0.83319
#> wday^4:month8 19.074 28.781 0.66 0.50806
#> wday^5:month8 -19.312 28.058 -0.69 0.49183
#> wday^6:month8 9.507 27.887 0.34 0.73341
#> wday.L:month9 -30.341 28.926 -1.05 0.29511
#> wday.Q:month9 -42.034 28.670 -1.47 0.14373
#> wday.C:month9 -20.719 28.725 -0.72 0.47134
#> wday^4:month9 -20.375 28.791 -0.71 0.47973
#> wday^5:month9 -18.238 28.523 -0.64 0.52308
#> wday^6:month9 11.726 28.270 0.41 0.67861
#> wday.L:month10 -61.051 29.520 -2.07 0.03954 *
#> wday.Q:month10 -26.235 28.504 -0.92 0.35815
#> wday.C:month10 -32.435 28.725 -1.13 0.25979
#> wday^4:month10 -12.212 28.990 -0.42 0.67389
#> wday^5:month10 -27.686 27.907 -0.99 0.32201
#> wday^6:month10 0.123 26.859 0.00 0.99634
#> wday.L:month11 -54.947 28.926 -1.90 0.05851 .
#> wday.Q:month11 16.012 28.670 0.56 0.57696
#> wday.C:month11 54.950 28.725 1.91 0.05677 .
#> wday^4:month11 47.286 28.791 1.64 0.10164
#> wday^5:month11 -44.740 28.523 -1.57 0.11787
#> wday^6:month11 -20.688 28.270 -0.73 0.46491
#> wday.L:month12 -9.506 28.871 -0.33 0.74221
#> wday.Q:month12 75.209 28.504 2.64 0.00879 **
#> wday.C:month12 -25.026 28.467 -0.88 0.38010
#> wday^4:month12 -23.780 28.781 -0.83 0.40938
#> wday^5:month12 20.447 28.058 0.73 0.46676
#> wday^6:month12 9.586 27.887 0.34 0.73128
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 42 on 281 degrees of freedom
#> Multiple R-squared: 0.836, Adjusted R-squared: 0.787
#> F-statistic: 17.2 on 83 and 281 DF, p-value: <2e-16
```
If we fit a day of week effect that varies by month, there will be `12 * 7 = 84` parameters in the model.
Since each month has only four to five weeks, each of these day of week \\(\\times\\) month effects is the average of only four or five observations.
These estimates have large standard errors and likely not generalize well beyond the sample data, since they are estimated from only a few observations.
### Exercise 24\.3\.6
What would you expect the model `n ~ wday + ns(date, 5)` to look like?
Knowing what you know about the data, why would you expect it to be not particularly effective?
Previous models fit in the chapter and exercises show that the effects of days of the week vary across different times of the year.
The model `wday + ns(date, 5)` does not interact the day of week effect (`wday`) with the time of year effects (`ns(date, 5)`).
I estimate a model which does not interact the day of week effects (`mod7`) with the spline to that which does (`mod8`).
I need to load the splines package to use the `ns()` function.
```
mod7 <- lm(n ~ wday + ns(date, 5), data = daily)
mod8 <- lm(n ~ wday * ns(date, 5), data = daily)
```
The residuals of the model that does not interact day of week with time of year (`mod7`) are larger than those of the model that does (`mod8`).
The model `mod7` underestimates weekends during the summer and overestimates weekends during the autumn.
```
daily %>%
gather_residuals(mod7, mod8) %>%
ggplot(aes(x = date, y = resid, color = model)) +
geom_line(alpha = 0.75)
```
### Exercise 24\.3\.7
We hypothesized that people leaving on Sundays are more likely to be business travelers who need to be somewhere on Monday.
Explore that hypothesis by seeing how it breaks down based on distance and time:
if it’s true, you’d expect to see more Sunday evening flights to places that are far away.
Comparing the average distances of flights by day of week, Sunday flights are the second longest.
Saturday flights are the longest on average.
Saturday may have the longest flights on average because there are fewer regularly scheduled short business/commuter flights on the weekends but that is speculation.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_boxplot() +
labs(x = "Day of Week", y = "Average Distance")
```
Hide outliers.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_boxplot(outlier.shape = NA) +
labs(x = "Day of Week", y = "Average Distance")
```
Try pointrange with mean and standard error of the mean (sd / sqrt(n)).
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
stat_summary() +
labs(x = "Day of Week", y = "Average Distance")
#> No summary function supplied, defaulting to `mean_se()`
```
Try pointrange with mean and standard error of the mean (sd / sqrt(n)).
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_violin() +
labs(x = "Day of Week", y = "Average Distance")
```
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
ggplot(aes(x = hour, color = wday, y = ..density..)) +
geom_freqpoly(binwidth = 1)
```
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
group_by(wday, hour) %>%
summarise(distance = mean(distance)) %>%
ggplot(aes(x = hour, color = wday, y = distance)) +
geom_line()
#> `summarise()` regrouping output by 'wday' (override with `.groups` argument)
```
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
group_by(wday, hour) %>%
summarise(distance = sum(distance)) %>%
group_by(wday) %>%
mutate(prop_distance = distance / sum(distance)) %>%
ungroup() %>%
ggplot(aes(x = hour, color = wday, y = prop_distance)) +
geom_line()
#> `summarise()` regrouping output by 'wday' (override with `.groups` argument)
```
### Exercise 24\.3\.8
It’s a little frustrating that Sunday and Saturday are on separate ends of the plot.
Write a small function to set the levels of the factor so that the week starts on Monday.
See the chapter [Factors](https://r4ds.had.co.nz/factors.html) for the function `fct_relevel()`.
Use `fct_relevel()` to put all levels in\-front of the first level (“Sunday”).
```
monday_first <- function(x) {
fct_relevel(x, levels(x)[-1])
}
```
Now Monday is the first day of the week.
```
daily <- daily %>%
mutate(wday = wday(date, label = TRUE))
ggplot(daily, aes(monday_first(wday), n)) +
geom_boxplot() +
labs(x = "Day of Week", y = "Number of flights")
```
24\.4 Learning more about models
--------------------------------
No exercises
24\.1 Introduction
------------------
The splines package is needed for the `ns()` function used in one of the
solutions.
```
library("tidyverse")
library("modelr")
library("lubridate")
library("broom")
library("nycflights13")
library("splines")
```
```
options(na.action = na.warn)
```
24\.2 Why are low quality diamonds more expensive?
--------------------------------------------------
This code appears in the section and is necessary for the exercises.
```
diamonds2 <- diamonds %>%
filter(carat <= 2.5) %>%
mutate(
lprice = log2(price),
lcarat = log2(carat)
)
mod_diamond2 <- lm(lprice ~ lcarat + color + cut + clarity, data = diamonds2)
diamonds2 <- add_residuals(diamonds2, mod_diamond2, "lresid2")
```
### Exercise 24\.2\.1
In the plot of `lcarat` vs. `lprice`, there are some bright vertical strips.
What do they represent?
The bright vertical strips correspond to carat values that are especially common: round or otherwise human\-friendly numbers (fractions) such as 0\.3, 0\.5, 0\.7, 1, and 1\.5\.
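We can check this by tabulating the most common carat values; a quick sketch:
```
diamonds2 %>%
  count(carat, sort = TRUE) %>%
  head()
```
The most frequent values are round and near\-round carat sizes, which show up as the bright strips.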
### Exercise 24\.2\.2
If `log(price) = a_0 + a_1 * log(carat)`, what does that say about the relationship between `price` and `carat`?
Following the examples in the chapter, I use a base\-2 logarithm.
```
mod_log <- lm(log2(price) ~ log2(carat), data = diamonds)
mod_log
#>
#> Call:
#> lm(formula = log2(price) ~ log2(carat), data = diamonds)
#>
#> Coefficients:
#> (Intercept) log2(carat)
#> 12.19 1.68
```
The estimated relationship between `carat` and `price` looks like this.
```
tibble(carat = seq(0.25, 5, by = 0.25)) %>%
add_predictions(mod_log) %>%
ggplot(aes(x = carat, y = 2^pred)) +
geom_line() +
labs(x = "carat", y = "price")
```
The plot shows that the estimated relationship between `carat` and `price` is not linear.
The exact relationship in this model is that if \\(x\\) increases \\(r\\) times, then \\(y\\) increases \\(r^{a\_1}\\) times.
For example, a two times increase in `carat` is associated with the following increase in `price`:
```
2^coef(mod_log)[2]
#> log2(carat)
#> 3.2
```
Let’s confirm this relationship by checking it for a few values of the `carat` variable.
Let’s increase `carat` from 1 to 2\.
```
2^(predict(mod_log, newdata = tibble(carat = 2)) -
predict(mod_log, newdata = tibble(carat = 1)))
#> 1
#> 3.2
```
Note that, since `predict()` returns predictions of `log2(price)` rather than `price`, the difference in predictions is exponentiated with base 2\.
Now let's increase `carat` from 2 to 4\.
```
2^(predict(mod_log, newdata = tibble(carat = 4)) -
predict(mod_log, newdata = tibble(carat = 2)))
#> 1
#> 3.2
```
Finally, let’s increase `carat` from 0\.5 to 1\.
```
2^(predict(mod_log, newdata = tibble(carat = 1)) -
predict(mod_log, newdata = tibble(carat = 0.5)))
#> 1
#> 3.2
```
All of these examples return the same value, \\(2 ^ {a\_1} \= 3\.2\\).
So why is this?
Let’s ignore the names of the variables in this case and consider the equation:
\\\[
\\log\_b y \= a\_0 \+ a\_1 \\log x
\\]
We want to understand how the difference in \\(y\\) is related to the difference in \\(x\\).
Now, consider this equation at two different values \\(x\_1\\) and \\(x\_0\\),
\\\[
\\log\_b y\_0 \= a\_0 \+ a\_1 \\log\_b x\_0 \\\\
\\log\_b y\_1 \= a\_0 \+ a\_1 \\log\_b x\_1
\\]
What is the value of the difference, \\(\\log\_b y\_1 \- \\log\_b y\_0\\)?
\\\[
\\begin{aligned}\[t]
\\log\_b(y\_1\) \- \\log\_b(y\_0\) \&\= (a\_0 \+ a\_1 \\log\_b x\_1\) \- (a\_0 \+ a\_1 \\log\_b x\_0\) ,\\\\
\&\= a\_1 (\\log\_b x\_1 \- \\log\_b x\_0\) , \\\\
\\log\_b \\left(\\frac{y\_1}{y\_0} \\right) \&\= \\log\_b \\left(\\frac{x\_1}{x\_0} \\right)^{a\_1} , \\\\
\\frac{y\_1}{y\_0} \&\= \\left( \\frac{x\_1}{x\_0} \\right)^{a\_1} .
\\end{aligned}
\\]
Let \\(s \= y\_1 / y\_0\\) and \\(r \= x\_1 / x\_0\\). Then,
\\\[
s \= r^{a\_1} \\text{.}
\\]
In other words, an \\(r\\) times increase in \\(x\\) is associated with an \\(r^{a\_1}\\) times increase in \\(y\\).
Note that this relationship does not depend on the base of the logarithm, \\(b\\).
There is another approximation that is commonly used when logarithms appear in regressions.
This can be shown using the approximation, valid when \\(x\\) is small (\\(x \\approx 0\\)),
\\\[
\\log (1 \+ x) \\approx x
\\]
This approximation is the first\-order Taylor expansion of \\(\\log(1 \+ x)\\) around \\(x \= 0\\).
Now consider the relationship between the percent change in \\(x\\) and the percent change in \\(y\\),
\\\[
\\begin{aligned}\[t]
\\log (y \+ \\Delta y) \- \\log y \&\= (\\alpha \+ \\beta \\log (x \+ \\Delta x)) \- (\\alpha \+ \\beta \\log x) \\\\
\\log \\left(\\frac{y \+ \\Delta y}{y} \\right) \&\= \\beta \\log\\left( \\frac{x \+ \\Delta x}{x} \\right) \\\\
\\log \\left(1 \+ \\frac{\\Delta y}{y} \\right) \&\= \\beta \\log\\left( 1 \+ \\frac{\\Delta x}{x} \\right) \\\\
\\frac{\\Delta y}{y} \&\\approx \\beta \\left(\\frac{\\Delta x}{x} \\right)
\\end{aligned}
\\]
Thus, a 1 percent change in \\(x\\) is associated with a \\(\\beta\\) percent change in \\(y\\).
This relationship can also be derived by taking the derivative of \\(\\log y\\) with respect to \\(x\\).
First, rewrite the equation in terms of \\(y\\),
\\\[
y \= \\exp(a\_0 \+ a\_1 \\log(x))
\\]
Then differentiate \\(y\\) with respect to \\(x\\),
\\\[
\\begin{aligned}\[t]
dy \&\= \\exp(a\_0 \+ a\_1 \\log x) \\left(\\frac{a\_1}{x}\\right) dx \\\\
\&\= a\_1 y \\left(\\frac{dx}{x} \\right) \\\\
(dy / y) \&\= a\_1 (dx / x) \\\\
\\%\\Delta y \&\= a\_1\\%\\Delta x
\\end{aligned}
\\]
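As a quick numeric check of this approximation with the fitted slope (a sketch, reusing `mod_log` from above):
```
a_1 <- coef(mod_log)[2]
# a 1% increase in carat should be associated with roughly a_1 percent
# increase in price; the exact multiplicative change is 1.01^a_1
100 * (1.01^a_1 - 1)
```
The result is approximately 1\.68, close to the coefficient \\(a\_1\\) itself.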
### Exercise 24\.2\.3
Extract the diamonds that have very high and very low residuals. Is there anything unusual about these diamonds? Are they particularly bad or good, or do you think these are pricing errors?
The answer to this question is provided in section [24\.2\.2](https://r4ds.had.co.nz/model-building.html#a-more-complicated-model).
```
diamonds2 %>%
filter(abs(lresid2) > 1) %>%
add_predictions(mod_diamond2) %>%
mutate(pred = round(2^pred)) %>%
select(price, pred, carat:table, x:z) %>%
arrange(price)
#> # A tibble: 16 x 11
#> price pred carat cut color clarity depth table x y z
#> <int> <dbl> <dbl> <ord> <ord> <ord> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1013 264 0.25 Fair F SI2 54.4 64 4.3 4.23 2.32
#> 2 1186 284 0.25 Premium G SI2 59 60 5.33 5.28 3.12
#> 3 1186 284 0.25 Premium G SI2 58.8 60 5.33 5.28 3.12
#> 4 1262 2644 1.03 Fair E I1 78.2 54 5.72 5.59 4.42
#> 5 1415 639 0.35 Fair G VS2 65.9 54 5.57 5.53 3.66
#> 6 1415 639 0.35 Fair G VS2 65.9 54 5.57 5.53 3.66
#> # … with 10 more rows
```
I did not see anything too unusual. Do you?
### Exercise 24\.2\.4
Does the final model, `mod_diamonds2`, do a good job of predicting diamond prices?
Would you trust it to tell you how much to spend if you were buying a diamond?
Section [24\.2\.2](https://r4ds.had.co.nz/model-building.html#a-more-complicated-model) already provides part of the answer to this question.
Plotting the residuals of the model shows that there are some large outliers for small carat sizes.
The largest of these residuals is a little over two, which means that the true price was about four times the predicted price; see [Exercise 24\.2\.2](model-building.html#exercise-24.2.2).
Most of the mass of the residuals is between \-0\.5 and 0\.5; since \\(2^{0\.5} \\approx 1\.4\\), that corresponds to prices within roughly \\(\\pm 40\\%\\) of the prediction.
There seems to be a slight downward bias in the residuals as carat size increases.
```
ggplot(diamonds2, aes(lcarat, lresid2)) +
geom_hex(bins = 50)
```
```
lresid2_summary <- summarise(diamonds2,
rmse = sqrt(mean(lresid2^2)),
mae = mean(abs(lresid2)),
p025 = quantile(lresid2, 0.025),
p975 = quantile(lresid2, 0.975)
)
lresid2_summary
#> # A tibble: 1 x 4
#> rmse mae p025 p975
#> <dbl> <dbl> <dbl> <dbl>
#> 1 0.192 0.149 -0.369 0.384
```
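To interpret these summaries on the price scale, exponentiate them with base 2 (a quick sketch using the rounded values above):
```
round(2^c(rmse = 0.192, mae = 0.149, p025 = -0.369, p975 = 0.384), 3)
#>  rmse   mae  p025  p975
#> 1.142 1.109 0.774 1.305
```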
While in some cases the model can be wrong, overall it seems to perform well.
The root mean squared error is 0\.19, meaning that the typical error is about 14%, since \\(2^{0\.19} \\approx 1\.14\\).
Another summary statistic of the errors is the mean absolute error (MAE), the mean of the absolute values of the errors.
The MAE is 0\.15, which corresponds to an error of about 11%.
Finally, 95% of the residuals are between \-0\.37 and 0\.38, which corresponds to prices between about 23% below and 31% above the prediction.
Whether you think that this is a good model depends on factors outside the statistical model itself.
It will depend on how the model is being used.
I have no idea how to price diamonds, so this model would be useful for getting a sense of a reasonable price range for a diamond so that I don’t get ripped off.
However, if I were buying and selling diamonds as a business, I would probably require a better model.
24\.3 What affects the number of daily flights?
-----------------------------------------------
This code is copied from the book and needed for the exercises.
```
library("nycflights13")
daily <- flights %>%
mutate(date = make_date(year, month, day)) %>%
group_by(date) %>%
summarise(n = n())
#> `summarise()` ungrouping output (override with `.groups` argument)
daily
#> # A tibble: 365 x 2
#> date n
#> <date> <int>
#> 1 2013-01-01 842
#> 2 2013-01-02 943
#> 3 2013-01-03 914
#> 4 2013-01-04 915
#> 5 2013-01-05 720
#> 6 2013-01-06 832
#> # … with 359 more rows
daily <- daily %>%
mutate(wday = wday(date, label = TRUE))
term <- function(date) {
cut(date,
breaks = ymd(20130101, 20130605, 20130825, 20140101),
labels = c("spring", "summer", "fall")
)
}
daily <- daily %>%
mutate(term = term(date))
mod <- lm(n ~ wday, data = daily)
daily <- daily %>%
add_residuals(mod)
mod1 <- lm(n ~ wday, data = daily)
mod2 <- lm(n ~ wday * term, data = daily)
```
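For reference, here is the daily number of flights over the year, reproducing the plot from the book:
```
ggplot(daily, aes(date, n)) +
  geom_line()
```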
### Exercise 24\.3\.1
Use your Google sleuthing skills to brainstorm why there were fewer than expected flights on Jan 20, May 26, and Sep 1\.
(Hint: they all have the same explanation.)
How would these days generalize to another year?
These are the Sundays before the Monday holidays: Martin Luther King Jr. Day, Memorial Day, and Labor Day.
For other years, use the dates of the holidays for those years—the third Monday of January for Martin Luther King Jr. Day, the last Monday of May for Memorial Day, and the first Monday in September for Labor Day.
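For example, a small helper function can compute these floating holidays for any year. This is a sketch; `nth_monday()` is a hypothetical helper, and it assumes lubridate is loaded (as in the introduction):
```
# date of the nth Monday of a given month (a hypothetical helper)
nth_monday <- function(year, month, n) {
  first <- make_date(year, month, 1)
  # days until the first Monday of the month (Monday = 1 when week_start = 1)
  offset <- (8 - wday(first, week_start = 1)) %% 7
  first + offset + 7 * (n - 1)
}
nth_monday(2014, 1, 3) # Martin Luther King Jr. Day in 2014: 2014-01-20
```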
### Exercise 24\.3\.2
What do the three days with high positive residuals represent?
How would these days generalize to another year?
The top three days correspond to the Saturday after Thanksgiving (November 30th),
the Sunday after Thanksgiving (December 1st), and the Saturday after Christmas (December 28th).
```
top_n(daily, 3, resid)
#> # A tibble: 3 x 5
#> date n wday term resid
#> <date> <int> <ord> <fct> <dbl>
#> 1 2013-11-30 857 Sat fall 112.
#> 2 2013-12-01 987 Sun fall 95.5
#> 3 2013-12-28 814 Sat fall 69.4
```
We could generalize these to other years using the dates of those holidays in those years.
### Exercise 24\.3\.3
Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e., it should have `Thurs`, `Fri`, but `Sat-summer`, `Sat-spring`, `Sat-fall`.
How does this model compare with the model with every combination of `wday` and `term`?
I’ll use the function `case_when()` to do this, though there are other ways in which it could be solved.
```
daily <- daily %>%
mutate(
wday2 =
case_when(
wday == "Sat" & term == "summer" ~ "Sat-summer",
wday == "Sat" & term == "fall" ~ "Sat-fall",
wday == "Sat" & term == "spring" ~ "Sat-spring",
TRUE ~ as.character(wday)
)
)
```
```
mod3 <- lm(n ~ wday2, data = daily)
daily %>%
gather_residuals(sat_term = mod3, all_interact = mod2) %>%
ggplot(aes(date, resid, colour = model)) +
geom_line(alpha = 0.75)
```
I think the overlapping plot is hard to understand.
If we are interested in the differences, it is better to plot the differences directly.
In this code, I use `spread_residuals()` to add one *column* per model, rather than `gather_residuals()` which creates a new row for each model.
```
daily %>%
spread_residuals(sat_term = mod3, all_interact = mod2) %>%
mutate(resid_diff = sat_term - all_interact) %>%
ggplot(aes(date, resid_diff)) +
geom_line(alpha = 0.75)
```
The model with terms × Saturday has higher residuals in the fall and lower residuals in the spring than the model with all interactions.
Comparing the models, `mod3` has a lower \\(R^2\\) and a higher regression standard error, \\(\\hat{\\sigma}\\), which is unsurprising since it uses fewer variables.
More importantly for prediction purposes, `mod3` also has a higher AIC, which is an estimate of the out\-of\-sample error.
```
glance(mod3) %>% select(r.squared, sigma, AIC, df)
#> # A tibble: 1 x 4
#> r.squared sigma AIC df
#> <dbl> <dbl> <dbl> <int>
#> 1 0.736 47.4 3863. 9
```
```
glance(mod2) %>% select(r.squared, sigma, AIC, df)
#> # A tibble: 1 x 4
#> r.squared sigma AIC df
#> <dbl> <dbl> <dbl> <int>
#> 1 0.757 46.2 3856. 21
```
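As an aside, the two model summaries can be combined into a single table with purrr; a sketch:
```
list(sat_term = mod3, all_interact = mod2) %>%
  map_df(glance, .id = "model") %>%
  select(model, r.squared, sigma, AIC, df)
```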
### Exercise 24\.3\.4
Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays.
What do the residuals of that model look like?
The question is unclear about how to handle public holidays, so there are several decisions to make.
First, what are the public holidays? I include all [federal holidays in the United States](https://en.wikipedia.org/wiki/Federal_holidays_in_the_United_States) in 2013\.
Other holidays to consider would be Easter and Good Friday (a US stock market holiday and a widely celebrated religious holiday), Mother’s Day, Father’s Day, Patriots’ Day (a holiday in several states), and other state holidays.
```
holidays_2013 <-
tribble(
~holiday, ~date,
"New Year's Day", 20130101,
"Martin Luther King Jr. Day", 20130121,
"Washington's Birthday", 20130218,
"Memorial Day", 20130527,
"Independence Day", 20130704,
"Labor Day", 20130902,
# in 2013, Columbus Day (the second Monday of October) fell on October 14
"Columbus Day", 20131014,
"Veteran's Day", 20131111,
"Thanksgiving", 20131128,
"Christmas", 20131225
) %>%
mutate(date = lubridate::ymd(date))
```
The model could include a single dummy variable which indicates whether a day was a public holiday.
Alternatively, I could include a dummy variable for each public holiday.
I would expect that Veteran’s Day and Washington’s Birthday have a different effect on travel than Thanksgiving, Christmas, and New Year’s Day.
Another question is whether and how I should handle the days before and after holidays.
Travel could be lighter on the day of the holiday,
but heavier the day before or after.
```
daily <- daily %>%
mutate(
wday3 =
case_when(
date %in% (holidays_2013$date - 1L) ~ "day before holiday",
date %in% (holidays_2013$date + 1L) ~ "day after holiday",
date %in% holidays_2013$date ~ "holiday",
wday == "Sat" & term == "summer" ~ "Sat-summer",
wday == "Sat" & term == "fall" ~ "Sat-fall",
wday == "Sat" & term == "spring" ~ "Sat-spring",
TRUE ~ as.character(wday)
)
)
mod4 <- lm(n ~ wday3, data = daily)
daily %>%
spread_residuals(resid_sat_terms = mod3, resid_holidays = mod4) %>%
mutate(resid_diff = resid_holidays - resid_sat_terms) %>%
ggplot(aes(date, resid_diff)) +
geom_line(alpha = 0.75)
```
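To answer the question directly, we can also plot the residuals of the holiday model itself, rather than only the difference; a sketch using modelr's `add_residuals()`:
```
daily %>%
  add_residuals(mod4, var = "resid4") %>%
  ggplot(aes(date, resid4)) +
  geom_line(alpha = 0.75)
```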
### Exercise 24\.3\.5
What happens if you fit a day of week effect that varies by month (i.e., `n ~ wday * month`)?
Why is this not very helpful?
```
daily <- mutate(daily, month = factor(lubridate::month(date)))
mod6 <- lm(n ~ wday * month, data = daily)
print(summary(mod6))
#>
#> Call:
#> lm(formula = n ~ wday * month, data = daily)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -269.2 -5.0 1.5 8.8 113.2
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 867.400 7.598 114.16 < 2e-16 ***
#> wday.L -64.074 20.874 -3.07 0.00235 **
#> wday.Q -165.600 20.156 -8.22 7.8e-15 ***
#> wday.C -68.259 20.312 -3.36 0.00089 ***
#> wday^4 -92.081 20.499 -4.49 1.0e-05 ***
#> wday^5 9.793 19.733 0.50 0.62011
#> wday^6 -20.438 18.992 -1.08 0.28280
#> month2 23.707 10.995 2.16 0.03191 *
#> month3 67.886 10.746 6.32 1.0e-09 ***
#> month4 74.593 10.829 6.89 3.7e-11 ***
#> month5 56.279 10.746 5.24 3.2e-07 ***
#> month6 80.307 10.829 7.42 1.4e-12 ***
#> month7 77.114 10.746 7.18 6.4e-12 ***
#> month8 81.636 10.746 7.60 4.5e-13 ***
#> month9 51.371 10.829 4.74 3.3e-06 ***
#> month10 60.136 10.746 5.60 5.2e-08 ***
#> month11 46.914 10.829 4.33 2.1e-05 ***
#> month12 38.779 10.746 3.61 0.00036 ***
#> wday.L:month2 -3.723 29.627 -0.13 0.90009
#> wday.Q:month2 -3.819 29.125 -0.13 0.89578
#> wday.C:month2 0.490 29.233 0.02 0.98664
#> wday^4:month2 4.569 29.364 0.16 0.87646
#> wday^5:month2 -4.255 28.835 -0.15 0.88278
#> wday^6:month2 12.057 28.332 0.43 0.67076
#> wday.L:month3 -14.571 28.430 -0.51 0.60870
#> wday.Q:month3 15.439 28.207 0.55 0.58458
#> wday.C:month3 8.226 28.467 0.29 0.77282
#> wday^4:month3 22.720 28.702 0.79 0.42926
#> wday^5:month3 -15.330 28.504 -0.54 0.59113
#> wday^6:month3 11.373 28.268 0.40 0.68776
#> wday.L:month4 -16.668 29.359 -0.57 0.57067
#> wday.Q:month4 10.725 28.962 0.37 0.71142
#> wday.C:month4 -0.245 28.725 -0.01 0.99320
#> wday^4:month4 23.288 28.871 0.81 0.42056
#> wday^5:month4 -17.872 28.076 -0.64 0.52494
#> wday^6:month4 5.352 27.888 0.19 0.84794
#> wday.L:month5 3.666 29.359 0.12 0.90071
#> wday.Q:month5 -20.665 28.670 -0.72 0.47163
#> wday.C:month5 4.634 28.725 0.16 0.87196
#> wday^4:month5 5.999 28.511 0.21 0.83349
#> wday^5:month5 -16.912 28.076 -0.60 0.54742
#> wday^6:month5 12.764 27.194 0.47 0.63916
#> wday.L:month6 -4.526 28.651 -0.16 0.87459
#> wday.Q:month6 23.813 28.207 0.84 0.39927
#> wday.C:month6 13.758 28.725 0.48 0.63234
#> wday^4:month6 24.118 29.187 0.83 0.40932
#> wday^5:month6 -17.648 28.798 -0.61 0.54048
#> wday^6:month6 10.526 28.329 0.37 0.71051
#> wday.L:month7 -28.791 29.359 -0.98 0.32760
#> wday.Q:month7 49.585 28.670 1.73 0.08482 .
#> wday.C:month7 54.501 28.725 1.90 0.05881 .
#> wday^4:month7 50.847 28.511 1.78 0.07559 .
#> wday^5:month7 -33.698 28.076 -1.20 0.23106
#> wday^6:month7 -13.894 27.194 -0.51 0.60979
#> wday.L:month8 -20.448 28.871 -0.71 0.47938
#> wday.Q:month8 6.765 28.504 0.24 0.81258
#> wday.C:month8 6.001 28.467 0.21 0.83319
#> wday^4:month8 19.074 28.781 0.66 0.50806
#> wday^5:month8 -19.312 28.058 -0.69 0.49183
#> wday^6:month8 9.507 27.887 0.34 0.73341
#> wday.L:month9 -30.341 28.926 -1.05 0.29511
#> wday.Q:month9 -42.034 28.670 -1.47 0.14373
#> wday.C:month9 -20.719 28.725 -0.72 0.47134
#> wday^4:month9 -20.375 28.791 -0.71 0.47973
#> wday^5:month9 -18.238 28.523 -0.64 0.52308
#> wday^6:month9 11.726 28.270 0.41 0.67861
#> wday.L:month10 -61.051 29.520 -2.07 0.03954 *
#> wday.Q:month10 -26.235 28.504 -0.92 0.35815
#> wday.C:month10 -32.435 28.725 -1.13 0.25979
#> wday^4:month10 -12.212 28.990 -0.42 0.67389
#> wday^5:month10 -27.686 27.907 -0.99 0.32201
#> wday^6:month10 0.123 26.859 0.00 0.99634
#> wday.L:month11 -54.947 28.926 -1.90 0.05851 .
#> wday.Q:month11 16.012 28.670 0.56 0.57696
#> wday.C:month11 54.950 28.725 1.91 0.05677 .
#> wday^4:month11 47.286 28.791 1.64 0.10164
#> wday^5:month11 -44.740 28.523 -1.57 0.11787
#> wday^6:month11 -20.688 28.270 -0.73 0.46491
#> wday.L:month12 -9.506 28.871 -0.33 0.74221
#> wday.Q:month12 75.209 28.504 2.64 0.00879 **
#> wday.C:month12 -25.026 28.467 -0.88 0.38010
#> wday^4:month12 -23.780 28.781 -0.83 0.40938
#> wday^5:month12 20.447 28.058 0.73 0.46676
#> wday^6:month12 9.586 27.887 0.34 0.73128
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 42 on 281 degrees of freedom
#> Multiple R-squared: 0.836, Adjusted R-squared: 0.787
#> F-statistic: 17.2 on 83 and 281 DF, p-value: <2e-16
```
If we fit a day of week effect that varies by month, there will be `12 * 7 = 84` parameters in the model.
Since each month has only four to five weeks, each of these day of week \\(\\times\\) month effects is the average of only four or five observations.
These estimates have large standard errors and will likely not generalize well beyond the sample data, since each is estimated from only a few observations.
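We can confirm how few observations each day\-of\-week \\(\\times\\) month cell contains; a quick sketch:
```
daily %>%
  count(wday, month, name = "days") %>%
  pull(days) %>%
  range()
#> [1] 4 5
```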
### Exercise 24\.3\.6
What would you expect the model `n ~ wday + ns(date, 5)` to look like?
Knowing what you know about the data, why would you expect it to be not particularly effective?
Previous models fit in the chapter and exercises show that the effects of days of the week vary across different times of the year.
The model `wday + ns(date, 5)` does not interact the day of week effect (`wday`) with the time of year effects (`ns(date, 5)`).
I compare a model that does not interact the day of week effects with the spline (`mod7`) to one that does (`mod8`).
I need to load the splines package to use the `ns()` function.
```
mod7 <- lm(n ~ wday + ns(date, 5), data = daily)
mod8 <- lm(n ~ wday * ns(date, 5), data = daily)
```
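To see what each model looks like, plot its predictions; a sketch using modelr's `gather_predictions()`:
```
daily %>%
  gather_predictions(mod7, mod8) %>%
  ggplot(aes(date, pred, color = wday)) +
  geom_line() +
  facet_wrap(~model, ncol = 1)
```
In `mod7`, each day of week shifts the same spline up or down, so the weekday curves are parallel; in `mod8`, the interaction lets each weekday follow its own curve.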
The residuals of the model that does not interact day of week with time of year (`mod7`) are larger than those of the model that does (`mod8`).
The model `mod7` underestimates the number of flights on weekends during the summer and overestimates it during the autumn.
```
daily %>%
gather_residuals(mod7, mod8) %>%
ggplot(aes(x = date, y = resid, color = model)) +
geom_line(alpha = 0.75)
```
### Exercise 24\.3\.7
We hypothesized that people leaving on Sundays are more likely to be business travelers who need to be somewhere on Monday.
Explore that hypothesis by seeing how it breaks down based on distance and time:
if it’s true, you’d expect to see more Sunday evening flights to places that are far away.
Comparing the average distance of flights by day of week, Saturday flights are the longest on average, with Sunday flights second.
Saturday may have the longest flights on average because there are fewer regularly scheduled short business/commuter flights on the weekends, but that is speculation.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_boxplot() +
labs(x = "Day of Week", y = "Average Distance")
```
Hiding the outliers makes it easier to compare the distributions.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_boxplot(outlier.shape = NA) +
labs(x = "Day of Week", y = "Average Distance")
```
Try pointrange with mean and standard error of the mean (sd / sqrt(n)).
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
stat_summary() +
labs(x = "Day of Week", y = "Average Distance")
#> No summary function supplied, defaulting to `mean_se()`
```
Try a violin plot, which shows the full distribution of distances for each day.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_violin() +
labs(x = "Day of Week", y = "Average Distance")
```
To bring in the time dimension, plot the distribution of scheduled departure hours by day of week.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
ggplot(aes(x = hour, color = wday, y = ..density..)) +
geom_freqpoly(binwidth = 1)
```
Next, plot the mean distance by hour of departure for each day of the week.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
group_by(wday, hour) %>%
summarise(distance = mean(distance)) %>%
ggplot(aes(x = hour, color = wday, y = distance)) +
geom_line()
#> `summarise()` regrouping output by 'wday' (override with `.groups` argument)
```
Finally, plot the proportion of each day’s total flight distance by hour of departure.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
group_by(wday, hour) %>%
summarise(distance = sum(distance)) %>%
group_by(wday) %>%
mutate(prop_distance = distance / sum(distance)) %>%
ungroup() %>%
ggplot(aes(x = hour, color = wday, y = prop_distance)) +
geom_line()
#> `summarise()` regrouping output by 'wday' (override with `.groups` argument)
```
### Exercise 24\.3\.8
It’s a little frustrating that Sunday and Saturday are on separate ends of the plot.
Write a small function to set the levels of the factor so that the week starts on Monday.
See the chapter [Factors](https://r4ds.had.co.nz/factors.html) for the function `fct_relevel()`.
Use `fct_relevel()` to move all of the other levels in front of the first level (“Sun”), which pushes Sunday to the end.
```
monday_first <- function(x) {
fct_relevel(x, levels(x)[-1])
}
```
Now Monday is the first day of the week.
```
daily <- daily %>%
mutate(wday = wday(date, label = TRUE))
ggplot(daily, aes(monday_first(wday), n)) +
geom_boxplot() +
labs(x = "Day of Week", y = "Number of flights")
```
### Exercise 24\.3\.1
Use your Google sleuthing skills to brainstorm why there were fewer than expected flights on Jan 20, May 26, and Sep 1\.
(Hint: they all have the same explanation.)
How would these days generalize to another year?
These are the Sundays before Monday holidays Martin Luther King Jr. Day, Memorial Day, and Labor Day.
For other years, use the dates of the holidays for those years—the third Monday of January for Martin Luther King Jr. Day, the last Monday of May for Memorial Day, and the first Monday in September for Labor Day.
### Exercise 24\.3\.2
What do the three days with high positive residuals represent?
How would these days generalize to another year?
The top three days correspond to the Saturday after Thanksgiving (November 30th),
the Sunday after Thanksgiving (December 1st), and the Saturday after Christmas (December 28th).
```
top_n(daily, 3, resid)
#> # A tibble: 3 x 5
#> date n wday term resid
#> <date> <int> <ord> <fct> <dbl>
#> 1 2013-11-30 857 Sat fall 112.
#> 2 2013-12-01 987 Sun fall 95.5
#> 3 2013-12-28 814 Sat fall 69.4
```
We could generalize these to other years using the dates of those holidays on those
years.
### Exercise 24\.3\.3
Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e., it should have `Thurs`, `Fri`, but `Sat-summer`, `Sat-spring`, `Sat-fall`
How does this model compare with the model with every combination of `wday` and `term`?
I’ll use the function `case_when()` to do this, though there are other ways which it could be solved.
```
daily <- daily %>%
mutate(
wday2 =
case_when(
wday == "Sat" & term == "summer" ~ "Sat-summer",
wday == "Sat" & term == "fall" ~ "Sat-fall",
wday == "Sat" & term == "spring" ~ "Sat-spring",
TRUE ~ as.character(wday)
)
)
```
```
mod3 <- lm(n ~ wday2, data = daily)
daily %>%
gather_residuals(sat_term = mod3, all_interact = mod2) %>%
ggplot(aes(date, resid, colour = model)) +
geom_line(alpha = 0.75)
```
I think the overlapping plot is hard to understand.
If we are interested in the differences, it is better to plot the differences directly.
In this code, I use `spread_residuals()` to add one *column* per model, rather than `gather_residuals()` which creates a new row for each model.
```
daily %>%
spread_residuals(sat_term = mod3, all_interact = mod2) %>%
mutate(resid_diff = sat_term - all_interact) %>%
ggplot(aes(date, resid_diff)) +
geom_line(alpha = 0.75)
```
The model with terms × Saturday has higher residuals in the fall and lower residuals in the spring than the model with all interactions.
Comparing models, `mod3` has a lower \\(R^2\\) and regression standard error, \\(\\hat{\\sigma}\\), despite using fewer variables.
More importantly for prediction purposes, this model has a higher AIC, which is an estimate of the out of sample error.
```
glance(mod3) %>% select(r.squared, sigma, AIC, df)
#> # A tibble: 1 x 4
#> r.squared sigma AIC df
#> <dbl> <dbl> <dbl> <int>
#> 1 0.736 47.4 3863. 9
```
```
glance(mod2) %>% select(r.squared, sigma, AIC, df)
#> # A tibble: 1 x 4
#> r.squared sigma AIC df
#> <dbl> <dbl> <dbl> <int>
#> 1 0.757 46.2 3856. 21
```
### Exercise 24\.3\.4
Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays.
What do the residuals of that model look like?
The question is unclear how to handle public holidays. There are several questions to consider.
First, what are the public holidays? I include all [federal holidays in the United States](https://en.wikipedia.org/wiki/Federal_holidays_in_the_United_States) in 2013\.
Other holidays to consider would be Easter and Good Friday which is US stock market holiday and widely celebrated religious holiday, Mothers Day, Fathers Day,
and Patriots’ Day, which is a holiday in several states, and other state holidays.
```
holidays_2013 <-
tribble(
~holiday, ~date,
"New Year's Day", 20130101,
"Martin Luther King Jr. Day", 20130121,
"Washington's Birthday", 20130218,
"Memorial Day", 20130527,
"Independence Day", 20130704,
"Labor Day", 20130902,
"Columbus Day", 20131028,
"Veteran's Day", 20131111,
"Thanksgiving", 20131128,
"Christmas", 20131225
) %>%
mutate(date = lubridate::ymd(date))
```
The model could include a single dummy variable which indicates a day was a public holiday.
Alternatively, I could include a dummy variable for each public holiday.
I would expect that Veteran’s Day and Washington’s Birthday have a different effect on travel than Thanksgiving, Christmas, and New Year’s Day.
Another question is whether and how I should handle the days before and after holidays.
Travel could be lighter on the day of the holiday,
but heavier the day before or after.
```
daily <- daily %>%
mutate(
wday3 =
case_when(
date %in% (holidays_2013$date - 1L) ~ "day before holiday",
date %in% (holidays_2013$date + 1L) ~ "day after holiday",
date %in% holidays_2013$date ~ "holiday",
.$wday == "Sat" & .$term == "summer" ~ "Sat-summer",
.$wday == "Sat" & .$term == "fall" ~ "Sat-fall",
.$wday == "Sat" & .$term == "spring" ~ "Sat-spring",
TRUE ~ as.character(.$wday)
)
)
mod4 <- lm(n ~ wday3, data = daily)
daily %>%
spread_residuals(resid_sat_terms = mod3, resid_holidays = mod4) %>%
mutate(resid_diff = resid_holidays - resid_sat_terms) %>%
ggplot(aes(date, resid_diff)) +
geom_line(alpha = 0.75)
```
### Exercise 24\.3\.5
What happens if you fit a day of week effect that varies by month (i.e., `n ~ wday * month`)?
Why is this not very helpful?
```
daily <- mutate(daily, month = factor(lubridate::month(date)))
mod6 <- lm(n ~ wday * month, data = daily)
print(summary(mod6))
#>
#> Call:
#> lm(formula = n ~ wday * month, data = daily)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -269.2 -5.0 1.5 8.8 113.2
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 867.400 7.598 114.16 < 2e-16 ***
#> wday.L -64.074 20.874 -3.07 0.00235 **
#> wday.Q -165.600 20.156 -8.22 7.8e-15 ***
#> wday.C -68.259 20.312 -3.36 0.00089 ***
#> wday^4 -92.081 20.499 -4.49 1.0e-05 ***
#> wday^5 9.793 19.733 0.50 0.62011
#> wday^6 -20.438 18.992 -1.08 0.28280
#> month2 23.707 10.995 2.16 0.03191 *
#> month3 67.886 10.746 6.32 1.0e-09 ***
#> month4 74.593 10.829 6.89 3.7e-11 ***
#> month5 56.279 10.746 5.24 3.2e-07 ***
#> month6 80.307 10.829 7.42 1.4e-12 ***
#> month7 77.114 10.746 7.18 6.4e-12 ***
#> month8 81.636 10.746 7.60 4.5e-13 ***
#> month9 51.371 10.829 4.74 3.3e-06 ***
#> month10 60.136 10.746 5.60 5.2e-08 ***
#> month11 46.914 10.829 4.33 2.1e-05 ***
#> month12 38.779 10.746 3.61 0.00036 ***
#> wday.L:month2 -3.723 29.627 -0.13 0.90009
#> wday.Q:month2 -3.819 29.125 -0.13 0.89578
#> wday.C:month2 0.490 29.233 0.02 0.98664
#> wday^4:month2 4.569 29.364 0.16 0.87646
#> wday^5:month2 -4.255 28.835 -0.15 0.88278
#> wday^6:month2 12.057 28.332 0.43 0.67076
#> wday.L:month3 -14.571 28.430 -0.51 0.60870
#> wday.Q:month3 15.439 28.207 0.55 0.58458
#> wday.C:month3 8.226 28.467 0.29 0.77282
#> wday^4:month3 22.720 28.702 0.79 0.42926
#> wday^5:month3 -15.330 28.504 -0.54 0.59113
#> wday^6:month3 11.373 28.268 0.40 0.68776
#> wday.L:month4 -16.668 29.359 -0.57 0.57067
#> wday.Q:month4 10.725 28.962 0.37 0.71142
#> wday.C:month4 -0.245 28.725 -0.01 0.99320
#> wday^4:month4 23.288 28.871 0.81 0.42056
#> wday^5:month4 -17.872 28.076 -0.64 0.52494
#> wday^6:month4 5.352 27.888 0.19 0.84794
#> wday.L:month5 3.666 29.359 0.12 0.90071
#> wday.Q:month5 -20.665 28.670 -0.72 0.47163
#> wday.C:month5 4.634 28.725 0.16 0.87196
#> wday^4:month5 5.999 28.511 0.21 0.83349
#> wday^5:month5 -16.912 28.076 -0.60 0.54742
#> wday^6:month5 12.764 27.194 0.47 0.63916
#> wday.L:month6 -4.526 28.651 -0.16 0.87459
#> wday.Q:month6 23.813 28.207 0.84 0.39927
#> wday.C:month6 13.758 28.725 0.48 0.63234
#> wday^4:month6 24.118 29.187 0.83 0.40932
#> wday^5:month6 -17.648 28.798 -0.61 0.54048
#> wday^6:month6 10.526 28.329 0.37 0.71051
#> wday.L:month7 -28.791 29.359 -0.98 0.32760
#> wday.Q:month7 49.585 28.670 1.73 0.08482 .
#> wday.C:month7 54.501 28.725 1.90 0.05881 .
#> wday^4:month7 50.847 28.511 1.78 0.07559 .
#> wday^5:month7 -33.698 28.076 -1.20 0.23106
#> wday^6:month7 -13.894 27.194 -0.51 0.60979
#> wday.L:month8 -20.448 28.871 -0.71 0.47938
#> wday.Q:month8 6.765 28.504 0.24 0.81258
#> wday.C:month8 6.001 28.467 0.21 0.83319
#> wday^4:month8 19.074 28.781 0.66 0.50806
#> wday^5:month8 -19.312 28.058 -0.69 0.49183
#> wday^6:month8 9.507 27.887 0.34 0.73341
#> wday.L:month9 -30.341 28.926 -1.05 0.29511
#> wday.Q:month9 -42.034 28.670 -1.47 0.14373
#> wday.C:month9 -20.719 28.725 -0.72 0.47134
#> wday^4:month9 -20.375 28.791 -0.71 0.47973
#> wday^5:month9 -18.238 28.523 -0.64 0.52308
#> wday^6:month9 11.726 28.270 0.41 0.67861
#> wday.L:month10 -61.051 29.520 -2.07 0.03954 *
#> wday.Q:month10 -26.235 28.504 -0.92 0.35815
#> wday.C:month10 -32.435 28.725 -1.13 0.25979
#> wday^4:month10 -12.212 28.990 -0.42 0.67389
#> wday^5:month10 -27.686 27.907 -0.99 0.32201
#> wday^6:month10 0.123 26.859 0.00 0.99634
#> wday.L:month11 -54.947 28.926 -1.90 0.05851 .
#> wday.Q:month11 16.012 28.670 0.56 0.57696
#> wday.C:month11 54.950 28.725 1.91 0.05677 .
#> wday^4:month11 47.286 28.791 1.64 0.10164
#> wday^5:month11 -44.740 28.523 -1.57 0.11787
#> wday^6:month11 -20.688 28.270 -0.73 0.46491
#> wday.L:month12 -9.506 28.871 -0.33 0.74221
#> wday.Q:month12 75.209 28.504 2.64 0.00879 **
#> wday.C:month12 -25.026 28.467 -0.88 0.38010
#> wday^4:month12 -23.780 28.781 -0.83 0.40938
#> wday^5:month12 20.447 28.058 0.73 0.46676
#> wday^6:month12 9.586 27.887 0.34 0.73128
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 42 on 281 degrees of freedom
#> Multiple R-squared: 0.836, Adjusted R-squared: 0.787
#> F-statistic: 17.2 on 83 and 281 DF, p-value: <2e-16
```
If we fit a day of week effect that varies by month, there will be `12 * 7 = 84` parameters in the model.
Since each month has only four to five weeks, each of these day of week \\(\\times\\) month effects is the average of only four or five observations.
These estimates have large standard errors and likely not generalize well beyond the sample data, since they are estimated from only a few observations.
### Exercise 24\.3\.6
What would you expect the model `n ~ wday + ns(date, 5)` to look like?
Knowing what you know about the data, why would you expect it to be not particularly effective?
Previous models fit in the chapter and exercises show that the effects of days of the week vary across different times of the year.
The model `wday + ns(date, 5)` does not interact the day of week effect (`wday`) with the time of year effects (`ns(date, 5)`).
I estimate a model which does not interact the day of week effects (`mod7`) with the spline to that which does (`mod8`).
I need to load the splines package to use the `ns()` function.
```
mod7 <- lm(n ~ wday + ns(date, 5), data = daily)
mod8 <- lm(n ~ wday * ns(date, 5), data = daily)
```
The residuals of the model that does not interact day of week with time of year (`mod7`) are larger than those of the model that does (`mod8`).
The model `mod7` underestimates weekends during the summer and overestimates weekends during the autumn.
```
daily %>%
gather_residuals(mod7, mod8) %>%
ggplot(aes(x = date, y = resid, color = model)) +
geom_line(alpha = 0.75)
```
### Exercise 24\.3\.7
We hypothesized that people leaving on Sundays are more likely to be business travelers who need to be somewhere on Monday.
Explore that hypothesis by seeing how it breaks down based on distance and time:
if it’s true, you’d expect to see more Sunday evening flights to places that are far away.
Comparing the average distances of flights by day of week, Sunday flights are the second longest.
Saturday flights are the longest on average.
Saturday may have the longest flights on average because there are fewer regularly scheduled short business/commuter flights on the weekends but that is speculation.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_boxplot() +
labs(x = "Day of Week", y = "Average Distance")
```
Hide outliers.
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_boxplot(outlier.shape = NA) +
labs(x = "Day of Week", y = "Average Distance")
```
Try pointrange with mean and standard error of the mean (sd / sqrt(n)).
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
stat_summary() +
labs(x = "Day of Week", y = "Average Distance")
#> No summary function supplied, defaulting to `mean_se()`
```
Try pointrange with mean and standard error of the mean (sd / sqrt(n)).
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
ggplot(aes(y = distance, x = wday)) +
geom_violin() +
labs(x = "Day of Week", y = "Average Distance")
```
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
ggplot(aes(x = hour, color = wday, y = ..density..)) +
geom_freqpoly(binwidth = 1)
```
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
group_by(wday, hour) %>%
summarise(distance = mean(distance)) %>%
ggplot(aes(x = hour, color = wday, y = distance)) +
geom_line()
#> `summarise()` regrouping output by 'wday' (override with `.groups` argument)
```
```
flights %>%
mutate(
date = make_date(year, month, day),
wday = wday(date, label = TRUE)
) %>%
filter(
distance < 3000,
hour >= 5, hour <= 21
) %>%
group_by(wday, hour) %>%
summarise(distance = sum(distance)) %>%
group_by(wday) %>%
mutate(prop_distance = distance / sum(distance)) %>%
ungroup() %>%
ggplot(aes(x = hour, color = wday, y = prop_distance)) +
geom_line()
#> `summarise()` regrouping output by 'wday' (override with `.groups` argument)
```
### Exercise 24\.3\.8
It’s a little frustrating that Sunday and Saturday are on separate ends of the plot.
Write a small function to set the levels of the factor so that the week starts on Monday.
See the chapter [Factors](https://r4ds.had.co.nz/factors.html) for the function `fct_relevel()`.
Use `fct_relevel()` to move all the levels except the first (“Sunday”) to the front, which leaves Sunday as the last level.
```
monday_first <- function(x) {
fct_relevel(x, levels(x)[-1])
}
```
Now Monday is the first day of the week.
```
daily <- daily %>%
mutate(wday = wday(date, label = TRUE))
ggplot(daily, aes(monday_first(wday), n)) +
geom_boxplot() +
labs(x = "Day of Week", y = "Number of flights")
```
24\.4 Learning more about models
--------------------------------
No exercises
| Data Science |
topepo.github.io | https://topepo.github.io/caret/data-sets.html |
23 Data Sets
============
There are a few data sets included in [`caret`](http://cran.r-project.org/web/packages/caret/index.html). The first four are computational chemistry problems where the object is to relate the molecular structure of compounds (via molecular descriptors) to some property of interest ([Clark and Pickett (2000\)](http://www.sciencedirect.com/science/article/pii/S1359644699014518)). Similar data sets can be found in the [`QSARdata`](http://cran.r-project.org/web/packages/QSARdata/index.html) R package.
Other R packages with data are:
* [`mlbench`](http://cran.r-project.org/web/packages/mlbench/index.html),
* [`SMCRM`](http://cran.r-project.org/web/packages/SMCRM/index.html) and
* [`AppliedPredictiveModeling`](http://cran.r-project.org/web/packages/AppliedPredictiveModeling/index.html).
23\.1 Blood\-Brain Barrier Data
-------------------------------
[Mente and Lombardo (2005\)](http://www.springerlink.com/content/72j377175n536768/?p=f546488cc8fa4ec7a3d491%20eb20adb3c&pi=0) developed models to predict the log of the ratio of the concentration of a compound in the brain to its concentration in the blood. For each compound, they computed three sets of molecular descriptors: MOE 2D, rule\-of\-five and Charge Polar Surface Area (CPSA). In all, 134 descriptors were calculated. Included in this package are 208 non\-proprietary literature compounds. The vector `logBBB` contains the log concentration ratio and the data frame `bbbDescr` contains the descriptor values.
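A minimal loading sketch; the load name `BloodBrain` is an assumption, since the text above names only the objects:

```
library(caret)
data(BloodBrain) # assumed load name; exposes bbbDescr and logBBB
dim(bbbDescr)    # expected: 208 compounds x 134 descriptors
length(logBBB)   # expected: 208 log concentration ratios
```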
23\.2 COX\-2 Activity Data
--------------------------
From [Sutherland, O’Brien, and Weaver (2003\)](http://pubs.acs.org/cgi-bin/abstract.cgi/jmcmar/2004/47/i22/abs/jm0497141.html): A set of 467 cyclooxygenase\-2 (COX\-2\) inhibitors has been assembled from the published work of a single research group, with in vitro activities against human recombinant enzyme expressed as IC50 values ranging from 1 nM to \>100 uM (53 compounds have indeterminate IC50 values).
A set of 255 descriptors (MOE2D and QikProp) were generated. To classify the data, we used a cutoff of \\(2^{2.5}\\) to determine activity.
Using `data(cox2)` exposes three R objects: `cox2Descr` is a data frame with the descriptor data, `cox2IC50` is a numeric vector of IC50 assay values and `cox2Class` is a factor vector with the activity results.
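For example, to load the objects and inspect them (a sketch; output omitted):

```
library(caret)
data(cox2)
table(cox2Class)  # class balance of the activity results
summary(cox2IC50) # distribution of the IC50 assay values
```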
23\.3 DHFR Inhibition
---------------------
[Sutherland and Weaver (2004\)](http://www.springerlink.com/content/q5m5xp1q356p2071/) discuss QSAR models for dihydrofolate reductase (DHFR) inhibition. This data set contains values for 325 compounds. For each compound, 228 molecular descriptors have been calculated. Additionally, each sample is designated as “active” or “inactive”.
The data frame `dhfr` contains a column called `Y` with the outcome classification. The remainder of the columns are molecular descriptor values.
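A quick look at the outcome column (a sketch; loading via `data(dhfr)` is an assumption):

```
library(caret)
data(dhfr)    # assumed load name
table(dhfr$Y) # counts of "active" vs. "inactive" among the 325 compounds
```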
23\.4 Tecator NIR Data
----------------------
These data can be found in the datasets section of StatLib. The data consist of 215 near infrared absorbance spectra (100 absorbance channels each) used to predict the moisture, fat and protein values of chopped meat.
From [StatLib](http://lib.stat.cmu.edu/datasets/tecator):
> These data are recorded on a Tecator Infratec Food and Feed Analyzer
> working in the wavelength range 850 \- 1050 nm by the Near Infrared
> Transmission (NIT) principle. Each sample contains finely chopped pure
> meat with different moisture, fat and protein contents. If results
> from these data are used in a publication we want you to mention the
> instrument and company name (Tecator) in the publication. In addition,
> please send a preprint of your article to: Karin Thente, Tecator AB,
> Box 70, S\-263 21 Hoganas, Sweden.
One reference for these data is Borggaard and Thodberg (1992\).
Using `data(tecator)` loads a 215 x 100 matrix of absorbance spectra and a 215 x 3 matrix of outcomes.
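A loading sketch; the object names `absorp` and `endpoints` are assumptions, since the text above gives only the dimensions:

```
library(caret)
data(tecator)
dim(absorp)    # assumed name; expected 215 x 100 absorbance spectra
dim(endpoints) # assumed name; expected 215 x 3 outcome matrix
```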
23\.5 Fatty Acid Composition Data
---------------------------------
[Brodnjak\-Voncina et al. (2005\)](http://dx.doi.org/10.1016/j.chemolab.2004.04.011) describe a set of data where seven fatty acid compositions were used to classify commercial oils as pumpkin (labeled `A`), sunflower (`B`), peanut (`C`), olive (`D`), soybean (`E`), rapeseed (`F`) or corn (`G`). There were 96 data points contained in their Table 1 with known results. The breakdown of the classes is given below:
```
data(oil)
dim(fattyAcids)
```
```
## [1] 96 7
```
```
table(oilType)
```
```
## oilType
## A B C D E F G
## 37 26 3 7 11 10 2
```
As a note, the paper states on page 32 that there are 37 unknown samples while the table on pages 33 and 34 shows that there are 34 unknowns.
23\.6 German Credit Data
------------------------
Data from Dr. Hans Hofmann of the University of Hamburg and stored at the [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29).
These data have two classes for the creditworthiness: good or bad. There are predictors related to attributes such as: checking account status, duration, credit history, purpose of the loan, amount of the loan, savings accounts or bonds, employment duration, installment rate as a percentage of disposable income, personal information, other debtors/guarantors, residence duration, property, age, other installment plans, housing, number of existing credits, job information, number of people liable to provide maintenance for, telephone, and foreign worker status.
Many of these predictors are discrete and have been expanded into several 0/1 indicator variables.
```
library(caret)
data(GermanCredit)
## Show the first 10 columns
str(GermanCredit[, 1:10])
```
```
## 'data.frame': 1000 obs. of 10 variables:
## $ status : Factor w/ 4 levels "... < 100 DM",..: 1 2 4 1 1 4 4 2 4 2 ...
## $ duration : num 6 48 12 42 24 36 24 36 12 30 ...
## $ credit_history : Factor w/ 5 levels "no credits taken/all credits paid back duly",..: 5 3 5 3 4 3 3 3 3 5 ...
## $ purpose : Factor w/ 10 levels "car (new)","car (used)",..: 5 5 8 4 1 8 4 2 5 1 ...
## $ amount : num 1169 5951 2096 7882 4870 ...
## $ savings : Factor w/ 5 levels "... < 100 DM",..: 5 1 1 1 1 5 3 1 4 1 ...
## $ employment_duration: Ord.factor w/ 5 levels "unemployed"<"... < 1 year"<..: 5 3 4 4 3 3 5 3 4 1 ...
## $ installment_rate : num 4 2 2 2 3 2 3 2 2 4 ...
## $ personal_status_sex: Factor w/ 5 levels "male : divorced/separated",..: 3 2 3 3 3 3 3 3 1 4 ...
## $ other_debtors : Factor w/ 3 levels "none","co-applicant",..: 1 1 1 3 1 1 1 1 1 1 ...
```
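Continuing from the objects loaded above, the class balance of the outcome can be checked as follows (a sketch; the outcome column name `Class` is an assumption, as it is not among the ten columns shown):

```
table(GermanCredit$Class) # assumed outcome column: good vs. bad credit
```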
23\.7 Kelly Blue Book
---------------------
[Kuiper (2008\)](http://www.amstat.org/publications/jse/v16n3/datasets.kuiper.html) collected Kelly Blue Book resale data for 804 GM cars from the 2005 model year.
`cars` is a data frame of the suggested retail price (column `Price`) and various characteristics of each car (columns `Mileage`, `Cylinder`, `Doors`, `Cruise`, `Sound`, `Leather`, `Buick`, `Cadillac`, `Chevy`, `Pontiac`, `Saab`, `Saturn`, `convertible`, `coupe`, `hatchback`, `sedan` and `wagon`).
```
data(cars)
str(cars)
```
```
## 'data.frame': 804 obs. of 18 variables:
## $ Price : num 22661 21725 29143 30732 33359 ...
## $ Mileage : int 20105 13457 31655 22479 17590 23635 17381 27558 25049 17319 ...
## $ Cylinder : int 6 6 4 4 4 4 4 4 4 4 ...
## $ Doors : int 4 2 2 2 2 2 2 2 2 4 ...
## $ Cruise : int 1 1 1 1 1 1 1 1 1 1 ...
## $ Sound : int 0 1 1 0 1 0 1 0 0 0 ...
## $ Leather : int 0 0 1 0 1 0 1 1 0 1 ...
## $ Buick : int 1 0 0 0 0 0 0 0 0 0 ...
## $ Cadillac : int 0 0 0 0 0 0 0 0 0 0 ...
## $ Chevy : int 0 1 0 0 0 0 0 0 0 0 ...
## $ Pontiac : int 0 0 0 0 0 0 0 0 0 0 ...
## $ Saab : int 0 0 1 1 1 1 1 1 1 1 ...
## $ Saturn : int 0 0 0 0 0 0 0 0 0 0 ...
## $ convertible: int 0 0 1 1 1 1 1 1 1 0 ...
## $ coupe : int 0 1 0 0 0 0 0 0 0 0 ...
## $ hatchback : int 0 0 0 0 0 0 0 0 0 0 ...
## $ sedan : int 1 0 0 0 0 0 0 0 0 1 ...
## $ wagon : int 0 0 0 0 0 0 0 0 0 0 ...
```
23\.8 Cell Body Segmentation Data
---------------------------------
[Hill, LaPan, Li and Haney (2007\)](http://www.biomedcentral.com/1471-2105/8/340) develop models to predict which cells in a high content screen were well segmented. The data consist of 119 imaging measurements on 2019 cells. The original analysis used 1009 cells for training and 1010 as a test set (see the column called `Case`).
The outcome class is contained in a factor variable called `Class` with levels `PS` for poorly segmented and `WS` for well segmented.
```
data(segmentationData)
str(segmentationData[,1:10])
```
```
## 'data.frame': 2019 obs. of 10 variables:
## $ Cell : int 207827637 207932307 207932463 207932470 207932455 207827656 207827659 207827661 207932479 207932480 ...
## $ Case : Factor w/ 2 levels "Test","Train": 1 2 2 2 1 1 1 1 1 1 ...
## $ Class : Factor w/ 2 levels "PS","WS": 1 1 2 1 1 2 2 1 2 2 ...
## $ AngleCh1 : num 143.25 133.75 106.65 69.15 2.89 ...
## $ AreaCh1 : int 185 819 431 298 285 172 177 251 495 384 ...
## $ AvgIntenCh1 : num 15.7 31.9 28 19.5 24.3 ...
## $ AvgIntenCh2 : num 4.95 206.88 116.32 102.29 112.42 ...
## $ AvgIntenCh3 : num 9.55 69.92 63.94 28.22 20.47 ...
## $ AvgIntenCh4 : num 2.21 164.15 106.7 31.03 40.58 ...
## $ ConvexHullAreaRatioCh1: num 1.12 1.26 1.05 1.2 1.11 ...
```
23\.9 Sacramento House Price Data
---------------------------------
This data frame contains house and sale price data for 932 homes in Sacramento CA. The original data were obtained from the website for the [SpatialKey software](https://support.spatialkey.com/spatialkey-sample-csv-data). From their website: “The Sacramento real estate transactions file is a list of 985 real estate transactions in the Sacramento area reported over a five\-day period, as reported by the Sacramento Bee.” Google was used to fill in missing/incorrect data.
```
data(Sacramento)
str(Sacramento)
```
```
## 'data.frame': 932 obs. of 9 variables:
## $ city : Factor w/ 37 levels "ANTELOPE","AUBURN",..: 34 34 34 34 34 34 34 34 29 31 ...
## $ zip : Factor w/ 68 levels "z95603","z95608",..: 64 52 44 44 53 65 66 49 24 25 ...
## $ beds : int 2 3 2 2 2 3 3 3 2 3 ...
## $ baths : num 1 1 1 1 1 1 2 1 2 2 ...
## $ sqft : int 836 1167 796 852 797 1122 1104 1177 941 1146 ...
## $ type : Factor w/ 3 levels "Condo","Multi_Family",..: 3 3 3 3 3 1 3 3 1 3 ...
## $ price : int 59222 68212 68880 69307 81900 89921 90895 91002 94905 98937 ...
## $ latitude : num 38.6 38.5 38.6 38.6 38.5 ...
## $ longitude: num -121 -121 -121 -121 -121 ...
```
23\.10 Animal Scat Data
-----------------------
[Reid (2015\)](http://www.bioone.org/doi/full/10.2981/wlb.00105) collected data on animal feces in coastal California. The data consist of DNA\-verified species designations as well as fields related to the time and place of the collection and the scat itself. The data frame `scat_orig` contains the original samples, while `scat` contains data on the three main species.
```
data(scat)
str(scat)
```
```
## 'data.frame': 110 obs. of 19 variables:
## $ Species : Factor w/ 3 levels "bobcat","coyote",..: 2 2 1 2 2 2 1 1 1 1 ...
## $ Month : Factor w/ 9 levels "April","August",..: 4 4 4 4 4 4 4 4 4 4 ...
## $ Year : int 2012 2012 2012 2012 2012 2012 2012 2012 2012 2012 ...
## $ Site : Factor w/ 2 levels "ANNU","YOLA": 2 2 2 2 2 2 1 1 1 1 ...
## $ Location : Factor w/ 3 levels "edge","middle",..: 1 1 2 2 1 1 3 3 3 2 ...
## $ Age : int 5 3 3 5 5 5 1 3 5 5 ...
## $ Number : int 2 2 2 2 4 3 5 7 2 1 ...
## $ Length : num 9.5 14 9 8.5 8 9 6 5.5 11 20.5 ...
## $ Diameter : num 25.7 25.4 18.8 18.1 20.7 21.2 15.7 21.9 17.5 18 ...
## $ Taper : num 41.9 37.1 16.5 24.7 20.1 28.5 8.2 19.3 29.1 21.4 ...
## $ TI : num 1.63 1.46 0.88 1.36 0.97 1.34 0.52 0.88 1.66 1.19 ...
## $ Mass : num 15.9 17.6 8.4 7.4 25.4 ...
## $ d13C : num -26.9 -29.6 -28.7 -20.1 -23.2 ...
## $ d15N : num 6.94 9.87 8.52 5.79 7.01 8.28 4.2 3.89 7.34 6.06 ...
## $ CN : num 8.5 11.3 8.1 11.5 10.6 9 5.4 5.6 5.8 7.7 ...
## $ ropey : int 0 0 1 1 0 1 1 0 0 1 ...
## $ segmented: int 0 0 1 0 1 0 1 1 1 1 ...
## $ flat : int 0 0 0 0 0 0 0 0 0 0 ...
## $ scrape : int 0 0 1 0 0 0 1 0 0 0 ...
```
| Machine Learning |
topepo.github.io | https://topepo.github.io/caret/session-information.html |
24 Session Information
======================
This documentation was created on Thu Mar 28 2019 with the following R packages:
```
## ─ Session info ──────────────────────────────────────────────────────────
## setting value
## version R Under development (unstable) (2019-03-18 r76245)
## os macOS High Sierra 10.13.6
## system x86_64, darwin15.6.0
## ui X11
## language (EN)
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz America/New_York
## date 2019-03-28
##
## ─ Packages ──────────────────────────────────────────────────────────────
## package * version date lib
## abind 1.4-5 2016-07-21 [1]
## acepack 1.4.1 2016-10-29 [1]
## AmesHousing * 0.0.3 2017-12-17 [1]
## AppliedPredictiveModeling * 1.1-7 2018-05-22 [1]
## assertthat 0.2.1 2019-03-21 [1]
## backports 1.1.3 2018-12-14 [1]
## base64enc 0.1-3 2015-07-28 [1]
## bitops 1.0-6 2013-08-17 [1]
## bookdown * 0.9 2018-12-21 [1]
## broom 0.5.1 2018-12-05 [1]
## C50 * 0.1.2 2018-05-22 [1]
## Cairo 1.5-9 2015-09-26 [1]
## caret * 6.0-82 2019-03-26 [1]
## caTools * 1.17.1.1 2018-07-20 [1]
## cellranger 1.1.0 2016-07-27 [1]
## checkmate 1.9.1 2019-01-15 [1]
## class 7.3-15 2019-01-01 [1]
## cli 1.1.0 2019-03-19 [1]
## cluster 2.0.7-1 2018-04-13 [1]
## codetools 0.2-16 2018-12-24 [1]
## coin 1.2-2 2017-11-28 [1]
## colorspace 1.4-1 2019-03-18 [1]
## combinat 0.0-8 2012-10-29 [1]
## CORElearn 1.53.1 2018-09-29 [1]
## crayon 1.3.4 2017-09-16 [1]
## crosstalk 1.0.0 2016-12-21 [1]
## Cubist 0.2.2 2019-03-05 [1]
## curl 3.3 2019-01-10 [1]
## data.table 1.12.0 2019-01-13 [1]
## dendextend 1.9.0 2018-10-19 [1]
## DEoptimR 1.0-8 2016-11-19 [1]
## desirability * 2.1 2016-09-22 [1]
## digest 0.6.18 2018-10-10 [1]
## diptest 0.75-7 2016-12-05 [1]
## DMwR * 0.4.1 2013-08-08 [1]
## doMC * 1.3.5 2017-12-12 [1]
## dplyr * 0.8.0.1 2019-02-15 [1]
## DT * 0.5 2018-11-05 [1]
## e1071 * 1.7-0.1 2019-01-21 [1]
## earth * 4.7.0 2019-01-03 [1]
## ellipse 0.4.1 2018-01-05 [1]
## evaluate 0.13 2019-02-12 [1]
## flexmix 2.3-15 2019-02-18 [1]
## forcats * 0.4.0 2019-02-17 [1]
## foreach * 1.4.4 2017-12-12 [1]
## foreign 0.8-71 2018-07-20 [1]
## Formula * 1.2-3 2018-05-03 [1]
## fpc 2.1-11.1 2018-07-20 [1]
## gam 1.16 2018-07-20 [1]
## gbm * 2.1.5 2019-01-14 [1]
## gclus 1.3.2 2019-01-07 [1]
## gdata 2.18.0 2017-06-06 [1]
## generics 0.0.2 2018-11-29 [1]
## ggplot2 * 3.1.0 2018-10-25 [1]
## ggthemes * 4.1.0 2019-02-19 [1]
## glue 1.3.1 2019-03-12 [1]
## gower 0.2.0 2019-03-07 [1]
## gplots 3.0.1.1 2019-01-27 [1]
## gridExtra 2.3 2017-09-09 [1]
## gtable 0.2.0 2016-02-26 [1]
## gtools 3.8.1 2018-06-26 [1]
## haven 2.1.0 2019-02-19 [1]
## heatmaply * 0.15.2 2018-07-06 [1]
## highr 0.8 2019-03-20 [1]
## Hmisc * 4.2-0 2019-01-26 [1]
## hms 0.4.2 2018-03-10 [1]
## htmlTable 1.13.1 2019-01-07 [1]
## htmltools 0.3.6 2017-04-28 [1]
## htmlwidgets 1.3 2018-09-30 [1]
## httpuv 1.4.5.1 2018-12-18 [1]
## httr 1.4.0 2018-12-11 [1]
## igraph 1.2.4 2019-02-13 [1]
## inum 1.0-0 2017-12-12 [1]
## ipred * 0.9-8 2018-11-05 [1]
## iterators * 1.0.10 2018-07-13 [1]
## jpeg 0.1-8 2014-01-23 [1]
## jsonlite 1.6 2018-12-07 [1]
## kernlab * 0.9-27 2018-08-10 [1]
## KernSmooth 2.23-15 2015-06-29 [1]
## klaR * 0.6-14 2018-03-19 [1]
## knitr * 1.22 2019-03-08 [1]
## labeling 0.3 2014-08-23 [1]
## later 0.8.0 2019-02-11 [1]
## lattice * 0.20-38 2018-11-04 [1]
## latticeExtra * 0.6-28 2016-02-09 [1]
## lava 1.6.5 2019-02-12 [1]
## lazyeval 0.2.2 2019-03-15 [1]
## libcoin 1.0-4 2019-02-28 [1]
## lubridate 1.7.4 2018-04-11 [1]
## magrittr 1.5 2014-11-22 [1]
## MASS * 7.3-51.2 2019-03-01 [1]
## Matrix 1.2-16 2019-03-08 [1]
## mboost * 2.9-1 2018-08-22 [1]
## mclust 5.4.2 2018-11-17 [1]
## mda 0.4-10 2017-11-02 [1]
## mime 0.6 2018-10-05 [1]
## miniUI 0.1.1.1 2018-05-18 [1]
## mlbench * 2.1-1 2012-07-10 [1]
## MLmetrics 1.1.1 2016-05-13 [1]
## ModelMetrics 1.2.2 2018-11-03 [1]
## modelr 0.1.4 2019-02-18 [1]
## modeltools * 0.2-22 2018-07-16 [1]
## multcomp 1.4-8 2017-11-08 [1]
## munsell 0.5.0 2018-06-12 [1]
## mvtnorm * 1.0-9 2019-02-28 [1]
## networkD3 * 0.4 2017-03-18 [1]
## nlme * 3.1-137 2018-04-07 [1]
## nnet 7.3-12 2016-02-02 [1]
## nnls 1.4 2012-03-19 [1]
## party * 1.3-2 2019-03-01 [1]
## partykit 1.2-3 2019-01-31 [1]
## pillar 1.3.1 2018-12-15 [1]
## pkgconfig 2.0.2 2018-08-16 [1]
## plotly * 4.8.0 2018-07-20 [1]
## plotmo * 3.5.2 2019-01-02 [1]
## plotrix * 3.7-4 2018-10-03 [1]
## pls * 2.7-0 2018-08-21 [1]
## plyr * 1.8.4 2016-06-08 [1]
## prabclus 2.2-7 2019-01-17 [1]
## pROC * 1.13.0 2018-09-24 [1]
## prodlim 2018.04.18 2018-04-18 [1]
## promises 1.0.1 2018-04-13 [1]
## proxy * 0.4-23 2019-03-05 [1]
## purrr * 0.3.2 2019-03-15 [1]
## QSARdata * 1.3 2013-07-16 [1]
## quadprog 1.5-5 2013-04-17 [1]
## quantmod 0.4-13 2018-04-13 [1]
## questionr 0.7.0 2018-11-26 [1]
## R6 2.4.0 2019-02-14 [1]
## randomForest * 4.6-14 2018-03-25 [1]
## RColorBrewer * 1.1-2 2014-12-07 [1]
## Rcpp 1.0.1 2019-03-17 [1]
## readr * 1.3.1 2018-12-21 [1]
## readxl 1.3.0 2019-02-15 [1]
## recipes * 0.1.5 2019-03-21 [1]
## registry 0.5 2017-12-03 [1]
## reshape2 * 1.4.3 2017-12-11 [1]
## rlang 0.3.2 2019-03-21 [1]
## rmarkdown 1.12 2019-03-14 [1]
## robustbase 0.93-3 2018-09-21 [1]
## ROCR 1.0-7 2015-03-26 [1]
## ROSE * 0.0-3 2014-07-15 [1]
## rpart 4.1-13 2018-02-23 [1]
## rsample * 0.0.4 2019-01-07 [1]
## rstudioapi 0.10 2019-03-19 [1]
## rvest 0.3.2 2016-06-17 [1]
## sandwich * 2.5-0 2018-08-17 [1]
## scales 1.0.0 2018-08-09 [1]
## seriation 1.2-3 2018-02-05 [1]
## sessioninfo * 1.1.1.9000 2019-03-26 [1]
## shiny 1.2.0 2018-11-02 [1]
## stabs * 0.6-3 2017-07-19 [1]
## stringi 1.4.3 2019-03-12 [1]
## stringr * 1.4.0 2019-02-10 [1]
## strucchange * 1.5-1 2015-06-06 [1]
## survival * 2.43-3 2018-11-26 [1]
## TeachingDemos * 2.10 2016-02-12 [1]
## TH.data 1.0-10 2019-01-21 [1]
## tibble * 2.1.1 2019-03-16 [1]
## tidyr * 0.8.3 2019-03-01 [1]
## tidyselect 0.2.5 2018-10-11 [1]
## tidyverse * 1.2.1 2017-11-14 [1]
## timeDate 3043.102 2018-02-21 [1]
## trimcluster 0.1-2.1 2018-07-20 [1]
## TSP 1.1-6 2018-04-30 [1]
## TTR 0.23-4 2018-09-20 [1]
## viridis * 0.5.1 2018-03-29 [1]
## viridisLite * 0.3.0 2018-02-01 [1]
## webshot 0.5.1 2018-09-28 [1]
## whisker 0.3-2 2013-04-28 [1]
## withr 2.1.2 2018-03-15 [1]
## xfun 0.5 2019-02-20 [1]
## xml2 1.2.0 2018-01-24 [1]
## xtable 1.8-3 2018-08-29 [1]
## xts 0.11-2 2018-11-05 [1]
## yaml 2.2.0 2018-07-25 [1]
## zoo * 1.8-4 2018-09-19 [1]
## source
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## local
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## Github (r-lib/sessioninfo@dfb3ea8)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
## CRAN (R 3.6.0)
##
## [1] /Library/Frameworks/R.framework/Versions/3.6/Resources/library
```
| Machine Learning |
schochastics.github.io | https://schochastics.github.io/R4SNA/network-data.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/descriptives-basic.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/centrality-basic.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/centrality-advanced.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/clustering.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/two-mode-networks.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/signed-networks.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/visualization.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/ggraph-basics.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/ggraph-advanced.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/enhance-viz.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/tidygraph.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/tidygraph-basics.html | Network Analysis |
|
schochastics.github.io | https://schochastics.github.io/R4SNA/tidygraph-descriptive.html | Network Analysis |
|
dereksonderegger.github.io | https://dereksonderegger.github.io/570L/1-introduction.html |
Chapter 1 Introduction
======================
This first chapter will serve as a “crash course” in R and we will superficially introduce `data.frames`, simple data manipulations and graphing, and producing a reasonable document for your output. These topics will each be covered in greater detail, but it is helpful to get the basic ideas first.
R is an open\-source program that is commonly used in statistics. It runs on almost every platform, is completely free, and is available at [www.r\-project.org](www.r-project.org). Most of the cutting\-edge statistical research is first available in R.
R is a script\-based language, so there is no point and click interface. (Actually there are packages that attempt to provide a point and click interface, but they are still somewhat primitive.) While the initial learning curve will be steeper, understanding how to write scripts will be valuable because it leaves a clear description of the steps you performed in your data analysis. Typically you will want to write a script in a separate file and then run individual lines. This saves you from having to retype a bunch of commands and speeds up the debugging process.
Finding help about a certain function is very easy. At the prompt, just type `help(function.name)` or `?function.name`. If you don’t know the name of the function, your best bet is to go to the web page www.rseek.org which will search various R resources for your keyword(s). Another great resource is the coding question and answer site [stackoverflow](http://stackoverflow.com).
The basic editor that comes with R works fairly well, but you should consider running R through the program RStudio, which is located at [rstudio.com](http://www.rstudio.org). This is a completely free Integrated Development Environment that works on Macs, Windows and a couple of flavors of Linux. It simplifies a bunch of the more annoying aspects of the standard R GUI and supports things like tab completion.
When you first open up R (or RStudio) the console window gives you some information about the version of R you are running and then it gives the prompt `>`. This prompt is waiting for you to input a command. The prompt `+` tells you that the current command is spanning multiple lines. In a script file you might have typed something like this:
```
for( i in 1:5 ){
print(i)
}
```
But when you copy and paste it into the console in R you’ll see something like this:
```
> for (i in 1:5){
+ print(i)
+ }
```
If you type your commands into a file, you won’t type the `>` or `+` prompts. For the rest of the tutorial, I will show the code as you would type it into a script, and any output will be preceded by two hashtags (`##`) to designate that it is output.
1\.1 R as a simple calculator
-----------------------------
Assuming that you have started R on whatever platform you like, you can use R as a simple calculator. At the prompt, type 2\+3 and hit enter. What you should see is the following
```
# Some simple addition
2+3
```
```
## [1] 5
```
In this fashion you can use R as a very capable calculator.
```
6*8
```
```
## [1] 48
```
```
4^3
```
```
## [1] 64
```
```
exp(1) # exp() is the exponential function
```
```
## [1] 2.718282
```
R has most constants and common mathematical functions you could ever want. `sin()`, `cos()`, and other trigonometry functions are available, as are the exponential and log functions `exp()`, `log()`. The absolute value is given by `abs()`, and `round()` will round a value to the nearest integer.
```
pi # the constant 3.14159265...
```
```
## [1] 3.141593
```
```
sin(0)
```
```
## [1] 0
```
```
log(5) # unless you specify the base, R will assume base e
```
```
## [1] 1.609438
```
```
log(5, base=10) # base 10
```
```
## [1] 0.69897
```
Whenever I call a function, there will be some arguments that are mandatory and some that are optional, and the arguments are separated by commas. In the above statements the function `log()` requires at least one argument, and that is the number(s) to take the log of. However, the `base` argument is optional. If you do not specify what base to use, R will use a default value. You can see that R will default to using base \\(e\\) by looking at the help page (by typing `help(log)` or `?log` at the command prompt).
Arguments can be specified via the order in which they are passed or by naming the arguments. Consider the `log()` function, which has arguments `log(x, base=exp(1))`. If I specify which arguments are which using named values, then order doesn’t matter.
```
# Demonstrating order does not matter if you specify
# which argument is which
log(x=5, base=10)
```
```
## [1] 0.69897
```
```
log(base=10, x=5)
```
```
## [1] 0.69897
```
But if we don’t specify which argument is which, R will decide that `x` is the first argument, and `base` is the second.
```
# If not specified, R will assume the second value is the base...
log(5, 10)
```
```
## [1] 0.69897
```
```
log(10, 5)
```
```
## [1] 1.430677
```
When specifying the arguments, I have been using the `name=value` notation, and a student might be tempted to use the `<-` notation here. Don’t do that: the `name=value` notation creates a temporary association for that function call, not a permanent assignment.
1\.2 Assignment
---------------
We need to be able to assign a value to a variable to be able to use it later. R does this by using an arrow `<-` or an equal sign `=`. While R supports either, for readability, I suggest people pick one assignment operator and stick with it. I personally prefer to use the arrow. Variable names cannot start with a number, may not include spaces, and are case sensitive.
```
tau <- 2*pi # create two variables
my.test.var = 5 # notice they show up in 'Environment' tab in RStudio!
tau
```
```
## [1] 6.283185
```
```
my.test.var
```
```
## [1] 5
```
```
tau * my.test.var
```
```
## [1] 31.41593
```
As your analysis gets more complicated, you’ll want to save the results to a variable so that you can access the results later. *If you don’t assign the result to a variable, you have no way of accessing the result.*
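A minimal illustration of this point (the variable name is arbitrary):

```
my.result <- 2 + 3 # the result is stored in my.result, not printed
my.result * 10     # the stored value can be used in later calculations
```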
1\.3 Data frames
----------------
Matrices are great for mathematical operations, but I also want to be able to store data that is not numerical. For example, I might want to store a categorical variable such as manufacturer brand. To generalize our concept of a matrix to include these types of data, we want a way of storing data where it feels just as if we had an Excel spreadsheet, where each row represents an observation and each column represents some information about that observation. We will call this object a `data.frame`.
Perhaps the easiest way to understand data frames is to create one. We will create a `data.frame` that represents an instructor’s grade book, where each row is a student, and each column represents some sort of assessment.
```
Grades <- data.frame(
Name = c('Bob','Jeff','Mary','Valerie'),
Exam.1 = c(90, 75, 92, 85),
Exam.2 = c(87, 71, 95, 81)
)
# Show the data.frame
# View(Grades) # show the data in an Excel-like tab. Doesn't work when knitting
Grades # show the output in the console. This works when knitting
```
```
## Name Exam.1 Exam.2
## 1 Bob 90 87
## 2 Jeff 75 71
## 3 Mary 92 95
## 4 Valerie 85 81
```
R allows two different ways to access elements of the `data.frame`. The first is a matrix\-like notation for accessing particular values.
| Format | Result |
| --- | --- |
| `[a,b]` | Element in row `a` and column `b` |
| `[a,]` | All of row `a` |
| `[,b]` | All of column `b` |
Because the columns have meaning and we have given them column names, it is desirable to access an element by the name of the column as opposed to the column number. In large Excel spreadsheets I often get annoyed trying to remember which column something was in, muttering “Was total biomass in column P or Q?” A system where I can just name the column `Total.Biomass` and be done with it is much nicer to work with, and I make fewer dumb mistakes.
```
Grades[, 2] # print out all of column 2
```
```
## [1] 90 75 92 85
```
```
Grades$Name # The $-sign means to reference a column by its label
```
```
## [1] Bob Jeff Mary Valerie
## Levels: Bob Jeff Mary Valerie
```
Usually we won’t type the data in by hand, but rather load the data from some package.
1\.4 Packages
-------------
One of the greatest strengths of R is that so many people have developed add\-on packages to provide additional functionality. For example, plant community ecologists have a large number of multivariate methods that are useful but were not part of R. So Jari Oksanen got together with some other folks and put together a package of functions that they found useful. The result is the package `vegan`.
To download and install a package from the Comprehensive R Archive Network (CRAN), you just need to ask RStudio to install it via the menu `Tools` \-\> `Install Packages...`. Once there, you just need to give the name of the package and RStudio will download and install it on your computer.
Many major analysis types are available via downloaded packages as well as problem sets from various books (e.g. `Sleuth3` or `faraway`) and can be easily downloaded and installed via the menu.
Once a package is downloaded and installed on your computer, it is available, but it is not loaded into your current R session by default. The reason it isn’t loaded is that there are thousands of packages, some of which are quite large and only used occasionally. So, to improve overall performance, only a few packages are loaded by default and you must explicitly load packages whenever you want to use them. You only need to load them once per session/script.
```
library(vegan) # load the vegan library
```
For a similar performance reason, many packages do not automatically load their datasets unless explicitly asked. Therefore when loading datasets from a package, you might need to do a *two\-step* process of loading the package and then loading the dataset.
```
library(faraway) # load the package into memory
```
```
##
## Attaching package: 'faraway'
```
```
## The following object is masked from 'package:lattice':
##
## melanoma
```
```
data("butterfat") # load the dataset into memory
```
If you don’t need to load any functions from a package and you just want the datasets, you can do it in one step.
```
data('butterfat', package='faraway') # just load the dataset, not anything else
butterfat[1:6, ] # print out the first 6 rows of the data
```
```
## Butterfat Breed Age
## 1 3.74 Ayrshire Mature
## 2 4.01 Ayrshire 2year
## 3 3.77 Ayrshire Mature
## 4 3.78 Ayrshire 2year
## 5 4.10 Ayrshire Mature
## 6 4.06 Ayrshire 2year
```
1\.5 Summarizing Data
---------------------
It is very important to be able to take a data set and produce summary statistics such as the mean and standard deviation of a column. For this sort of manipulation, I use the package `dplyr`. This package allows me to chain together many common actions to form a particular task.
The foundational operations to perform on a data set are:
* Subsetting \- Returns a data set with only particular columns or rows
– `select` \- Selecting a subset of columns by name or column number.
– `filter` \- Selecting a subset of rows from a data frame based on logical expressions.
– `slice` \- Selecting a subset of rows by row number.
* `arrange` \- Re\-ordering the rows of a data frame.
* `mutate` \- Add a new column that is some function of other columns.
* `summarise` \- calculate some summary statistic of a column of data. This collapses a set of rows into a single row.
Each of these operations is a function in the package `dplyr`. These functions all have a similar calling syntax: the first argument is a data set, subsequent arguments describe what to do with the input data frame, and you can refer to the columns without using the `df$column` notation. All of these functions will return a data set.
The `dplyr` package also includes a function that “pipes” commands together. The pipe command `%>%` allows for very readable code. The idea is that the `%>%` operator works by translating the command `a %>% f(b)` to the expression `f(a,b)`. This operator works on any function and was introduced in the `magrittr` package. The beauty of this comes when you have a suite of functions that takes input arguments of the same type as their output. For example if we wanted to start with `x`, and first apply function `f()`, then `g()`, and then `h()`, the usual R command would be `h(g(f(x)))` which is hard to read because you have to start reading at the innermost set of parentheses. Using the pipe command `%>%`, this sequence of operations becomes `x %>% f() %>% g() %>% h()`.
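As a tiny illustration of that translation rule (a sketch; the values are arbitrary):

```
library(dplyr) # the %>% operator is made available by dplyr

# a %>% f(b) is translated to f(a, b):
5 %>% log(base = 10)
log(5, base = 10) # identical result: 0.69897
```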
```
library(dplyr) # load the dplyr package!
Grades # Recall the Grades data
```
```
## Name Exam.1 Exam.2
## 1 Bob 90 87
## 2 Jeff 75 71
## 3 Mary 92 95
## 4 Valerie 85 81
```
```
# The following code takes the Grades data.frame and calculates
# a column for the average exam score, and then sorts the data
# according to the that average score
Grades %>%
mutate( Avg.Score = (Exam.1 + Exam.2) / 2 ) %>%
arrange( Avg.Score )
```
```
## Name Exam.1 Exam.2 Avg.Score
## 1 Jeff 75 71 73.0
## 2 Valerie 85 81 83.0
## 3 Bob 90 87 88.5
## 4 Mary 92 95 93.5
```
Next we consider the summarization function to calculate the mean score for `Exam.1`. Notice that this takes a data frame of four rows, and summarizes it down to just one row that represents the summarized data for all four students.
```
Grades %>%
summarize( Exam.1.mean = mean( Exam.1 ) )
```
```
## Exam.1.mean
## 1 85.5
```
Similarly you could calculate the standard deviation for the exam as well.
```
Grades %>%
summarize( Exam.1.mean = mean( Exam.1 ),
Exam.1.sd = sd( Exam.1 ) )
```
```
## Exam.1.mean Exam.1.sd
## 1 85.5 7.593857
```
Recall the `butterfat` data we loaded earlier.
```
butterfat[1:6, ] # only the first 6 observations
```
```
## Butterfat Breed Age
## 1 3.74 Ayrshire Mature
## 2 4.01 Ayrshire 2year
## 3 3.77 Ayrshire Mature
## 4 3.78 Ayrshire 2year
## 5 4.10 Ayrshire Mature
## 6 4.06 Ayrshire 2year
```
We have 100 observations of different breeds of cows at different ages. We might want to find the mean and standard deviation of the butterfat content for each breed. To do this, we will still use `summarize`, but we will precede it with `group_by(Breed)` to tell the subsequent `dplyr` functions to perform the actions separately for each breed.
```
butterfat %>%
group_by( Breed ) %>%
summarise( Mean = mean(Butterfat),
Std.Dev = sd(Butterfat) )
```
```
## # A tibble: 5 x 3
## Breed Mean Std.Dev
## <fct> <dbl> <dbl>
## 1 Ayrshire 4.06 0.261
## 2 Canadian 4.44 0.366
## 3 Guernsey 4.95 0.483
## 4 Holstein-Fresian 3.67 0.259
## 5 Jersey 5.29 0.599
```
1\.6 Graphing Data
------------------
There are three major “systems” for making graphs in R. The basic plotting commands in R are quite effective, but they do not have a way of being combined easily. Lattice graphics (which the `mosaic` package uses) makes it possible to create some quite complicated graphs, but it is very difficult to make non\-standard graphs. The last package, `ggplot2`, tries not to anticipate what the user wants to do, but rather provides the mechanisms for pulling together different graphical concepts, and the user gets to decide which elements to combine.
To make the most of `ggplot2` it is important to wrap your mind around “The Grammar of Graphics”. Briefly, the act of building a graph can be broken down into three steps.
1. Define what data we are using.
2. What is the major relationship we wish to examine?
3. In what way should we present that relationship? These relationships can be presented in multiple ways, and the process of creating a good graph relies on building layers upon layers of information. For example, we might start with printing the raw data and then overlay a regression line over the top.
Next, it should be noted that `ggplot2` is designed to act on data frames. It is actually hard to just draw three data points and for simple graphs it might be easier to use the base graphing system in R. However for any real data analysis project, the data will already be in a data frame and this is not an annoyance.
One way that `ggplot2` makes it easy to form very complicated graphs is that it provides a large number of basic building blocks that, when stacked upon each other, can produce extremely complicated graphs. A full list is available at [http://docs.ggplot2\.org/current/](http://docs.ggplot2.org/current/) but the following list gives some idea of different building blocks. These different geometries are different ways to display the relationship between variables and can be combined in many interesting ways.
| Geom | Description | Required Aesthetics |
| --- | --- | --- |
| `geom_histogram` | A histogram | `x` |
| `geom_bar` | A barplot | `x` |
| `geom_density` | A density plot of data. (smoothed histogram) | `x` |
| `geom_boxplot` | Boxplots | `x, y` |
| `geom_line` | Draw a line (after sorting x\-values) | `x, y` |
| `geom_path` | Draw a line (without sorting x\-values) | `x, y` |
| `geom_point` | Draw points (for a scatterplot) | `x, y` |
| `geom_smooth` | Add a ribbon that summarizes a scatterplot | `x, y` |
| `geom_ribbon` | Enclose a region, and color the interior | `ymin, ymax` |
| `geom_errorbar` | Error bars | `ymin, ymax` |
| `geom_text` | Add text to a graph | `x, y, label` |
| `geom_label` | Add text to a graph | `x, y, label` |
| `geom_tile` | Create Heat map | `x, y, fill` |
A graph can be built up layer by layer (see the sketch after this list), where:
* Each layer corresponds to a `geom`, each of which requires a dataset and a mapping between an aesthetic and a column of the data set.
+ If you don’t specify either, then the layer inherits everything defined in the `ggplot()` command.
+ You can have different datasets for each layer!
* Layers can be added with a `+`, or you can define two plots and add them together (second one over\-writes anything that conflicts).
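A minimal sketch of the layering idea, using the built\-in `iris` data (introduced more fully below):

```
library(ggplot2)
# Layer 1 draws the raw points; layer 2 overlays a smoothed summary.
# Both layers inherit the data and the aes() mapping from ggplot().
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length)) +
  geom_point() +
  geom_smooth()
```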
### 1\.6\.1 Bar Charts
Bar charts and histograms are how we think about displaying information about a single covariate. That is to say, we are not trying to make a graph of the relationship between \\(x\\) and \\(y\\), but rather trying to understand what values of \\(x\\) are present and how frequently they show up.
For displaying a categorical variable on the x\-axis, a bar chart is a good option. Here we consider a data set that gives the fuel efficiency of different classes of vehicles in two different years. This is a subset of data that the EPA makes available on <http://fueleconomy.gov>. It contains only models which had a new release every year between 1999 and 2008, and therefore represents the most popular cars sold in the US. It includes information for each model for years 1999 and 2008\. The dataset is included in the `ggplot2` package as `mpg`.
```
data(mpg, package='ggplot2') # load the dataset
str(mpg) # print out what columns are present
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 234 obs. of 11 variables:
## $ manufacturer: chr "audi" "audi" "audi" "audi" ...
## $ model : chr "a4" "a4" "a4" "a4" ...
## $ displ : num 1.8 1.8 2 2 2.8 2.8 3.1 1.8 1.8 2 ...
## $ year : int 1999 1999 2008 2008 1999 1999 2008 1999 1999 2008 ...
## $ cyl : int 4 4 4 4 6 6 6 4 4 4 ...
## $ trans : chr "auto(l5)" "manual(m5)" "manual(m6)" "auto(av)" ...
## $ drv : chr "f" "f" "f" "f" ...
## $ cty : int 18 21 20 21 16 18 18 18 16 20 ...
## $ hwy : int 29 29 31 30 26 26 27 26 25 28 ...
## $ fl : chr "p" "p" "p" "p" ...
## $ class : chr "compact" "compact" "compact" "compact" ...
```
First we could summarize the data by how many models there are in the different classes.
```
library(ggplot2) # load the ggplot2 package!
ggplot(data=mpg, aes(x=class)) +
geom_bar()
```
1. The data set we wish to use is specified using `data=mpg`. This is the first argument defined in the function, so you could skip the `data=` part if the input data.frame is the first argument.
2. The column in the data that we wish to investigate is defined in the `aes(x=class)` part. This means the x\-axis will be the car’s class, which is indicated by the column named `class`.
3. The way we want to display this information is using a bar chart.
By default, `geom_bar()` just counts the number of cases and displays how many observations were in each factor level. If I have already summarized the data and just want the bar chart to display a given height, I would use `geom_col()` instead.
### 1\.6\.2 Histograms
Histograms also focus on a single variable and give how frequently particular ranges of the data occur.
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Just as `geom_bar` by default calculates the number of observations in each level of my factor of interest, `geom_histogram` breaks up the x\-axis into distinct bins (by default, 30 bins), then counts how many observations fall into each bin and displays that count as a bar. To change the number of bins, we could either tell it the number of bins (e.g. `bins=20`) or the width of each bin (e.g. `binwidth=4`).
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram(bins=8) # 8 bins
```
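Equivalently, an added sketch using the bin width instead of the number of bins:
```
ggplot(mpg, aes(x=hwy)) +
  geom_histogram(binwidth=4) # each bin spans 4 units of highway mpg
```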
### 1\.6\.3 Scatterplots
There is a famous data set that contains 150 observations from three species of iris. For each observation the length and width of the flower’s petals and sepals were measured. This dataset is available in R as `iris` and is always loaded. We’ll make a very simple scatterplot of `Sepal.Length` versus `Petal.Length`, which are two columns in the data set.
```
data(iris) # load the iris dataset that comes with R
str(iris) # what columns do we have to play with...
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( )
```
1. The data set we wish to use is specified using `data=iris`.
2. The relationship we want to explore is `x=Sepal.Length` and `y=Petal.Length`. This means the x\-axis will be the Sepal Length and the y\-axis will be the Petal Length.
3. The way we want to display this relationship is through graphing 1 point for every observation.
We can define other attributes that might reflect other aspects of the data. For example, we might want the color of the data points to change dynamically based on the species of iris.
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length, color=Species) ) +
geom_point( )
```
The `aes()` command inside the previous section of code is quite mysterious. The way to think about `aes()` is that it gives you a way to define relationships that are data dependent. In the previous graph, the x\-value and y\-value for each point were defined dynamically by the data, as was the color. If we just wanted all the data points to be colored blue and larger, then the following code would do that:
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( color='blue', size=4 )
```
The important part isn’t that color and size were defined in the `geom_point()` but that they were defined outside of an `aes()` function!
1. Anything set inside an `aes()` command will be of the form `attribute=Column_Name` and will change based on the data.
2. Anything set outside an `aes()` command will be in the form `attribute=value` and will be fixed.
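An added sketch combining both rules: here the color is data dependent (inside `aes()`) while the size is fixed (outside `aes()`).
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length, color=Species) ) +
  geom_point( size=3 ) # color varies with Species; every point gets size 3
```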
### 1\.6\.4 Box Plots
Boxplots are a common way to show a categorical variable on the x\-axis and continuous on the y\-axis.
```
ggplot(iris, aes(x=Species, y=Petal.Length)) +
geom_boxplot()
```
The boxes show the \\(25^{th}\\), \\(50^{th}\\), and \\(75^{th}\\) percentiles, and the lines coming off the box extend to the smallest and largest non\-outlier observations.
1\.7 Scripts and RMarkdown
--------------------------
One of the worst things about a pocket calculator is there is no good way to go back several steps and easily see what you did or fix a mistake (there is nothing more annoying than re\-typing something because of a typo). To avoid these issues I always work with script (or RMarkdown) files instead of typing directly into the console. You will quickly learn that it is impossible to write R code correctly the first time and you’ll save yourself a huge amount of work by just embracing scripts (and RMarkdown) from the beginning. Furthermore, having a script file fully documents how you did your analysis, which can help when writing the methods section of a paper. Finally, having a script makes it easy to re\-run an analysis after a change in the data (additional data values, transformed data, or removal of outliers).
It often makes your script more readable if you break a single command up into multiple lines. R will disregard all whitespace (including line breaks) so you can safely spread your command over multiple lines. Finally, it is useful to leave comments in the script for things such as explaining a tricky step, noting who wrote the code and when, or why you chose a particular name for a variable. The `#` sign denotes that the rest of the line is a comment and R will ignore it.
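Here is a small added sketch of one command spread across several lines, with a comment:
```
# Average petal length of the iris flowers
avg.petal <- mean(
  iris$Petal.Length # R ignores the line break; this is all one command
)
```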
### 1\.7\.1 R Scripts (.R files)
The first type of file that we’ll discuss is a traditional script file. To create a new .R script in RStudio go to `File -> New File -> R Script`. This opens a new window in RStudio where you can type commands and functions as in a common text editor. Type whatever you like in the script window and then you can execute the code line by line (using the run button or its keyboard shortcut to run the highlighted region or whatever line the cursor is on) or the entire script (using the source button). Other options for which piece of code to run are available under the Code dropdown box.
An R script for a homework assignment might look something like this:
```
# Problem 1
# Calculate the log of a couple of values and make a plot
# of the log function from 0 to 3
log(0)
log(1)
log(2)
x <- seq(.1,3, length=1000)
plot(x, log(x))
# Problem 2
# Calculate the exponential function of a couple of values
# and make a plot of the function from -2 to 2
exp(-2)
exp(0)
exp(2)
x <- seq(-2, 2, length=1000)
plot(x, exp(x))
```
This looks perfectly acceptable as a way of documenting what you did, but this script file doesn’t contain the actual results of commands I ran, nor does it show you the plots. Also anytime I want to comment on some output, it needs to be offset with the commenting character `#`. It would be nice to have both the commands and the results merged into one document. This is what the R Markdown file does for us.
### 1\.7\.2 R Markdown (.Rmd files)
When I was a graduate student, I had to tediously copy and paste tables of output from the R console and figures I had made into my Microsoft Word document. Far too often I would realize I had made a small mistake in part (b) of a problem and would have to go back, correct my mistake, and then redo all the laborious copying. I often wished that I could write both the code for my statistical analysis and the long discussion about the interpretation all in the same document so that I could just re\-run the analysis with a click of a button and all the tables and figures would be updated by magic. Fortunately that magic now exists.
To create a new R Markdown document, we use the `File -> New File -> R Markdown...` dropdown option and a menu will appear asking you for the document title, author, and preferred output type. In order to create a PDF, you’ll need to have LaTeX installed, but the HTML output nearly always works and I’ve had good luck with the MS Word output as well.
R Markdown is an implementation of the Markdown syntax, which makes it extremely easy to write webpages and give instructions for typesetting sorts of things. This syntax was extended to allow us to embed R commands directly into the document. Perhaps the easiest way to understand the syntax is to look at an example at the [RMarkdown website](http://rmarkdown.rstudio.com).
The R code in my document is nicely separated from my regular text using three backticks and an instruction that it is R code that needs to be evaluated. The output of this document looks good as an HTML, PDF, or MS Word document. I have actually created this entire book using RMarkdown.
1\.8 Exercises
--------------
Create an RMarkdown file that solves the following exercises.
1. Calculate \\(\\log\\left(6\.2\\right)\\) first using base \\(e\\) and second using base \\(10\\). To figure out how to do different bases, it might be helpful to look at the help page for the `log` function.
2. Calculate the square root of 2 and save the result as the variable named `sqrt2`. Have R display the decimal value of `sqrt2`. *Hint: use Google to find the square root function. Perhaps search on the keywords “R square root function”.*
3. This exercise walks you through installing a package with all the datasets used in the textbook *The Statistical Sleuth*.
1. Install the package `Sleuth3` on your computer using RStudio.
2. Load the package using the `library()` command.
3. Print out the dataset `case0101`
Chapter 2 Vectors
=================
R operates on vectors, where we think of a vector as a collection of objects, usually numbers. The first thing we need to be able to do is define an arbitrary collection using the `c()` function (short for *combine*).
```
# Define the vector of numbers 1, ..., 4
c(1,2,3,4)
```
```
## [1] 1 2 3 4
```
There are many other ways to define vectors. The function `rep(x, times)` just repeats `x` the number of times specified by `times`.
```
rep(2, 5) # repeat 2 five times... 2 2 2 2 2
```
```
## [1] 2 2 2 2 2
```
```
rep( c('A','B'), 3 ) # repeat A B three times A B A B A B
```
```
## [1] "A" "B" "A" "B" "A" "B"
```
Finally, we can also define a sequence of numbers using the `seq(from, to, by, length.out)` function, which expects the user to supply 3 out of the 4 possible arguments. `from` is the starting point of the sequence, `to` is the ending point, `by` is the difference between any two successive elements, and `length.out` is the total number of elements in the vector.
```
seq(from=1, to=4, by=1)
```
```
## [1] 1 2 3 4
```
```
seq(1,4) # 'by' has a default of 1
```
```
## [1] 1 2 3 4
```
```
1:4 # a shortcut for seq(1,4)
```
```
## [1] 1 2 3 4
```
```
seq(1,5, by=.5)
```
```
## [1] 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0
```
```
seq(1,5, length.out=11)
```
```
## [1] 1.0 1.4 1.8 2.2 2.6 3.0 3.4 3.8 4.2 4.6 5.0
```
If we have two vectors and we wish to combine them, we can again use the `c()` function.
```
vec1 <- c(1,2,3)
vec2 <- c(4,5,6)
vec3 <- c(vec1, vec2)
vec3
```
```
## [1] 1 2 3 4 5 6
```
2\.1 Accessing Vector Elements
------------------------------
Suppose I have defined a vector
```
foo <- c('A', 'B', 'C', 'D', 'F')
```
and I am interested in accessing whatever is in the first spot of the vector, or perhaps the 3rd or 5th element. To do that we use the `[]` notation, where the square brackets represent a subscript.
```
foo[1] # First element in vector foo
```
```
## [1] "A"
```
```
foo[4] # Fourth element in vector foo
```
```
## [1] "D"
```
This subscripting notation can get more complicated. For example, I might want the 2nd and 3rd elements, or the 3rd through 5th elements.
```
foo[c(2,3)] # elements 2 and 3
```
```
## [1] "B" "C"
```
```
foo[ 3:5 ] # elements 3 to 5
```
```
## [1] "C" "D" "F"
```
Finally, I might be interested in getting the entire vector except for a certain element. To do this, R allows us to use the square bracket notation with a negative index number.
```
foo[-1] # everything but the first element
```
```
## [1] "B" "C" "D" "F"
```
```
foo[ -1*c(1,2) ] # everything but the first two elements
```
```
## [1] "C" "D" "F"
```
Now is a good time to address what the `[1]` is doing in our output. Because vectors are often very long and might span multiple lines, R is trying to help us by telling us the index number of the left\-most value. If we have a very long vector, the second line of values will start with the index of the first value on the second line.
```
# The letters vector is a vector of all 26 lower-case letters
letters
```
```
## [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j" "k" "l" "m" "n" "o" "p" "q"
## [18] "r" "s" "t" "u" "v" "w" "x" "y" "z"
```
Here the `[1]` is telling me that `a` is the first element of the vector and the `[18]` is telling me that `r` is the 18th element of the vector.
2\.2 Scalar Functions Applied to Vectors
----------------------------------------
It is very common to want to perform some operation on all the elements of a vector simultaneously. For example, I might want to take the absolute value of every element. Functions that are inherently defined on single values will almost always apply the function to each element of the vector if given a vector.
```
x <- -5:5
x
```
```
## [1] -5 -4 -3 -2 -1 0 1 2 3 4 5
```
```
abs(x)
```
```
## [1] 5 4 3 2 1 0 1 2 3 4 5
```
```
exp(x)
```
```
## [1] 6.737947e-03 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01
## [6] 1.000000e+00 2.718282e+00 7.389056e+00 2.008554e+01 5.459815e+01
## [11] 1.484132e+02
```
2\.3 Vector Algebra
-------------------
All algebra done with vectors is performed element\-wise by default. (For matrix and vector multiplication as usually defined by mathematicians, use `%*%` instead of `*`.) So two vectors added together result in their individual elements being summed.
```
x <- 1:4
y <- 5:8
x + y
```
```
## [1] 6 8 10 12
```
```
x * y
```
```
## [1] 5 12 21 32
```
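As an added illustration of the difference, `%*%` applied to these same vectors gives the usual mathematical inner product rather than element\-wise multiplication:
```
x <- 1:4
y <- 5:8
x %*% y # 1*5 + 2*6 + 3*7 + 4*8 = 70, returned as a 1x1 matrix
```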
R does another trick when doing vector algebra. If the lengths of the two vectors don’t match, R will recycle the elements of the shorter vector to come up with a vector the same length as the longer one. This is potentially confusing, but is most often used when adding a long vector to a vector of length 1\.
```
x <- 1:4
x + 1
```
```
## [1] 2 3 4 5
```
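As a further added sketch, a length\-2 vector added to a length\-4 vector is recycled to `c(10, 20, 10, 20)` before the addition happens:
```
c(10, 20) + 1:4 # recycled: (10+1) (20+2) (10+3) (20+4), giving 11 22 13 24
```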
2\.4 Commonly Used Vector Functions
-----------------------------------
| Function | Result |
| --- | --- |
| `min(x)` | Minimum value in vector x |
| `max(x)` | Maximum value in vector x |
| `length(x)` | Number of elements in vector x |
| `sum(x)` | Sum of all the elements in vector x |
| `mean(x)` | Mean of the elements in vector x |
| `median(x)` | Median of the elements in vector x |
| `var(x)` | Variance of the elements in vector x |
| `sd(x)` | Standard deviation of the elements in x |
Putting this all together, we can perform tedious calculations with ease. To demonstrate how scalars, vectors, and functions of them work together, we will calculate the variance of 5 numbers. Recall that variance is defined as \\[ Var\\left(x\\right)\=\\frac{\\sum\_{i\=1}^{n}\\left(x\_{i}\-\\bar{x}\\right)^{2}}{n\-1} \\]
```
x <- c(2,4,6,8,10)
xbar <- mean(x) # calculate the mean
xbar
```
```
## [1] 6
```
```
x - xbar # calculate the errors
```
```
## [1] -4 -2 0 2 4
```
```
(x-xbar)^2
```
```
## [1] 16 4 0 4 16
```
```
sum( (x-xbar)^2 )
```
```
## [1] 40
```
```
n <- length(x) # how many data points do we have
n
```
```
## [1] 5
```
```
sum((x-xbar)^2)/(n-1) # calculating the variance by hand
```
```
## [1] 10
```
```
var(x) # Same thing using the built-in variance function
```
```
## [1] 10
```
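As a final added check, the standard deviation in the table above is just the square root of this variance:
```
sqrt( var(x) ) # 3.162278
sd(x) # same value
```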
2\.5 Exercises
--------------
1. Create a vector of three elements (2,4,6\) and name that vector `vec_a`. Create a second vector, `vec_b`, that contains (8,10,12\). Add these two vectors together and name the result `vec_c`.
2. Create a vector, named `vec_d`, that contains only two elements (14,20\). Add this vector to `vec_a`. What is the result and what do you think R did (look up the recycling rule using Google)? What is the warning message that R gives you?
3. Next add 5 to the vector vec\_a. What is the result and what did R do? Why doesn’t it give you a warning message similar to what you saw in the previous problem?
4. Generate the vector of integers \\(\\left\\{ 1,2,\\dots,5\\right\\}\\) in two different ways.
1. First using the `seq()` function
2. Using the `a:b` shortcut.
5. Generate the vector of even numbers \\(\\left\\{ 2,4,6,\\dots,20\\right\\}\\)
1. Using the seq() function and
2. Using the a:b shortcut and some subsequent algebra. *Hint: Generate the vector 1\-10 and then multiply it by 2*.
6. Generate a vector of 21 elements that are evenly placed between 0 and 1 using the `seq()` command and name this vector `x`.
7. Generate the vector \\(\\left\\{ 2,4,8,2,4,8,2,4,8\\right\\}\\) using the `rep()` command to replicate the vector c(2,4,8\).
8. Generate the vector \\(\\left\\{ 2,2,2,2,4,4,4,4,8,8,8,8\\right\\}\\) using the `rep()` command. You might need to check the help file for rep() to see all of the options that rep() will accept. In particular, look at the optional argument `each=`.
9. The vector `letters` is a built\-in vector to R and contains the lower case English alphabet.
1. Extract the 9th element of the letters vector.
2. Extract the sub\-vector that contains the 9th, 11th, and 19th elements.
3. Extract the sub\-vector that contains everything except the last two elements.
Chapter 3 Statistical Tables
============================
Statistics makes use of a wide variety of distributions and before the days of personal computers, every statistician had books with hundreds and hundreds of pages of tables allowing them to look up particular values. Fortunately in the modern age, we don’t need those books and tables, but we do still need to access those values. To make life easier and consistent for R users, every distribution is accessed in the same manner.
3\.1 `mosaic::plotDist()` function
----------------------------------
The `mosaic` package provides a very useful routine for understanding a distribution. The `plotDist()` function takes the R name of the distribution along with whatever parameters are necessary for that function and shows the distribution. For reference, below is a list of common distributions, their R names, and the parameters each requires.
| Distribution | Stem | Parameters | Parameter Interpretation |
| --- | --- | --- | --- |
| Binomial | `binom` | `size` `prob` | Number of Trials Probability of Success (per Trial) |
| Exponential | `exp` | `rate` | Rate of decay (the mean is 1/rate) |
| Normal | `norm` | `mean=0` `sd=1` | Center of the distribution Standard deviation |
| Uniform | `unif` | `min=0` `max=1` | Minimum of the distribution Maximum of the distribution |
For example, to see the normal distribution with mean \\(\\mu\=10\\) and standard deviation \\(\\sigma\=2\\), we use
```
library(mosaic)
plotDist('norm', mean=10, sd=2)
```
This function works for discrete distributions as well.
```
plotDist('binom', size=10, prob=.3)
```
3\.2 Base R functions
---------------------
All the probability distributions available in R are accessed in exactly the same way, using a `d`\-function, `p`\-function, `q`\-function, and `r`\-function. For the rest of this section suppose that \\(X\\) is a random variable from the distribution of interest and \\(x\\) is some possible value that \\(X\\) could take on. Notice that the `p`\-function is the inverse of the `q`\-function.
| Function | Result |
| --- | --- |
| `d`\-function(x) | The height of the probability distribution/density at given \\(x\\) |
| `p`\-function(x) | Find \\(q\\) such that \\(P\\left(X\\le x\\right) \= q\\) where \\(x\\) is given |
| `q`\-function(q) | Find \\(x\\) such that \\(P\\left(X\\le x\\right) \= q\\) where \\(q\\) is given |
| `r`\-function(n) | Generate \\(n\\) random observations from the distribution |
For each distribution in R, there will be this set of functions, but we replace the “\-function” with the distribution name or a shortened version: `norm`, `exp`, `binom`, `t`, and `f` are the names for the normal, exponential, binomial, T, and F distributions. Furthermore, most distributions have additional parameters that define the distribution and are also passed as arguments to these functions, although, if a reasonable default value for a parameter exists, there will be a default.
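Before walking through each type in detail below, here is a minimal sketch of all four applied to the standard normal distribution:
```
dnorm(0) # density height at 0: 1/sqrt(2*pi), about 0.3989
pnorm(0) # P(Z <= 0) = 0.5
qnorm(0.5) # the z such that P(Z <= z) = 0.5, namely 0
rnorm(3) # three random draws (your values will differ)
```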
### 3\.2\.1 d\-function
The purpose of the d\-function is to calculate the height of a probability mass function or a density function (The “d” actually stands for density). Notice that for discrete distributions, this is the probability of observing that particular value, while for continuous distributions, the height doesn’t have a nice physical interpretation.
We start with an example of the Binomial distribution. For \\(X\\sim Binomial\\left(n\=10,\\pi\=.2\\right)\\) suppose we wanted to know \\(P(X\=0\)\\)? We know the probability mass function is \\\[P\\left(X\=x\\right)\={n \\choose x}\\pi^{x}\\left(1\-\\pi\\right)^{n\-x}\\] thus \\\[P\\left(X\=0\\right) \= {10 \\choose 0}\\,0\.2^{0}\\left(0\.8\\right)^{10} \= 1\\cdot1\\cdot0\.8^{10} \\approx 0\.107\\] but that calculation is fairly tedious. To get R to do the same calculation, we just need the height of the probability mass function at \\(0\\). To do this calculation, we need to know the x value we are interested in along with the distribution parameters \\(n\\) and \\(\\pi\\).
The first thing we should do is check the help file for the binomial distribution functions to see what parameters are needed and what they are named.
```
?dbinom
```
The help file shows us the parameters \\(n\\) and \\(\\pi\\) are called `size` and `prob`, respectively. So to calculate the probability that \\(X\=0\\) we would use the following command:
```
dbinom(0, size=10, prob=.2)
```
```
## [1] 0.1073742
```
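We can confirm this matches the hand calculation by evaluating the probability mass function directly; R’s `choose()` function computes the binomial coefficient:
```
choose(10, 0) * 0.2^0 * 0.8^10 # same as dbinom(0, size=10, prob=.2): 0.1073742
```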
### 3\.2\.2 p\-function
Often we are interested in the probability of observing some value or anything less (in probability theory, we call this the cumulative distribution function, or CDF). P\-values will be calculated this way, so we want a nice easy way to do this.
To start our example with the binomial distribution, again let \\(X\\sim Binomial\\left(n\=10,\\pi\=0\.2\\right)\\). Suppose I want to know what the probability of observing a 0, 1, or 2? That is, what is \\(P\\left(X\\le2\\right)\\)? I could just find the probability of each and add them up.
```
dbinom(0, size=10, prob=.2) + # P(X==0) +
dbinom(1, size=10, prob=.2) + # P(X==1) +
dbinom(2, size=10, prob=.2) # P(X==2)
```
```
## [1] 0.6777995
```
but this would get tedious for binomial distributions with a large number of trials. The shortcut is to use the `pbinom()` function.
```
pbinom(2, size=10, prob=.2)
```
```
## [1] 0.6777995
```
For discrete distributions, you must be careful because R will give you the probability of less than or equal to 2\. If you wanted strictly less than two, you should use `pbinom(1, size=10, prob=.2)`, because \\(P\\left(X\<2\\right)\=P\\left(X\\le1\\right)\\).
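A minimal sketch confirming the strict\-inequality version two ways:
```
pbinom(1, size=10, prob=.2) # P(X < 2) = P(X <= 1), about 0.3758
dbinom(0, size=10, prob=.2) + # or sum the individual
dbinom(1, size=10, prob=.2) # probabilities by hand
```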
The normal distribution works similarly. Suppose \\(Z\\sim N\\left(0,1\\right)\\) and we want to know \\(P\\left(Z\\le\-1\\right)\\).
The answer is easily found via `pnorm()`.
```
pnorm(-1)
```
```
## [1] 0.1586553
```
Notice for continuous random variables, the probability \\(P\\left(Z\=\-1\\right)\=0\\) so we can ignore the issue of “less than” vs “less than or equal to”.
Oftentimes we will want to know the probability of being greater than some value. That is, we might want to find \\(P\\left(Z \\ge \-1\\right)\\). For the normal distribution, there are a number of tricks we could use. Notably \\\[P\\left(Z\\ge\-1\\right) \= P\\left(Z\\le1\\right)\=1\-P\\left(Z\<\-1\\right)\\] but sometimes I’m lazy and would like to tell R to give me the area to the right instead of the area to the left (which is the default). This can be done by setting the argument `lower.tail=FALSE`.
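For example, a quick sketch of the upper\-tail shortcut:
```
pnorm(-1, lower.tail=FALSE) # P(Z >= -1), about 0.8413
1 - pnorm(-1) # the equivalent "1 minus" calculation
```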
The `mosaic` package includes an augmented version of the `pnorm()` function called `xpnorm()` that calculates the same number but includes some extra information and produces a pretty graph to help us understand what we just calculated and do the tedious “1 minus” calculation to find the upper area. Fortunately this x\-variant exists for the Normal, Chi\-squared, F, Gamma continuous distributions and the discrete Poisson, Geometric, and Binomial distributions.
```
library(mosaic)
xpnorm(-1)
```
```
##
```
```
## If X ~ N(0, 1), then
```
```
## P(X <= -1) = P(Z <= -1) = 0.1587
```
```
## P(X > -1) = P(Z > -1) = 0.8413
```
```
##
```
```
## [1] 0.1586553
```
### 3\.2\.3 `q`\-function
In class, we will also find ourselves asking for the quantiles of a distribution. Percentiles are by definition 1/100, 2/100, etc., but if I am interested in something that isn’t an even division of 100, we get fancy and call them quantiles. This is a small semantic quibble, but we ought to be precise. That being said, I won’t correct somebody if they call these percentiles. For example, I might want to find the 0\.30 quantile, which is the value such that 30% of the distribution is less than it, and 70% is greater. Mathematically, I wish to find the value \\(z\\) such that \\(P(Z\<z)\=0\.30\\).
To find this value in the tables of a book, we would use the table in reverse. R gives us a handy way to do this with the `qnorm()` function, and the mosaic package provides a nice visualization using the augmented `xqnorm()`. Below, I specify that I’m using a function in the `mosaic` package by writing it as `PackageName::FunctionName()`; that isn’t strictly necessary, but it can improve the readability of your code.
```
mosaic::xqnorm(0.30) # Give me the value along with a pretty picture
```
```
##
```
```
## If X ~ N(0, 1), then
```
```
## P(X <= -0.5244005) = 0.3
```
```
## P(X > -0.5244005) = 0.7
```
```
##
```
```
## [1] -0.5244005
```
```
qnorm(.30) # No pretty picture, just the value
```
```
## [1] -0.5244005
```
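Because the q\-function inverts the p\-function, composing the two should return the value we started with. A quick sketch:
```
pnorm( qnorm(0.30) ) # recovers 0.30
qnorm( pnorm(-1) ) # recovers -1
```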
### 3\.2\.4 r\-function
Finally, I often want to be able to generate random data from a particular distribution. R does this with the r\-function. The first argument to this function is the number of random observations to draw, and any remaining arguments are the parameters of the distribution.
```
rnorm(5, mean=20, sd=2)
```
```
## [1] 20.53183 18.78752 15.21955 19.25521 17.99265
```
```
rbinom(4, size=10, prob=.8)
```
```
## [1] 8 9 8 6
```
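Random draws change on every call. If you need reproducible results (say, for a homework write\-up), set the random number generator’s seed first. A sketch, where the seed value 42 is arbitrary:
```
set.seed(42) # fix the random number generator state
rnorm(5, mean=20, sd=2) # these five values now repeat every time both lines are rerun
```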
3\.3 Exercises
--------------
1. We will examine how to use the probability mass functions (a.k.a. d\-functions) and cumulative probability function (a.k.a. p\-function) for the Poisson distribution.
1. Create a graph of the distribution of a Poisson random variable with rate parameter \\(\\lambda\=2\\) using the mosaic function `plotDist()`.
2. Calculate the probability that a Poisson random variable (with rate parameter \\(\\lambda\=2\\) ) is exactly equal to 3 using the `dpois()` function. Be sure that this value matches the graphed distribution in part (a).
3. For a Poisson random variable with rate parameter \\(\\lambda\=2\\), calculate the probability it is less than or equal to 3, by summing the four values returned by the Poisson `d`\-function.
4. Perform the same calculation as the previous question but using the cumulative probability function `ppois()`.
2. We will examine how to use the cumulative probability functions (a.k.a. p\-functions) for the normal and exponential distributions.
1. Use the mosaic function `plotDist()` to produce a graph of the standard normal distribution (that is, a normal distribution with mean \\(\\mu\=0\\) and standard deviation \\(\\sigma\=1\\)).
2. For a standard normal, use the `pnorm()` function or its `mosaic` augmented version `xpnorm()` to calculate
1. \\(P\\left(Z\<\-1\\right)\\)
2. \\(P\\left(Z\\ge1\.5\\right)\\)
3. Use the mosaic function `plotDist()` to produce a graph of an exponential distribution with rate parameter 2\.
4. Suppose that \\(Y\\sim Exp\\left(2\\right)\\), as above, use the `pexp()` function to calculate \\(P\\left(Y \\le 1 \\right)\\). (Unfortunately there isn’t a mosaic augmented `xpexp()` function.)
3. We next examine how to use the quantile functions for the normal and exponential distributions using R’s q\-functions.
1. Find the value of a standard normal distribution (\\(\\mu\=0\\), \\(\\sigma\=1\\)) such that 5% of the distribution is to the left of the value using the `qnorm()` function or the mosaic augmented version `xqnorm()`.
2. Find the value of an exponential distribution with rate 2 such that 60% of the distribution is less than it using the `qexp()` function.
4. Finally we will look at generating random deviates from a distribution.
1. Generate a single value from a uniform distribution with minimum 0, and maximum 1 using the `runif()` function. Repeat this step several times and confirm you are getting different values each time.
2. Generate a sample of size 20 from the same uniform distribution and save it as the vector `x` using the following:
```
x <- runif(20, min=0, max=1)
```
Then produce a histogram of the sample using the function `hist()`
```
hist(x)
```
3. Generate a sample of 2000 from a normal distribution with `mean=10` and standard deviation `sd=2` using the `rnorm()` function. Create a histogram of the resulting sample.
Chapter 4 Data Types
====================
There are some basic data types that are commonly used.
1. Integers \- These are the integer numbers \\(\\left(\\dots,\-2,\-1,0,1,2,\\dots\\right)\\). To convert a numeric value to an integer you may use the function `as.integer()`.
2. Numeric \- These could be any number (whole number or decimal). To convert another type to numeric you may use the function `as.numeric()`.
3. Strings \- These are a collection of characters (example: Storing a student’s last name). To convert another type to a string, use `as.character()`.
4. Factors \- These are strings that can only take values from a finite set. For example, we might wish to store a variable that records the home department of a student. Since the department can only come from a finite set of possibilities, I would use a factor. Factors are categorical variables, but R calls them factors instead of categorical variables. A vector of values of another type can always be converted to a factor using the `as.factor()` command. For converting numeric values to factors, I will often use the function `cut()`.
5. Logicals \- This is a special case of a factor that can only take on the values `TRUE` and `FALSE`. (Be careful to always capitalize `TRUE` and `FALSE`. Because R is case\-sensitive, TRUE is not the same as true.) Using the function `as.logical()` you can convert numeric values to `TRUE` and `FALSE` where `0` is `FALSE` and anything else is `TRUE`.
Depending on the command, R will coerce your data if necessary, but it is a good habit to do the coercion yourself. If a variable is a number, R will automatically assume that it is a continuous numerical variable. If it is a character string, then R will assume it is a factor when doing any statistical analysis.
To find the type of an object, the `str()` command gives the type, and if the type is complicated, it describes the structure of the object.
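A quick sketch of the conversion functions mentioned in the list above:
```
as.integer(3.7) # 3: truncates toward zero rather than rounding
as.numeric("5.2") # 5.2: the string is parsed as a number
as.character(5.2) # "5.2"
as.logical(c(0, 1, 2)) # FALSE TRUE TRUE: zero is FALSE, anything else is TRUE
```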
4\.1 Integers and Numerics
--------------------------
Integers and numerics are exactly what they sound like. Integers can take on whole number values, while numerics can take on any decimal value. The reason that there are two separate data types is that integers require less memory to store than numerics. For most users, the distinction can be ignored.
```
x <- c(1,2,1,2,1)
# show that x is of type 'numeric'
str(x) # the str() command show the STRucture of the object
```
```
## num [1:5] 1 2 1 2 1
```
4\.2 Character Strings
----------------------
In R, we can think of collections of letters and numbers as a single entity called a string. Other programming languages think of strings as vectors of letters, but R does not, so you can’t just pull off the first character using vector tricks. In practice, there is no limit to how long a string can be.
```
x <- "Goodnight Moon"
# Notice x is of type character (chr)
str(x)
```
```
## chr "Goodnight Moon"
```
```
# R doesn't care if I use single quotes or double quotes, but don't mix them...
y <- 'Hop on Pop!'
# we can make a vector of character strings
Books <- c(x, y, 'Where the Wild Things Are')
Books
```
```
## [1] "Goodnight Moon" "Hop on Pop!"
## [3] "Where the Wild Things Are"
```
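Although we can’t index into a string as if it were a vector of letters, R has dedicated functions for this kind of work, such as `nchar()` and `substr()`. A minimal sketch:
```
nchar(x) # 14: the number of characters in "Goodnight Moon"
substr(x, start=1, stop=9) # "Goodnight": characters 1 through 9
```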
Character strings can also contain numbers and if the character string is in the correct format for a number, we can convert it to a number.
```
x <- '5.2'
str(x) # x really is a character string
```
```
## chr "5.2"
```
```
x
```
```
## [1] "5.2"
```
```
as.numeric(x)
```
```
## [1] 5.2
```
If we try an operation that only makes sense on numeric types (like addition), then R will complain unless we first convert it. There are places where R will try to coerce an object to another data type, but it happens inconsistently and you should just do the conversion yourself.
```
x+1
```
```
## Error in x + 1: non-numeric argument to binary operator
```
```
as.numeric(x) + 1
```
```
## [1] 6.2
```
4\.3 Factors
------------
Factors are how R keeps track of categorical variables. R does this in a two\-step pattern. First, it figures out how many categories there are and remembers which category an observation belongs to. Second, it keeps a vector of character strings that correspond to the names of each of the categories.
```
# A character vector
y <- c('B','B','A','A','C')
y
```
```
## [1] "B" "B" "A" "A" "C"
```
```
# convert the vector of characters into a vector of factors
z <- factor(y)
str(z)
```
```
## Factor w/ 3 levels "A","B","C": 2 2 1 1 3
```
Notice that the vector `z` is actually the combination of group assignment vector `2,2,1,1,3` and the group names vector `“A”,”B”,”C”`. So we could convert z to a vector of numerics or to a vector of character strings.
```
as.numeric(z)
```
```
## [1] 2 2 1 1 3
```
```
as.character(z)
```
```
## [1] "B" "B" "A" "A" "C"
```
Often we need to know what possible groups there are, and this is done using the `levels()` command.
```
levels(z)
```
```
## [1] "A" "B" "C"
```
Notice that the order of the group names was set alphabetically, which we did not choose. This ordering of the levels has implications when we do an analysis or make a plot, and R will always display information about the factor levels using this order. It would be nice to be able to change the order. Also, it would be really nice to give more descriptive names to the groups rather than just the group codes in my raw data. I find it is usually easiest to just convert the vector to a character vector, and then convert it back using the `levels=` argument to define the order of the groups and the `labels=` argument to define the modified names.
```
z <- factor(z, # vector of data levels to convert
levels=c('B','A','C'), # Order of the levels
labels=c("B Group", "A Group", "C Group")) # Pretty labels to use
z
```
```
## [1] B Group B Group A Group A Group C Group
## Levels: B Group A Group C Group
```
For the Iris data, the species are ordered alphabetically. We might want to re\-order how they appear in graphs to place Versicolor first. The `Species` names are not capitalized, and perhaps I would like them to begin with a capital letter.
```
iris$Species <- factor( iris$Species,
levels = c('versicolor','setosa','virginica'),
labels = c('Versicolor','Setosa','Virginica'))
boxplot( Sepal.Length ~ Species, data=iris)
```
Often we wish to take a continuous numerical vector and transform it into a factor. The function `cut()` takes a vector of numerical data and creates a factor based on the cut\-points you give it.
```
# Define a continuous vector to convert to a factor
x <- 1:10
# divide range of x into three groups of equal length
cut(x, breaks=3)
```
```
## [1] (0.991,4] (0.991,4] (0.991,4] (0.991,4] (4,7] (4,7] (4,7]
## [8] (7,10] (7,10] (7,10]
## Levels: (0.991,4] (4,7] (7,10]
```
```
# divide x into four groups, where I specify all 5 break points
# Notice that the outside breakpoints must include all the data points.
# That is, the smallest break must be smaller than all the data, and the largest
# must be larger (or equal) to all the data.
cut(x, breaks = c(0, 2.5, 5.0, 7.5, 10))
```
```
## [1] (0,2.5] (0,2.5] (2.5,5] (2.5,5] (2.5,5] (5,7.5] (5,7.5]
## [8] (7.5,10] (7.5,10] (7.5,10]
## Levels: (0,2.5] (2.5,5] (5,7.5] (7.5,10]
```
```
# divide x into 3 groups, but give them a nicer
# set of group names
cut(x, breaks=3, labels=c('Low','Medium','High'))
```
```
## [1] Low Low Low Low Medium Medium Medium High High High
## Levels: Low Medium High
```
4\.4 Logicals
-------------
Often I wish to know which elements of a vector are equal to some value, or are greater than something. R allows us to make those tests at the vector level.
Very often we need to make a comparison and test if something is equal to something else, or if one thing is bigger than another. To test these, we will use the `<`, `<=`, `==`, `>=`, `>`, and `!=` operators. These can be used similarly to
```
6 < 10 # 6 less than 10?
```
```
## [1] TRUE
```
```
6 == 10 # 6 equal to 10?
```
```
## [1] FALSE
```
```
6 != 10 # 6 not equal to 10?
```
```
## [1] TRUE
```
where we used 6 and 10 just for clarity. The result of each of these is a logical value (a `TRUE` or `FALSE`). In most cases these would be variables you had previously created and were using.
Suppose I have a vector of numbers and I want to get all the values greater than 16\. Using the `>` comparison, I can create a vector of logical values that tells me if the specified value is greater than 16\. The `which()` takes a vector of logicals and returns the indices that are true.
```
x <- -10:10 # a vector of 21 values (the 11th element is the 0)
x
```
```
## [1] -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6
## [18] 7 8 9 10
```
```
x > 0 # a vector of 21 logicals
```
```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [12] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
```
```
which( x > 0 ) # which vector elements are > 0
```
```
## [1] 12 13 14 15 16 17 18 19 20 21
```
```
x[ which(x>0) ] # Grab the elements > 0
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
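A common shortcut is to index with the logical vector directly, skipping the `which()` step entirely:
```
x[ x > 0 ] # same result: elements at TRUE positions are kept, FALSE dropped
```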
One function I find to be occasionally useful is the `is.element(el, set)` function, which allows me to figure out which elements of a vector belong to a given set of possibilities. For example, I might want to know which elements of the `letters` vector are vowels.
```
letters # this is all 26 english lowercase letters
```
```
## [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j" "k" "l" "m" "n" "o" "p" "q"
## [18] "r" "s" "t" "u" "v" "w" "x" "y" "z"
```
```
vowels <- c('a','e','i','o','u')
which( is.element(letters, vowels) )
```
```
## [1] 1 5 9 15 21
```
This shows me the vowels occur at the 1st, 5th, 9th, 15th, and 21st elements of the alphabet.
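The infix operator `%in%` does the same job as `is.element()` and often reads more naturally; a quick sketch:
```
which( letters %in% vowels ) # 1 5 9 15 21, identical to is.element(letters, vowels)
```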
Often I want to make multiple comparisons. For example, given a bunch of students, a vector of their GPAs, and another vector of their majors, maybe I want to find all undergraduate Forestry majors with a GPA greater than 3\.0\. Then, given my set of university students, I want to ask two questions: is their major Forestry, and is their GPA greater than 3\.0? So I need to combine those two logical results into a single logical that is true if both questions are true.
The operator `&` means “and” and `|` means “or”. We can combine two logical values using these operators as follows:
```
TRUE & TRUE # both are true so the result is true
```
```
## [1] TRUE
```
```
TRUE & FALSE # one true and one false so result is false
```
```
## [1] FALSE
```
```
FALSE & FALSE # both are false so the result is false
```
```
## [1] FALSE
```
```
TRUE | TRUE # at least one is true -> TRUE
```
```
## [1] TRUE
```
```
TRUE | FALSE # at least one is true -> TRUE
```
```
## [1] TRUE
```
```
FALSE | FALSE # neither is true -> FALSE
```
```
## [1] FALSE
```
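These comparisons also work element\-wise on whole vectors, which is exactly what the GPA example needs. A sketch using made\-up data (the values here are illustrative only):
```
gpa <- c(3.2, 2.6, 3.8) # hypothetical GPAs
major <- c('Forestry', 'Forestry', 'Math') # hypothetical majors
(gpa > 3.0) & (major == 'Forestry') # TRUE FALSE FALSE
```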
4\.5 Exercises
--------------
1. Create a vector of character strings with six elements
```
test <- c('red','red','blue','yellow','blue','green')
```
and then
1. Transform the `test` vector you just created into a factor.
2. Use the `levels()` command to determine the levels (and order) of the factor you just created.
3. Transform the factor you just created into integers. Comment on the relationship between the integers and the order of the levels you found in part (b).
4. Use some sort of comparison to create a vector that identifies which factor elements are the red group.
2. Given the vector of ages,
```
ages <- c(17, 18, 16, 20, 22, 23)
```
create a factor that has levels `Minor` or `Adult` where any observation greater than or equal to 18 qualifies as an adult. Also, make sure that the order of the levels is `Minor` first and `Adult` second.
3. Suppose we have vectors that give students’ names, their GPAs, and their majors. We want to come up with a list of forestry students with a GPA greater than 3\.0\.
```
Name <- c('Adam','Benjamin','Caleb','Daniel','Ephriam', 'Frank','Gideon')
GPA <- c(3.2, 3.8, 2.6, 2.3, 3.4, 3.7, 4.0)
Major <- c('Math','Forestry','Biology','Forestry','Forestry','Math','Forestry')
```
1. Create a vector of TRUE/FALSE values that indicate whether each student’s GPA is greater than 3\.0\.
2. Create a vector of TRUE/FALSE values that indicate whether the students’ major is forestry.
3. Create a vector of TRUE/FALSE values that indicates if a student has a GPA greater than 3\.0 and is a forestry major.
4. Convert the vector of TRUE/FALSE values in part (c) to integer values using the `as.numeric()` function. Which numeric value corresponds to TRUE?
5. Sum (using the `sum()` function) the vector you created to count the number of students with a GPA \> 3\.0 who are forestry majors.
4. Make two variables, and call them `a` and `b` where `a=2` and `b=10`. I want to think of these as defining an interval.
1. Define the vector `x <- c(-1, 5, 12)`
2. Using the `&`, come up with a comparison that will test if the value of `x` is in the interval \\(\[a,b]\\). (We want the test to return `TRUE` if \\(a\\le x\\le b\\)). That is, test if `a` is less than `x` and if `x` is less than `b`. Confirm that for x defined above you get the correct vector of logical values.
3. Similarly make a comparison that tests if `x` is outside the interval \\(\[a,b]\\) using the `|` operator. That is, test if `x < a` or `x > b`. I want the test to return TRUE if x is less than a or if x is greater than b. Confirm that for x defined above you get the correct vector of logical values.
Chapter 5 Matrices, Data Frames, and Lists
==========================================
5\.1 Matrices
-------------
We often want to store numerical data in a square or rectangular format, and mathematicians call these “matrices”. These have two dimensions, rows and columns. To create a matrix in R we can build it directly using the `matrix()` command, which requires the data to fill the matrix with and, optionally, some information about the number of rows and columns:
```
W <- matrix( c(1,2,3,4,5,6), nrow=2, ncol=3 )
W
```
```
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
Notice that because we only gave it six values, the information about the number of columns is redundant and could be left off, and R would figure out how many columns are needed. Next notice that the order R chose to fill in the matrix was to fill in the first column, then the second, and then the third. If we wanted to fill the matrix in order of the rows first, we’d use the optional `byrow=TRUE` argument.
```
W <- matrix( c(1,2,3,4,5,6), nrow=2, byrow=TRUE )
W
```
```
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 4 5 6
```
The alternative to the `matrix()` command is to create the two columns as individual vectors and just push them together. Or we could make three rows and lump them together by rows instead. To do this we’ll use a group of functions that bind vectors together. To join vectors together as columns, we’ll use `cbind()`, and to bind them together as rows, we’ll use `rbind()`.
```
a <- c(1,2,3)
b <- c(4,5,6)
cbind(a,b) # Column Bind: a,b are columns in resultant matrix
```
```
## a b
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
```
rbind(a,b) # Row Bind: a,b are rows in resultant matrix
```
```
## [,1] [,2] [,3]
## a 1 2 3
## b 4 5 6
```
Notice that doing this has provided R with some names for the individual rows and columns. I can change these using the commands `colnames()` and `rownames()`.
```
M <- matrix(1:6, nrow=3, ncol=2, byrow=TRUE)
colnames(M) <- c('Column1', 'Column2') # set column labels
rownames(M) <- c('Row1', 'Row2','Row3') # set row labels
M
```
```
## Column1 Column2
## Row1 1 2
## Row2 3 4
## Row3 5 6
```
This is actually a pretty peculiar way of setting the *attributes* of the object `M`, because it looks like we are evaluating a function and assigning some value to the function output. Yes, it is weird, but R was developed in the 1970s and it seemed like a good idea at the time.
Accessing a particular element of a matrix is done in a similar manner as with vectors, using the `[ ]` notation, but this time we must specify which row and which column. Notice that this scheme always is `[row, col]`.
```
M1 <- matrix(1:6, nrow=3, ncol=2)
M1
```
```
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
```
M1[1,2] # Grab row 1, column 2 value
```
```
## [1] 4
```
```
M1[1, 1:2] # Grab row 1, and columns 1 and 2.
```
```
## [1] 1 4
```
I might want to grab a single row or a single column out of a matrix, which is sometimes referred to as taking a slice of the matrix. I could figure out how long that vector is, but often I’m too lazy. Instead I can just specify the particular row or column I want.
```
M1
```
```
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
```
M1[1, ] # grab the 1st row
```
```
## [1] 1 4
```
```
M1[ ,2] # grab second column (the spaces are optional...)
```
```
## [1] 4 5 6
```
5\.2 Data Frames
----------------
Matrices are great for mathematical operations, but I also want to be able to store data that is not numerical. For example I might want to store a categorical variable such as manufacturer brand. To generalize our concept of a matrix to include these types of data, we will create a structure called a `data.frame`. These are very much like a simple Excel spreadsheet where each column represents a different trait or measurement type and each row will represent an individual.
Perhaps the easiest way to create a data frame is to just type the columns of data
```
data <- data.frame(
Name = c('Bob','Jeff','Mary'),
Score = c(90, 75, 92)
)
# Show the data.frame
data
```
```
## Name Score
## 1 Bob 90
## 2 Jeff 75
## 3 Mary 92
```
Because a data frame feels like a matrix, R also allows matrix notation for accessing particular values.
| Format | Result |
| --- | --- |
| `[a,b]` | Element in row `a` and column `b` |
| `[a,]` | All of row `a` |
| `[,b]` | All of column `b` |
Because the columns have meaning and we have given them column names, it is desirable to access an element by the name of the column as opposed to the column number. In large Excel spreadsheets I often get annoyed trying to remember which column something was in and muttering “Was total biomass in column P or Q?” A system where I could just name the column Total.Biomass and be done with it is much nicer to work with and I make fewer dumb mistakes.
```
data$Name # The $-sign means to reference a column by its label
```
```
## [1] Bob Jeff Mary
## Levels: Bob Jeff Mary
```
```
data$Name[2] # Notice that data$Name results in a vector, which I can manipulate
```
```
## [1] Jeff
## Levels: Bob Jeff Mary
```
I can mix the `[ ]` notation with the column names. The following is also acceptable:
```
data[, 'Name'] # Grab the column labeled 'Name'
```
```
## [1] Bob Jeff Mary
## Levels: Bob Jeff Mary
```
The next thing we might wish to do is add a new column to a preexisting data frame. There are two ways to do this. First, we could use the `cbind()` function to bind two data frames together. Second we could reference a new column name and assign values to it.
```
Second.score <- data.frame(Score2=c(41,42,43)) # another data.frame
data <- cbind( data, Second.score ) # squish them together
data
```
```
## Name Score Score2
## 1 Bob 90 41
## 2 Jeff 75 42
## 3 Mary 92 43
```
```
# if you assign a value to a column that doesn't exist, R will create it
data$Score3 <- c(61,62,63) # the Score3 column will created
data
```
```
## Name Score Score2 Score3
## 1 Bob 90 41 61
## 2 Jeff 75 42 62
## 3 Mary 92 43 63
```
Data frames are used extensively, and many functions take a `data=` argument so that the other arguments can refer to columns of the given data frame by name. Unfortunately this is not universally supported by all functions and you must look at the help file for the function you are interested in.
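For example, the `lm()` function (which reappears in the exercises at the end of this chapter) takes a `data=` argument; a minimal sketch using the `data` frame built above:
```
# because of data=, the formula can refer to the columns Score and Score2 by name
fit <- lm( Score2 ~ Score, data=data )
coef(fit) # extract the fitted intercept and slope
```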
Data frames are also very restrictive in that the shape of the data must be rectangular. If I try to create a new column that doesn’t have enough rows, R will complain.
```
data$Score4 <- c(1,2)
```
```
## Error in `$<-.data.frame`(`*tmp*`, Score4, value = c(1, 2)): replacement has 2 rows, data has 3
```
5\.3 Lists
----------
Data frames are quite useful for storing data but sometimes we’ll need to store a bunch of different pieces of information and it won’t fit neatly as a data frame. The most general form of a data structure is called a list. This can be thought of as a vector of objects where there is no requirement for each element to be the same type of object.
Consider that I might need to store information about a person. For example, suppose that I want to make an object that holds information about my immediate family. This object should have my spouse’s name (just one name) as well as my siblings. But because I have many siblings, I want the siblings to be a vector of names. Likewise I might also include my pets, but we don’t want any requirement that the number of pets is the same as the number of siblings (or spouses!).
```
wife <- 'Aubrey'
sibs <- c('Tina','Caroline','Brandon','John')
pets <- c('Beau','Tess','Kaylee')
Derek <- list(Spouse=wife, Siblings=sibs, Pets=pets) # Create the list
str(Derek) # show the structure of object
```
```
## List of 3
## $ Spouse : chr "Aubrey"
## $ Siblings: chr [1:4] "Tina" "Caroline" "Brandon" "John"
## $ Pets : chr [1:3] "Beau" "Tess" "Kaylee"
```
Notice that the object `Derek` is a list of three elements. The first is the single string containing my wife’s name. The next is a vector of my siblings’ names and it is a vector of length four. Finally the vector of pets’ names is only of length three.
To access any element of this list we can use an indexing scheme similar to matrices and vectors. The only difference is that we’ll use two square brackets instead of one.
```
Derek[[ 1 ]] # First element of the list is Spouse!
```
```
## [1] "Aubrey"
```
```
Derek[[ 3 ]] # Third element of the list is the vector of pets
```
```
## [1] "Beau" "Tess" "Kaylee"
```
There is a second way I can access elements. For data frames it was convenient to use the notation `DataFrame$ColumnName` and we will use the same convention for lists. In fact, a data frame is just a list with the requirement that each list element is a vector and all vectors are of the same length. To access my pets’ names we can use the following notation:
```
Derek$Pets # Using the '$' notation
```
```
## [1] "Beau" "Tess" "Kaylee"
```
```
Derek[[ 'Pets' ]] # Using the '[[ ]]' notation
```
```
## [1] "Beau" "Tess" "Kaylee"
```
To add something new to the list object, we can just make an assignment in a similar fashion as we did for `data.frame` and just assign a value to a slot that doesn’t (yet!) exist.
```
Derek$Spawn <- c('Elise', 'Casey')
```
We can also add extremely complicated items to my list. Here we’ll add a `data.frame` as another list element.
```
# Recall that we previous had defined a data.frame called "data"
Derek$RandomDataFrame <- data # Assign it to be a list element
str(Derek)
```
```
## List of 5
## $ Spouse : chr "Aubrey"
## $ Siblings : chr [1:4] "Tina" "Caroline" "Brandon" "John"
## $ Pets : chr [1:3] "Beau" "Tess" "Kaylee"
## $ Spawn : chr [1:2] "Elise" "Casey"
## $ RandomDataFrame:'data.frame': 3 obs. of 4 variables:
## ..$ Name : Factor w/ 3 levels "Bob","Jeff","Mary": 1 2 3
## ..$ Score : num [1:3] 90 75 92
## ..$ Score2: num [1:3] 41 42 43
## ..$ Score3: num [1:3] 61 62 63
```
Now we see that the list `Derek` has five elements and some of those elements are pretty complicated. In fact, I could happily have lists of lists and have a very complicated nesting structure.
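As a small sketch of that nesting (all names here are made up for illustration), a list element can itself be a list, and the `$` accesses simply chain:
```
family <- list( Parents = list(Mom='Carol', Dad='Mike'), # a list inside a list
                Kids = c('Greg','Marcia') )
family$Parents$Dad # drill down one level at a time
```
```
## [1] "Mike"
```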
The place where most users will run into lists is the output of statistical procedures, which is often returned as a list object. When a user asks R to perform a regression, the output returned is a list object, and we’ll need to grab particular information from that object afterwards. For example, the output from a t\-test in R is a list:
```
x <- c(5.1, 4.9, 5.6, 4.2, 4.8, 4.5, 5.3, 5.2) # some toy data
results <- t.test(x, alternative='less', mu=5) # do a t-test
str(results) # examine the resulting object
```
```
## List of 9
## $ statistic : Named num -0.314
## ..- attr(*, "names")= chr "t"
## $ parameter : Named num 7
## ..- attr(*, "names")= chr "df"
## $ p.value : num 0.381
## $ conf.int : atomic [1:2] -Inf 5.25
## ..- attr(*, "conf.level")= num 0.95
## $ estimate : Named num 4.95
## ..- attr(*, "names")= chr "mean of x"
## $ null.value : Named num 5
## ..- attr(*, "names")= chr "mean"
## $ alternative: chr "less"
## $ method : chr "One Sample t-test"
## $ data.name : chr "x"
## - attr(*, "class")= chr "htest"
```
We see that the result is actually a list with 9 elements in it. To access the p\-value we could use:
```
results$p.value
```
```
## [1] 0.3813385
```
If I ask R to print the object `results`, it will hide the structure from you and print it in a “pretty” fashion because there is a `print` function defined specifically for objects created by the `t.test()` function.
```
results
```
```
##
## One Sample t-test
##
## data: x
## t = -0.31399, df = 7, p-value = 0.3813
## alternative hypothesis: true mean is less than 5
## 95 percent confidence interval:
## -Inf 5.251691
## sample estimates:
## mean of x
## 4.95
```
5\.4 Exercises
--------------
1. In this problem, we will work with the matrix
\\\[ \\left\[\\begin{array}{ccccc}
2 \& 4 \& 6 \& 8 \& 10\\\\
12 \& 14 \& 16 \& 18 \& 20\\\\
22 \& 24 \& 26 \& 28 \& 30
\\end{array}\\right]\\]
1. Create the matrix in two ways and save the resulting matrix as `M`.
1. Create the matrix using some combination of the `seq()` and `matrix()` commands.
2. Create the same matrix by some combination of multiple `seq()` commands and either the `rbind()` or `cbind()` command.
2. Extract the second row out of `M`.
3. Extract the element in the third row and second column of `M`.
2. Create and manipulate a data frame.
1. Create a `data.frame` named `my.trees` that has the following columns:
* Girth \= c(8\.3, 8\.6, 8\.8, 10\.5, 10\.7, 10\.8, 11\.0\)
* Height\= c(70, 65, 63, 72, 81, 83, 66\)
* Volume\= c(10\.3, 10\.3, 10\.2, 16\.4, 18\.8, 19\.7, 15\.6\)
2. Extract the third observation (i.e. the third row)
3. Extract the Girth column referring to it by name (don’t use whatever order you placed the columns in).
4. Print out a data frame of all the observations *except* for the fourth observation. (i.e. Remove the fourth observation/row.)
5. Use the `which()` command to create a vector of row indices that have a `girth` greater than 10\. Call that vector `index`.
6. Use the `index` vector to create a small data set with just the large girth trees.
7. Use the `index` vector to create a small data set with just the small girth trees.
3. Create and manipulate a list.
1. Create a list named `my.test` with elements
* x \= c(4,5,6,7,8,9,10\)
* y \= c(34,35,41,40,45,47,51\)
* slope \= 2\.82
* p.value \= 0\.000131
2. Extract the second element in the list.
3. Extract the element named `p.value` from the list.
4. The function `lm()` creates a linear model, which is a general class of model that includes both regression and ANOVA. We will call this on a data frame and examine the results. For this problem, there isn’t much to figure out, but rather the goal is to recognize the data structures being used in common analysis functions.
1. There are many data sets that are included with R and its packages. One of these is the `trees` data, which is a data set of \\(n\=31\\) cherry trees. Load this dataset into your current workspace using the command:
```
data(trees) # load trees data.frame
```
2. Examine the data frame using the `str()` command. Look at the help file for the data using the command `help(trees)` or `?trees`.
3. Perform a regression relating the volume of lumber produced to the girth and height of the tree using the following command
```
m <- lm( Volume ~ Girth + Height, data=trees)
```
4. Use the `str()` command to inspect `m`. Extract the model coefficients from this list.
5. The list `m` can be passed to other functions. For example, the function `summary()` will take the list and recognize that it was produced by the `lm()` function and produce a summary table in the manner that we are used to seeing. Produce that summary table using the command
```
summary(m)
```
```
##
## Call:
## lm(formula = Volume ~ Girth + Height, data = trees)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.4065 -2.6493 -0.2876 2.2003 8.4847
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -57.9877 8.6382 -6.713 2.75e-07 ***
## Girth 4.7082 0.2643 17.816 < 2e-16 ***
## Height 0.3393 0.1302 2.607 0.0145 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.882 on 28 degrees of freedom
## Multiple R-squared: 0.948, Adjusted R-squared: 0.9442
## F-statistic: 255 on 2 and 28 DF, p-value: < 2.2e-16
```
Chapter 6 Importing Data
========================
Reading data from external sources is a necessity. It is most common for data to be stored in a data\-frame\-like format, such as a MS Excel workbook, so we will concentrate on reading data into a `data.frame`.
In the typical way data is organized, we think of each column of data as representing some trait or variable that we might be interested in. In general, we might wish to investigate the relationship between variables. In contrast, the rows of our data represent a single object on which the column traits are measured. For example, in a grade book for recording students’ scores throughout the semester, there is one row for every student and columns for each assignment. A greenhouse experiment dataset will have a row for every plant and columns for treatment type and biomass.
6\.1 Working directory
----------------------
One concept that will be important is to recognize that every time you start up RStudio, it picks an appropriate working directory. This is the directory where it will first look for script files or data files. By default when you double click on an R script or Rmarkdown file to launch RStudio, it will set the working directory to be the directory that the file was in. Similarly, when you knit an Rmarkdown file, the working directory will be set to the directory where the Rmarkdown file is. For both of these reasons, I always program my scripts assuming that paths to any data files will be relative to where my Rmarkdown file is. To set the working directory explicitly, you can use the GUI tools `Session -> Set Working Directory...`.
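The same thing can be done from the console; a quick sketch (the path shown is hypothetical):
```
getwd() # print the current working directory
# setwd('~/STA570L/Lab6') # set it explicitly; replace with your own path
```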
The functions that we will use in this lab all accept a character string that denotes the location of the file. This location could be a web address, it could be an absolute path on your computer, or it could be a path relative to the location of your Rmarkdown file.
| `'MyFile.csv'` | Look in the working directory for `MyFile.csv`. |
| --- | --- |
| `'MyFolder/Myfile.csv'` | In the working directory, there is a subdirectory called `MyFolder` and inside that folder there is a filed called `MyFile.csv`. |
6\.2 Comma Separated Data
-------------------------
To consider how data might be stored, we first consider the simplest file format… the comma separated values file. In this file type, each of the “cells” of data are separated by a comma. For example, the data file storing scores for three students might be as follows:
```
Able, Dave, 98, 92, 94
Bowles, Jason, 85, 89, 91
Carr, Jasmine, 81, 96, 97
```
Typically when you open up such a file on a computer with Microsoft Excel installed, Excel will open up the file assuming it is a spreadsheet and put each element in its own cell. However, if you open the file using a more primitive program (say Notepad on Windows or TextEdit on a Mac) you’ll see the raw form of the data.
Having just the raw data without any sort of column header is problematic (which of the three exams was the final??). Ideally we would have column headers that store the name of the column.
```
LastName, FirstName, Exam1, Exam2, FinalExam
Able, Dave, 98, 92, 94
Bowles, Jason, 85, 89, 91
Carr, Jasmine, 81, 96, 97
```
To see another example, open the “Body Fat” dataset from the Lock\\(^{5}\\) introductory text book at the website \[<http://www.lock5stat.com/datasets/BodyFat.csv>]. The first few rows of the file are as follows:
```
Bodyfat,Age,Weight,Height,Neck,Chest,Abdomen,Ankle,Biceps,Wrist
32.3,41,247.25,73.5,42.1,117,115.6,26.3,37.3,19.7
22.5,31,177.25,71.5,36.2,101.1,92.4,24.6,30.1,18.2
22,42,156.25,69,35.5,97.8,86,24,31.2,17.4
12.3,23,154.25,67.75,36.2,93.1,85.2,21.9,32,17.1
20.5,46,177,70,37.2,99.7,95.6,22.5,29.1,17.7
```
To make R read in the data arranged in this format, we need to tell R three things:
1. Where does the data live? Often this will be the name of a file on your computer, but the file could just as easily live on the internet (provided your computer has internet access).
2. Is the first row data or is it the column names?
3. What character separates the data? Some programs store data using tabs to distinguish between elements, some others use white space. R’s mechanism for reading in data is flexible enough to allow you to specify what the separator is.
The primary function that we’ll use to read data from a file and into R is the function `read.table()`. This function has many optional arguments but the most commonly used ones are outlined in the table below.
| Argument | Default | What it does |
| --- | --- | --- |
| `file` | | A character string denoting the file location |
| `header` | FALSE | Is the first line column headers? |
| `sep` | " " | What character separates columns. " " \=\= any whitespace |
| `skip` | `0` | The number of lines to skip before reading data. This is useful when there are lines of text that describe the data or aren’t actual data |
| `na.strings` | ‘NA’ | What values represent missing data. Can have multiple. E.g. `c('NA', -9999)` |
| `quote` | " and ’ | For character strings, what characters represent quotes. |
To read in the “Body Fat” dataset we could run the R command:
```
BodyFat <- read.table(
file = 'http://www.lock5stat.com/datasets/BodyFat.csv', # where the data lives
header = TRUE, # first line is column names
  sep = ',' ) # Data is separated by commas
str(BodyFat)
```
```
## 'data.frame': 100 obs. of 10 variables:
## $ Bodyfat: num 32.3 22.5 22 12.3 20.5 22.6 28.7 21.3 29.9 21.3 ...
## $ Age : int 41 31 42 23 46 54 43 42 37 41 ...
## $ Weight : num 247 177 156 154 177 ...
## $ Height : num 73.5 71.5 69 67.8 70 ...
## $ Neck : num 42.1 36.2 35.5 36.2 37.2 39.9 37.9 35.3 42.1 39.8 ...
## $ Chest : num 117 101.1 97.8 93.1 99.7 ...
## $ Abdomen: num 115.6 92.4 86 85.2 95.6 ...
## $ Ankle : num 26.3 24.6 24 21.9 22.5 22 23.7 21.9 24.8 25.2 ...
## $ Biceps : num 37.3 30.1 31.2 32 29.1 35.9 32.1 30.7 34.4 37.5 ...
## $ Wrist : num 19.7 18.2 17.4 17.1 17.7 18.9 18.7 17.4 18.4 18.7 ...
```
Looking at the help file for `read.table()` we see that there are variants such as `read.csv()` that set the default arguments for `header` and `sep` more intelligently. Also, there are many options to customize how R responds to different input.
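For instance, `read.csv()` defaults to `header=TRUE` and `sep=','`, so the Body Fat import above can be written as a one\-liner:
```
# equivalent to the read.table() call above, relying on read.csv() defaults
BodyFat <- read.csv('http://www.lock5stat.com/datasets/BodyFat.csv')
```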
6\.3 MS Excel
-------------
Commonly our data is stored as a MS Excel file. There are two approaches you could use to import the data into R.
1. From within Excel, export the worksheet that contains your data as a comma separated values (.csv) file and proceed using the tools in the previous section.
2. Use functions within R that automatically convert the worksheet into a .csv file and read it in. One package that works nicely for this is the `readxl` package.
I generally prefer using option 2 because all of my collaborators can’t live without Excel and I’ve resigned myself to this. However, if you have complicated formulas in your Excel file, it is oftentimes safer to export it as a .csv file to guarantee the data imported into R is correct. Furthermore, other spreadsheet applications (such as Google Sheets) require you to export the data as a .csv file, so it is good to know both paths.
Because R can only import a complete worksheet, the desired data worksheet must be free of notes to yourself about how the data was collected, preliminary graphics, or other stuff that isn’t the data. I find it very helpful to have a worksheet in which I describe the sampling procedure and describe what each column means (and give the units!), then a second worksheet where the actual data is, and finally a third worksheet where my “Excel Only” collaborators have created whatever plots and summary statistics they need.
The simplest package for importing Excel files seems to be the package `readxl`. Another package that does this is XLConnect, which does the Excel \-\> .csv conversion using Java. Another package that works well is the `xlsx` package, but it also requires Java to be installed. The nice thing about these two packages is that they also allow you to write Excel files. The RODBC package allows R to connect to various databases, and it is possible to make it treat an Excel file as an extremely crude database.
The `readxl` package provides a function `read_excel()` that allows us to specify which sheet within the Excel file to read and what character specifies missing data (it assumes a blank cell is missing data if you don’t specify anything). One annoying change between `read.table()` and `read_excel()` is that the argument specifying where the file lives is different (`path=` instead of `file=`). Another difference between the two is that `read_excel()` does not yet have the capability of handling a path that is a web address.
From GitHub, download the files `Example_1.xls`, `Example_2.xls`, `Example_3.xls` and `Example_4.xls` from the directory \[[https://github.com/dereksonderegger/570L/tree/master/data\-raw](https://github.com/dereksonderegger/570L/tree/master/data-raw)]. Place these files in the same directory that you store your course work or make a subdirectory data to store the files in. Make sure that the working directory that RStudio is using is that same directory (Session \-\> Set Working Directory).
```
# load the library that has the read_excel function.
library(readxl)
# Where does the data live relative to my current working location?
#
# In my directory where this Rmarkdown file lives, I have made a subdirectory
# named 'data-raw' to store all the data files. So the path to my data
# file will be 'data-raw/Example_1.xls'.
# If you stored the files in the same directory as your RMarkdown script, you
# don't have to add any additional information and you can just tell it the
# file name 'Example_1.xls'
# Alternatively I could give the full path to this file starting at the root
# directory which, for me, is '~/GitHub/STA570L_Book/data-raw/Example_1.xls'
# but for Windows users it might be 'Z:/570L/Lab7/Example_1.xls'. This looks
# odd because Windows usually uses a backslash to represent the directory
# structure, but a backslash has special meaning in R and so it wants
# to separate directories via forwardslashes.
# read the first worksheet of the Example_1 file
data.1 <- read_excel( 'data-raw/Example_1.xls' ) # relative to this Rmarkdown file
data.1 <- read_excel('~/GitHub/570L/data-raw/Example_1.xls') # absolute path
# read the second worksheet where the second worksheet is named 'data'
data.2 <- read_excel('data-raw/Example_2.xls', sheet=2 )
data.2 <- read_excel('data-raw/Example_2.xls', sheet='data')
```
There is one additional problem that shows up while reading in Excel files. Blank columns often show up in Excel files because at some point there was some text in a cell that got deleted but a space remains and Excel still thinks there is data in the column. To fix this, you could find the cell with the space in it, or you can select a bunch of columns at the edge and delete the entire columns. Alternatively, you could remove the column after it is read into R using tools we’ll learn when we get to the *Manipulating Data* chapter.
Open up the file `Example_4.xls` in Excel and confirm that the data sheet has named columns out to `carb`. Read in the data frame using the following code:
```
data.4 <- read_excel('./data-raw/Example_4.xls', sheet='data') # Extra Column Example
str(data.4)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 34 obs. of 14 variables:
## $ model: chr "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
## $ mpg : num 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
## $ cyl : num 6 6 4 6 8 6 8 4 4 6 ...
## $ disp : num 160 160 108 258 360 ...
## $ hp : num 110 110 93 110 175 105 245 62 95 123 ...
## $ drat : num 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...
## $ wt : num 2.62 2.88 2.32 3.21 3.44 ...
## $ qsec : num 16.5 17 18.6 19.4 17 ...
## $ vs : num 0 0 1 1 0 1 0 1 1 1 ...
## $ am : num 1 1 1 0 0 0 0 0 0 0 ...
## $ gear : num 4 4 4 3 3 3 3 4 4 4 ...
## $ carb : num 4 4 1 1 2 1 4 2 2 4 ...
## $ X__1 : logi NA NA NA NA NA NA ...
## $ X__2 : logi NA NA NA NA NA NA ...
```
We notice that after reading in the data, there are extra columns and rows that contain nothing but missing data (the `NA` stands for “not available,” which means that the data is missing). Go back to the Excel file and go to row 4, column N, and notice that the cell isn’t actually blank: there is a space. Delete the space, save the file, and then reload the data into R. You should notice that the extra columns are now gone.
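As a preview of the in\-R alternative mentioned above, here is a minimal sketch that drops the all\-missing columns after import instead of editing the Excel file (the `X__1` and `X__2` names are whatever `read_excel()` happened to generate):
```
# keep only the columns that contain at least one non-missing value
data.4 <- data.4[ , colSums( !is.na(data.4) ) > 0 ]
```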
6\.4 Exercises
--------------
1. Download from GitHub the data file `Example_5.xls`. Open it in Excel and figure out which sheet of data we should import into R. At the same time figure out how many initial rows need to be skipped. Import the data set into a data frame and show the structure of the imported data using the `str()` command. Make sure that your data has \\(n\=31\\) observations and the three columns are appropriately named.
Chapter 7 Data Manipulation
===========================
```
# library(tidyverse) # Could load several of Dr Wickham's commonly used packages all at once.
library(dplyr) # or just the one we'll use today.
```
Most of the time, our data is in the form of a data frame and we are interested in exploring relationships between variables. This chapter explores methods for manipulating data frames and summarizing the information they contain.
7\.1 Classical functions for summarizing rows and columns
---------------------------------------------------------
### 7\.1\.1 `summary()`
The first method is to calculate some basic summary statistics (minimum, 25th, 50th, 75th percentiles, maximum and mean) of each column. If a column is categorical, the summary function will return the number of observations in each category.
```
# use the iris data set which has both numerical and categorical variables
data( iris )
str(iris) # recall what columns we have
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
# display the summary for each column
summary( iris )
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100
## 1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300
## Median :5.800 Median :3.000 Median :4.350 Median :1.300
## Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
## 3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
## Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
## Species
## setosa :50
## versicolor:50
## virginica :50
##
##
##
```
### 7\.1\.2 `apply()`
The summary function is convenient, but we want the ability to pick another function to apply to each column and possibly to each row. To demonstrate this, suppose we have a data frame that contains students’ grades over the semester.
```
# make up some data
grades <- data.frame(
l.name = c('Cox', 'Dorian', 'Kelso', 'Turk'),
Exam1 = c(93, 89, 80, 70),
Exam2 = c(98, 70, 82, 85),
Final = c(96, 85, 81, 92) )
```
The `apply()` function will apply an arbitrary function to each row (or column) of a matrix or a data frame and then aggregate the results into a vector.
```
# Because I can't take the mean of the last names column,
# remove the name column
scores <- grades[,-1]
scores
```
```
## Exam1 Exam2 Final
## 1 93 98 96
## 2 89 70 85
## 3 80 82 81
## 4 70 85 92
```
```
# Summarize each column by calculating the mean.
apply( scores, # what object do I want to apply the function to
       MARGIN=2,  # rows = 1, columns = 2 (same order as [rows, cols])
FUN=mean # what function do we want to apply
)
```
```
## Exam1 Exam2 Final
## 83.00 83.75 88.50
```
To apply a function to the rows, we just change which margin we want. For example, we might want to calculate the average exam score for each person.
```
apply( scores, # what object do I want to apply the function to
       MARGIN=1,  # rows = 1, columns = 2 (same order as [rows, cols])
FUN=mean # what function do we want to apply
)
```
```
## [1] 95.66667 81.33333 81.00000 82.33333
```
This is useful, but it would be more useful to concatenate this as a new column in my grades data frame.
```
average <- apply(
scores, # what object do I want to apply the function to
MARGIN=1, # rows = 1, columns = 2 (same order as [rows, cols])
FUN=mean # what function do we want to apply
)
grades <- cbind( grades, average ) # squish together
grades
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
There are several variants of the `apply()` function, and the variant I use most often is `sapply()`, which applies a function to each element of a list or vector and returns a corresponding list or vector of results.
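For example, since a data frame is just a list of columns, a minimal sketch using the `scores` data frame from above:
```
# sapply() applies the function to each column and simplifies the
# results into a named vector, matching apply() with MARGIN=2
sapply( scores, mean )
```
```
## Exam1 Exam2 Final
## 83.00 83.75 88.50
```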
7\.2 Package `dplyr`
--------------------
Many of the tools to manipulate data frames in R were written without a consistent syntax and are difficult to use together. To remedy this, Hadley Wickham (the writer of `ggplot2`) introduced a package called plyr which was quite useful. As with many projects, his first version was good but not great, and he introduced an improved version that works exclusively with data frames called `dplyr`, which we will investigate. The package `dplyr` strives to provide a convenient and consistent set of functions to handle the most common data frame manipulations and a mechanism for chaining these operations together to perform complex tasks.
Dr Wickham has put together a very nice introduction to the package that explains in more detail how the various pieces work, and I encourage you to read it at some point: <http://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html>.
One aspect of the `data.frame` object is that R does some simplification for you, but it does not do it in a consistent manner. Somewhat obnoxiously, character strings are always converted to factors, and subsetting might return a `data.frame`, a `vector`, or a `scalar`. This is fine at the command line, but can be problematic when programming. Furthermore, many operations are pretty slow using `data.frame`. To get around this, Dr Wickham introduced a modified version of the `data.frame` called a `tibble`. A `tibble` is a `data.frame` but with a few extra bits. For now we can ignore the differences.
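As a minimal sketch (assuming the `tibble` package is installed, which it is whenever `dplyr` is), we can convert a `data.frame` into a `tibble`:
```
# convert the grades data.frame into a tibble
grades.tbl <- tibble::as_tibble( grades )
grades.tbl   # printing shows "A tibble: 4 x 5" along with the column types
```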
The pipe command `%>%` allows for very readable code. The idea is that the `%>%` operator works by translating the command `a %>% f(b)` to the expression `f(a,b)`. This operator works on any function and was introduced in the `magrittr` package. The beauty of this comes when you have a suite of functions that takes input arguments of the same type as their output.
For example, if we wanted to start with `x`, and first apply function `f()`, then `g()`, and then `h()`, the usual R command would be `h(g(f(x)))` which is hard to read because you have to start reading at the *innermost* set of parentheses. Using the pipe command `%>%`, this sequence of operations becomes `x %>% f() %>% g() %>% h()`.
| Written | Meaning |
| --- | --- |
| `a %>% f(b)` | `f(a,b)` |
| `b %>% f(a, .)` | `f(a, b)` |
| `x %>% f() %>% g()` | `g( f(x) )` |
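As a minimal sketch of the difference in reading order (any functions work; `%>%` is available once `dplyr` is loaded):
```
x <- c(1, 4, 9)
sum( sqrt(x) )          # nested form: read from the inside out
```
```
## [1] 6
```
```
x %>% sqrt() %>% sum()  # piped form: read from left to right
```
```
## [1] 6
```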
In `dplyr`, all the functions below take a *data set as their first argument* and *output an appropriately modified data set*. This allows me to chain commands together in a readable fashion. The pipe command works with any function, not just the `dplyr` functions, and I often find myself using it all over the place.
### 7\.2\.1 Verbs
The foundational operations to perform on a data set are:
* Subsetting \- Returns a data set with only particular columns or rows
– `select` \- Selecting a subset of columns by name or column number.
– `filter` \- Selecting a subset of rows from a data frame based on logical expressions.
– `slice` \- Selecting a subset of rows by row number.
* `arrange` \- Re\-ordering the rows of a data frame.
* `mutate` \- Add a new column that is some function of other columns.
* `summarise` \- Calculate some summary statistic of a column of data. This collapses a set of rows into a single row.
Each of these operations is a function in the package `dplyr`. These functions all have a similar calling syntax: the first argument is a data set, subsequent arguments describe what to do with the input data frame, and you can refer to the columns without using the `df$column` notation. All of these functions will return a data set.
#### 7\.2\.1\.1 Subsetting with `select`, `filter`, and `slice`
These functions allow you to select certain columns and rows of a data frame.
##### 7\.2\.1\.1\.1 `select()`
Often you only want to work with a small number of columns of a data frame. It is relatively easy to do this using the standard `[,col.name]` notation, but it is often pretty tedious.
```
# recall what the grades are
grades
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
I could select the Exam columns by hand, or by using an extension of the `:` operator.
```
# select( grades, Exam1, Exam2 ) # select from `grades` columns Exam1, Exam2
grades %>% select( Exam1, Exam2 ) # Exam1 and Exam2
```
```
## Exam1 Exam2
## 1 93 98
## 2 89 70
## 3 80 82
## 4 70 85
```
```
grades %>% select( Exam1:Final ) # Columns Exam1 through Final
```
```
## Exam1 Exam2 Final
## 1 93 98 96
## 2 89 70 85
## 3 80 82 81
## 4 70 85 92
```
```
grades %>% select( -Exam1 ) # Negative indexing by name works
```
```
## l.name Exam2 Final average
## 1 Cox 98 96 95.66667
## 2 Dorian 70 85 81.33333
## 3 Kelso 82 81 81.00000
## 4 Turk 85 92 82.33333
```
```
grades %>% select( 1:2 ) # Can select column by column position
```
```
## l.name Exam1
## 1 Cox 93
## 2 Dorian 89
## 3 Kelso 80
## 4 Turk 70
```
The `select()` command has a few other tricks. There are helper functions that describe the columns you wish to select by taking advantage of pattern matching. I generally can get by with `starts_with()`, `ends_with()`, and `contains()`, but there is a final helper, `matches()`, that takes a regular expression.
```
grades %>% select( starts_with('Exam') ) # Exam1 and Exam2
```
```
## Exam1 Exam2
## 1 93 98
## 2 89 70
## 3 80 82
## 4 70 85
```
The `dplyr::select` function is quite handy, but there are several other packages out there that have a `select` function, and we can get into trouble when loading other packages with the same function names. If I encounter the `select` function behaving in a weird manner or complaining about an input argument, my first remedy is to be explicit that I mean the `dplyr::select()` function by prepending the package name.
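For example, a minimal sketch (the `MASS` package is one common source of a competing `select()`):
```
# prefixing with the package name guarantees we get dplyr's version,
# regardless of which other packages are loaded
grades %>% dplyr::select( Exam1, Exam2 )
```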
##### 7\.2\.1\.1\.2 `filter()`
It is common to want to select particular rows based on some logical expression.
```
# select students with Final grades greater than 90
grades %>% filter(Final > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Turk 70 85 92 82.33333
```
You can give multiple logical expressions to select rows; when separated by commas they are combined so that only rows satisfying all of the conditions are selected. You may also join logicals explicitly using the `&` (and) operator or the `|` (or) operator. For example, a factor column `type` might be used to select rows where the type is either one or two via `type == 1 | type == 2`.
```
# select students with Final grades above 90 and
# average score also above 90
grades %>% filter(Final > 90, average > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
```
```
# we could also use an "and" condition
grades %>% filter(Final > 90 & average > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
```
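As a quick sketch of an explicit "or" condition on the grade book:
```
# select students who scored above 90 on either Exam1 or Exam2
grades %>% filter( Exam1 > 90 | Exam2 > 90 )
```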
##### 7\.2\.1\.1\.3 `slice()`
When you want to filter rows based on row number, this is called slicing.
```
# grab the first 2 rows
grades %>% slice(1:2)
```
```
## # A tibble: 2 x 5
## l.name Exam1 Exam2 Final average
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Cox 93. 98. 96. 95.7
## 2 Dorian 89. 70. 85. 81.3
```
#### 7\.2\.1\.2 `arrange()`
We often need to re\-order the rows of a data frame. For example, we might wish to take our grade book and sort the rows by the average score, or perhaps alphabetically. The `arrange()` function does exactly that. The first argument is the data frame to re\-order, and the subsequent arguments are the columns to sort on. The order of the sorting columns determines the precedence: the first sorting column is used first, and the second sorting column is only used to break ties.
```
grades %>% arrange(l.name)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
The default sorting is in ascending order, so to sort the grades with the highest scoring person in the first row, we must tell arrange to do it in descending order using `desc(column.name)`.
```
grades %>% arrange(desc(Final))
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Turk 70 85 92 82.33333
## 3 Dorian 89 70 85 81.33333
## 4 Kelso 80 82 81 81.00000
```
In a more complicated example, consider the following data, which we want to order first by Treatment Level and secondarily by the y\-value. I want the Treatment level in the default ascending order (Low, Medium, High), but the y variable in descending order.
```
# make some data
dd <- data.frame(
Trt = factor(c("High", "Med", "High", "Low"),
levels = c("Low", "Med", "High")),
y = c(8, 3, 9, 9),
z = c(1, 1, 1, 2))
dd
```
```
## Trt y z
## 1 High 8 1
## 2 Med 3 1
## 3 High 9 1
## 4 Low 9 2
```
```
# arrange the rows first by treatment, and then by y (y in descending order)
dd %>% arrange(Trt, desc(y))
```
```
## Trt y z
## 1 Low 9 2
## 2 Med 3 1
## 3 High 9 1
## 4 High 8 1
```
#### 7\.2\.1\.3 `mutate()`
I often need to create a new column that is some function of the old columns. In base R this was often cumbersome; consider the code to calculate the average grade in my grade book example.
```
grades$average <- (grades$Exam1 + grades$Exam2 + grades$Final) / 3
```
Instead, we could use the `mutate()` function and avoid all the `grades$` nonsense.
```
grades %>% mutate( average = (Exam1 + Exam2 + Final)/3 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
You can do multiple calculations within the same `mutate()` command, and you can even refer to columns that were created in the same `mutate()` command.
```
grades %>% mutate(
average = (Exam1 + Exam2 + Final)/3,
grade = cut(average, c(0, 60, 70, 80, 90, 100), # cut takes numeric variable
c( 'F','D','C','B','A')) ) # and makes a factor
```
```
## l.name Exam1 Exam2 Final average grade
## 1 Cox 93 98 96 95.66667 A
## 2 Dorian 89 70 85 81.33333 B
## 3 Kelso 80 82 81 81.00000 B
## 4 Turk 70 85 92 82.33333 B
```
We might look at this data frame and want to do some rounding. For example, I might want to take each numeric column and round it. In this case, the functions `mutate_at()` and `mutate_if()` allow us to apply a function to a particular column and save the output.
```
# for each column, if it is numeric, apply the round() function to the column
# while using any additional arguments. So round to 2 digits.
grades %>%
mutate_if( is.numeric, round, digits=2 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.67
## 2 Dorian 89 70 85 81.33
## 3 Kelso 80 82 81 81.00
## 4 Turk 70 85 92 82.33
```
The `mutate_at()` function works similarly, but we have to specify which columns to modify.
```
# round columns 2 through 5
grades %>%
mutate_at( 2:5, round, digits=2 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.67
## 2 Dorian 89 70 85 81.33
## 3 Kelso 80 82 81 81.00
## 4 Turk 70 85 92 82.33
```
```
# round columns that start with "ave"
grades %>%
mutate_at( vars(starts_with("ave")), round )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 96
## 2 Dorian 89 70 85 81
## 3 Kelso 80 82 81 81
## 4 Turk 70 85 92 82
```
```
# These do not work because they don't evaluate to column indices.
# I can only hope that at some point, this syntax works
#
# grades %>%
# mutate_at( starts_with("ave"), round )
#
# grades %>%
# mutate_at( Exam1:average, round, digits=2 )
```
Another situation I often run into is the need to select many columns, and calculate a sum or mean across them. Unfortunately the natural *tidyverse* way of doing this is a bit clumsy and I often resort to the following trick of using the base `apply()` function inside of a mutate command. Remember the `.` represents the data frame passed into the `mutate` function, so in each line we grab the appropriate columns and then stuff the result into `apply` and assign the output of the apply function to the new column.
```
grades %>%
mutate( Exam.Total = select(., Exam1:Final) %>% apply(1, sum) ) %>%
mutate( Exam.Avg = select(., Exam1:Final) %>% apply(1, mean))
```
```
## l.name Exam1 Exam2 Final average Exam.Total Exam.Avg
## 1 Cox 93 98 96 95.66667 287 95.66667
## 2 Dorian 89 70 85 81.33333 244 81.33333
## 3 Kelso 80 82 81 81.00000 243 81.00000
## 4 Turk 70 85 92 82.33333 247 82.33333
```
#### 7\.2\.1\.4 `summarise()`
By itself, this function is quite boring, but it will become useful later on. Its purpose is to calculate summary statistics using any or all of the data columns. Notice that we get to choose the name of the new column. The way to think about this is that we are collapsing information stored in multiple rows into a single row of values.
```
# calculate the mean of exam 1
grades %>% summarise( mean.E1=mean(Exam1))
```
```
## mean.E1
## 1 83
```
We could calculate multiple summary statistics if we like.
```
# calculate the mean and standard deviation
grades %>% summarise( mean.E1=mean(Exam1), stddev.E1=sd(Exam1) )
```
```
## mean.E1 stddev.E1
## 1 83 10.23067
```
If we want to apply the same statistic to each column, we use the `summarise_all()` command. We have to be a little careful here because the function you use has to work on every column that isn’t part of the grouping structure (see `group_by()`). There are two variants, `summarise_at()` and `summarise_if()`, that give you a bit more flexibility.
```
# calculate the mean and stddev of each column - Cannot do this to Names!
grades %>%
select( Exam1:Final ) %>%
summarise_all( funs(mean, sd) )
```
```
## Exam1_mean Exam2_mean Final_mean Exam1_sd Exam2_sd Final_sd
## 1 83 83.75 88.5 10.23067 11.5 6.757712
```
```
grades %>%
summarise_if(is.numeric, funs(Xbar=mean, SD=sd) )
```
```
## Exam1_Xbar Exam2_Xbar Final_Xbar average_Xbar Exam1_SD Exam2_SD Final_SD
## 1 83 83.75 88.5 85.08333 10.23067 11.5 6.757712
## average_SD
## 1 7.078266
```
#### 7\.2\.1\.5 Miscellaneous functions
There are a few more functions that are useful but aren’t as commonly used. For sampling, the functions `sample_n()` and `sample_frac()` will take a sub\-sample of either n rows or a fraction of the data set. The function `n()` returns the number of rows in the data set. Finally, `rename()` will rename a selected column.
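A minimal sketch of each on the grade book (in `rename()` the new column name goes on the left):
```
grades %>% sample_n( 2 )                  # two randomly chosen rows
grades %>% sample_frac( 0.5 )             # a random half of the rows
grades %>% summarise( n.students = n() )  # n() counts rows; use it inside summarise()/mutate()
grades %>% rename( Last.Name = l.name )   # rename the l.name column to Last.Name
```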
### 7\.2\.2 Split, apply, combine
Aside from unifying the syntax behind the common operations, the major strength of the `dplyr` package is the ability to split a data frame into a bunch of sub\-data frames, apply a sequence of one or more of the operations we just described, and then combine the results back together. We’ll consider data from an experiment on spinning wool into yarn. This experiment considered two different types of wool (A or B) and three different levels of tension on the thread. The response variable is the number of breaks in the resulting yarn. For each of the 6 `wool:tension` combinations, there are 9 replicated observations.
```
data(warpbreaks)
str(warpbreaks)
```
```
## 'data.frame': 54 obs. of 3 variables:
## $ breaks : num 26 30 54 25 70 52 51 26 67 18 ...
## $ wool : Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
## $ tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ...
```
The first thing we must do is create a data frame with additional information about how to break the data into sub\-data frames. In this case, I want to break the data up into the 6 wool\-by\-tension combinations. Initially we will just figure out how many rows are in each wool\-by\-tension combination.
```
# group_by: what variable(s) shall we group on.
# n() is a function that returns how many rows are in the
# currently selected sub-dataframe
warpbreaks %>%
group_by( wool, tension) %>% # grouping
summarise(n = n() ) # how many in each group
```
```
## # A tibble: 6 x 3
## # Groups: wool [?]
## wool tension n
## <fct> <fct> <int>
## 1 A L 9
## 2 A M 9
## 3 A H 9
## 4 B L 9
## 5 B M 9
## 6 B H 9
```
The `group_by` function takes a data.frame and returns the same data.frame, but with some extra information so that any subsequent function acts on each unique combination defined in the `group_by`. If you wish to remove this behavior, call `group_by()` with no grouping variables, or use `ungroup()`, to reset the grouping.
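For example, a minimal sketch showing that once the grouping is removed, `summarise()` collapses the whole data frame to a single row:
```
warpbreaks %>%
  group_by( wool, tension ) %>%  # define the sub-data frames
  ungroup() %>%                  # remove the grouping again
  summarise( n = n() )           # one row: n = 54, not 9 per group
```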
Using the same `summarise` function, we could calculate the group mean and standard deviation for each wool\-by\-tension group.
```
warpbreaks %>%
group_by(wool, tension) %>%
summarise( n = n(), # I added some formatting to show the
mean.breaks = mean(breaks), # reader I am calculating several
sd.breaks = sd(breaks)) # statistics.
```
```
## # A tibble: 6 x 5
## # Groups: wool [?]
## wool tension n mean.breaks sd.breaks
## <fct> <fct> <int> <dbl> <dbl>
## 1 A L 9 44.6 18.1
## 2 A M 9 24.0 8.66
## 3 A H 9 24.6 10.3
## 4 B L 9 28.2 9.86
## 5 B M 9 28.8 9.43
## 6 B H 9 18.8 4.89
```
Instead of summarizing each split, we might want to do some calculation whose output has the same number of rows as the input data frame. In this case I’ll tell `dplyr` that we are mutating the data frame instead of summarizing it. For example, suppose that I want to calculate the residual value \\\[e\_{ijk}\=y\_{ijk}\-\\bar{y}\_{ij\\cdot}\\] where \\(\\bar{y}\_{ij\\cdot}\\) is the mean of each `wool:tension` combination.
```
warpbreaks %>%
group_by(wool, tension) %>% # group by wool:tension
mutate(resid = breaks - mean(breaks)) %>% # mean(breaks) of the group!
head( ) # show the first couple of rows
```
```
## # A tibble: 6 x 4
## # Groups: wool, tension [1]
## breaks wool tension resid
## <dbl> <fct> <fct> <dbl>
## 1 26. A L -18.6
## 2 30. A L -14.6
## 3 54. A L 9.44
## 4 25. A L -19.6
## 5 70. A L 25.4
## 6 52. A L 7.44
```
### 7\.2\.3 Chaining commands together
In the previous examples we have used the `%>%` operator to make the code more readable but to really appreciate this, we should examine the alternative.
Suppose we have the results of a small 5K race. The data given to us is in the order that the runners signed up, but we want to calculate the results for each gender, calculate the placings, and then sort the data frame by gender and then place. We can think of this process as having three steps:
1. Splitting
2. Ranking
3. Re\-arranging.
```
# input the initial data
race.results <- data.frame(
name=c('Bob', 'Jeff', 'Rachel', 'Bonnie', 'Derek', 'April','Elise','David'),
time=c(21.23, 19.51, 19.82, 23.45, 20.23, 24.22, 28.83, 15.73),
gender=c('M','M','F','F','M','F','F','M'))
```
We could run all the commands together using the following code:
```
arrange(
mutate(
group_by(
race.results, # using race.results
gender), # group by gender
place = rank( time )), # mutate to calculate the place column
gender, place) # arrange the result by gender and place
```
```
## # A tibble: 8 x 4
## # Groups: gender [2]
## name time gender place
## <fct> <dbl> <fct> <dbl>
## 1 Rachel 19.8 F 1.
## 2 Bonnie 23.4 F 2.
## 3 April 24.2 F 3.
## 4 Elise 28.8 F 4.
## 5 David 15.7 M 1.
## 6 Jeff 19.5 M 2.
## 7 Derek 20.2 M 3.
## 8 Bob 21.2 M 4.
```
This is very difficult to read because you have to read the code *from the inside out*.
Another (and slightly more readable) way to complete our task is to save each intermediate step of our process and then use that in the next step:
```
temp.df0 <- race.results %>% group_by( gender)
temp.df1 <- temp.df0 %>% mutate( place = rank(time) )
temp.df2 <- temp.df1 %>% arrange( gender, place )
```
It would be nice if I didn’t have to save all these intermediate results, because keeping track of `temp.df1` and `temp.df2` gets pretty annoying if I keep changing the order in which things are calculated or add/subtract steps. This is exactly what `%>%` does for me.
```
race.results %>%
group_by( gender ) %>%
mutate( place = rank(time)) %>%
arrange( gender, place )
```
```
## # A tibble: 8 x 4
## # Groups: gender [2]
## name time gender place
## <fct> <dbl> <fct> <dbl>
## 1 Rachel 19.8 F 1.
## 2 Bonnie 23.4 F 2.
## 3 April 24.2 F 3.
## 4 Elise 28.8 F 4.
## 5 David 15.7 M 1.
## 6 Jeff 19.5 M 2.
## 7 Derek 20.2 M 3.
## 8 Bob 21.2 M 4.
```
7\.3 Exercises
--------------
1. The dataset `ChickWeight` tracks the weights of 48 baby chickens (chicks) fed four different diets.
1. Load the dataset using
```
data(ChickWeight)
```
2. Look at the help files for the description of the columns.
3. Remove all the observations except for observations from day 10 or day 20\.
4. Calculate the mean and standard deviation of the chick weights for each diet group on days 10 and 20\.
2. The OpenIntro textbook on statistics includes a data set on body dimensions.
1. Load the file using
```
Body <- read.csv('http://www.openintro.org/stat/data/bdims.csv')
```
2. The column sex is coded as a 1 if the individual is male and 0 if female. This is a non\-intuitive labeling system. Create a new column `sex.MF` that uses labels Male and Female. *Hint: recall either the `factor()` or `cut()` command!*
3. The columns `wgt` and `hgt` measure weight and height in kilograms and centimeters (respectively). Use these to calculate the Body Mass Index (BMI) for each individual where \\\[BMI\=\\frac{Weight\\,(kg)}{\\left\[Height\\,(m)\\right]^{2}}\\]
4. Double check that your calculated BMI column is correct by examining the summary statistics of the column. BMI values should be between 18 to 40 or so. Did you make an error in your calculation?
5. The function `cut` takes a vector of continuous numerical data and creates a factor based on your given cut\-points.
```
# Define a continuous vector to convert to a factor
x <- 1:10
# divide range of x into three groups of equal length
cut(x, breaks=3)
```
```
## [1] (0.991,4] (0.991,4] (0.991,4] (0.991,4] (4,7] (4,7] (4,7]
## [8] (7,10] (7,10] (7,10]
## Levels: (0.991,4] (4,7] (7,10]
```
```
# divide x into four groups, where I specify all 5 break points
cut(x, breaks = c(0, 2.5, 5.0, 7.5, 10))
```
```
## [1] (0,2.5] (0,2.5] (2.5,5] (2.5,5] (2.5,5] (5,7.5] (5,7.5]
## [8] (7.5,10] (7.5,10] (7.5,10]
## Levels: (0,2.5] (2.5,5] (5,7.5] (7.5,10]
```
```
# (0,2.5] (2.5,5] means 2.5 is included in first group
# right=FALSE changes this to make 2.5 included in the second
# divide x into 3 groups, but give them a nicer
# set of group names
cut(x, breaks=3, labels=c('Low','Medium','High'))
```
```
## [1] Low Low Low Low Medium Medium Medium High High High
## Levels: Low Medium High
```
Create a new column in the data frame that divides the age into decades (10\-19, 20\-29, 30\-39, etc.). Notice the oldest person in the study is 67\.
```
Body <- Body %>%
mutate( Age.Grp = cut(age,
breaks=c(10,20,30,40,50,60,70),
right=FALSE))
```
6. Find the average BMI for each Sex\-by\-Age combination.
7\.1 Classical functions for summarizing rows and columns
---------------------------------------------------------
### 7\.1\.1 `summary()`
The first method is to calculate some basic summary statistics (minimum, 25th, 50th, 75th percentiles, maximum and mean) of each column. If a column is categorical, the summary function will return the number of observations in each category.
```
# use the iris data set which has both numerical and categorical variables
data( iris )
str(iris) # recall what columns we have
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
# display the summary for each column
summary( iris )
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100
## 1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300
## Median :5.800 Median :3.000 Median :4.350 Median :1.300
## Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
## 3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
## Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
## Species
## setosa :50
## versicolor:50
## virginica :50
##
##
##
```
### 7\.1\.2 `apply()`
The summary function is convenient, but we want the ability to pick another function to apply to each column and possibly to each row. To demonstrate this, suppose we have data frame that contains students grades over the semester.
```
# make up some data
grades <- data.frame(
l.name = c('Cox', 'Dorian', 'Kelso', 'Turk'),
Exam1 = c(93, 89, 80, 70),
Exam2 = c(98, 70, 82, 85),
Final = c(96, 85, 81, 92) )
```
The `apply()` function will apply an arbitrary function to each row (or column) of a matrix or a data frame and then aggregate the results into a vector.
```
# Because I can't take the mean of the last names column,
# remove the name column
scores <- grades[,-1]
scores
```
```
## Exam1 Exam2 Final
## 1 93 98 96
## 2 89 70 85
## 3 80 82 81
## 4 70 85 92
```
```
# Summarize each column by calculating the mean.
apply( scores, # what object do I want to apply the function to
MARGIN=2, # rows = 1, columns = 2, (same order as [rows, cols]
FUN=mean # what function do we want to apply
)
```
```
## Exam1 Exam2 Final
## 83.00 83.75 88.50
```
To apply a function to the rows, we just change which margin we want. We might want to calculate the average exam score for person.
```
apply( scores, # what object do I want to apply the function to
MARGIN=1, # rows = 1, columns = 2, (same order as [rows, cols]
FUN=mean # what function do we want to apply
)
```
```
## [1] 95.66667 81.33333 81.00000 82.33333
```
This is useful, but it would be more useful to concatenate this as a new column in my grades data frame.
```
average <- apply(
scores, # what object do I want to apply the function to
MARGIN=1, # rows = 1, columns = 2, (same order as [rows, cols]
FUN=mean # what function do we want to apply
)
grades <- cbind( grades, average ) # squish together
grades
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
There are several variants of the `apply()` function, and the variant I use most often is the function `sapply()`, which will apply a function to each element of a list or vector and returns a corresponding list or vector of results.
### 7\.1\.1 `summary()`
The first method is to calculate some basic summary statistics (minimum, 25th, 50th, 75th percentiles, maximum and mean) of each column. If a column is categorical, the summary function will return the number of observations in each category.
```
# use the iris data set which has both numerical and categorical variables
data( iris )
str(iris) # recall what columns we have
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
# display the summary for each column
summary( iris )
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100
## 1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300
## Median :5.800 Median :3.000 Median :4.350 Median :1.300
## Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
## 3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
## Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
## Species
## setosa :50
## versicolor:50
## virginica :50
##
##
##
```
### 7\.1\.2 `apply()`
The summary function is convenient, but we want the ability to pick another function to apply to each column and possibly to each row. To demonstrate this, suppose we have data frame that contains students grades over the semester.
```
# make up some data
grades <- data.frame(
l.name = c('Cox', 'Dorian', 'Kelso', 'Turk'),
Exam1 = c(93, 89, 80, 70),
Exam2 = c(98, 70, 82, 85),
Final = c(96, 85, 81, 92) )
```
The `apply()` function will apply an arbitrary function to each row (or column) of a matrix or a data frame and then aggregate the results into a vector.
```
# Because I can't take the mean of the last names column,
# remove the name column
scores <- grades[,-1]
scores
```
```
## Exam1 Exam2 Final
## 1 93 98 96
## 2 89 70 85
## 3 80 82 81
## 4 70 85 92
```
```
# Summarize each column by calculating the mean.
apply( scores, # what object do I want to apply the function to
MARGIN=2, # rows = 1, columns = 2, (same order as [rows, cols]
FUN=mean # what function do we want to apply
)
```
```
## Exam1 Exam2 Final
## 83.00 83.75 88.50
```
To apply a function to the rows, we just change which margin we want. We might want to calculate the average exam score for person.
```
apply( scores, # what object do I want to apply the function to
MARGIN=1, # rows = 1, columns = 2, (same order as [rows, cols]
FUN=mean # what function do we want to apply
)
```
```
## [1] 95.66667 81.33333 81.00000 82.33333
```
This is useful, but it would be more useful to concatenate this as a new column in my grades data frame.
```
average <- apply(
scores, # what object do I want to apply the function to
MARGIN=1, # rows = 1, columns = 2, (same order as [rows, cols]
FUN=mean # what function do we want to apply
)
grades <- cbind( grades, average ) # squish together
grades
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
There are several variants of the `apply()` function, and the variant I use most often is the function `sapply()`, which will apply a function to each element of a list or vector and returns a corresponding list or vector of results.
7\.2 Package `dplyr`
--------------------
Many of the tools to manipulate data frames in R were written without a consistent syntax and are difficult use together. To remedy this, Hadley Wickham (the writer of `ggplot2`) introduced a package called plyr which was quite useful. As with many projects, his first version was good but not great and he introduced an improved version that works exclusively with data.frames called `dplyr` which we will investigate. The package `dplyr` strives to provide a convenient and consistent set of functions to handle the most common data frame manipulations and a mechanism for chaining these operations together to perform complex tasks.
The Dr Wickham has put together a very nice introduction to the package that explains in more detail how the various pieces work and I encourage you to read it at some point. \[<http://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html>].
One of the aspects about the `data.frame` object is that R does some simplification for you, but it does not do it in a consistent manner. Somewhat obnoxiously character strings are always converted to factors and subsetting might return a `data.frame` or a `vector` or a `scalar`. This is fine at the command line, but can be problematic when programming. Furthermore, many operations are pretty slow using `data.frame`. To get around this, Dr Wickham introduced a modified version of the `data.frame` called a `tibble`. A `tibble` is a `data.frame` but with a few extra bits. For now we can ignore the differences.
The pipe command `%>%` allows for very readable code. The idea is that the `%>%` operator works by translating the command `a %>% f(b)` to the expression `f(a,b)`. This operator works on any function and was introduced in the `magrittr` package. The beauty of this comes when you have a suite of functions that takes input arguments of the same type as their output.
For example, if we wanted to start with `x`, and first apply function `f()`, then `g()`, and then `h()`, the usual R command would be `h(g(f(x)))` which is hard to read because you have to start reading at the *innermost* set of parentheses. Using the pipe command `%>%`, this sequence of operations becomes `x %>% f() %>% g() %>% h()`.
| Written | Meaning |
| --- | --- |
| `a %>% f(b)` | `f(a,b)` |
| `b %>% f(a, .)` | `f(a, b)` |
| `x %>% f() %>% g()` | `g( f(x) )` |
In `dplyr`, all the functions below take a *data set as its first argument* and *outputs an appropriately modified data set*. This will allow me to chain together commands in a readable fashion. The pipe command works with any function, not just the `dplyr` functions and I often find myself using it all over the place.
### 7\.2\.1 Verbs
The foundational operations to perform on a data set are:
* Subsetting \- Returns a with only particular columns or rows
– `select` \- Selecting a subset of columns by name or column number.
– `filter` \- Selecting a subset of rows from a data frame based on logical expressions.
– `slice` \- Selecting a subset of rows by row number.
* `arrange` \- Re\-ordering the rows of a data frame.
* `mutate` \- Add a new column that is some function of other columns.
* `summarise` \- calculate some summary statistic of a column of data. This collapses a set of rows into a single row.
Each of these operations is a function in the package `dplyr`. These functions all have a similar calling syntax, the first argument is a data set, subsequent arguments describe what to do with the input data frame and you can refer to the columns without using the `df$column` notation. All of these functions will return a data set.
#### 7\.2\.1\.1 Subsetting with `select`, `filter`, and `slice`
These function allows you select certain columns and rows of a data frame.
##### 7\.2\.1\.1\.1 `select()`
Often you only want to work with a small number of columns of a data frame. It is relatively easy to do this using the standard `[,col.name]` notation, but is often pretty tedious.
```
# recall what the grades are
grades
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
I could select the columns Exam columns by hand, or by using an extension of the `:` operator
```
# select( grades, Exam1, Exam2 ) # select from `grades` columns Exam1, Exam2
grades %>% select( Exam1, Exam2 ) # Exam1 and Exam2
```
```
## Exam1 Exam2
## 1 93 98
## 2 89 70
## 3 80 82
## 4 70 85
```
```
grades %>% select( Exam1:Final ) # Columns Exam1 through Final
```
```
## Exam1 Exam2 Final
## 1 93 98 96
## 2 89 70 85
## 3 80 82 81
## 4 70 85 92
```
```
grades %>% select( -Exam1 ) # Negative indexing by name works
```
```
## l.name Exam2 Final average
## 1 Cox 98 96 95.66667
## 2 Dorian 70 85 81.33333
## 3 Kelso 82 81 81.00000
## 4 Turk 85 92 82.33333
```
```
grades %>% select( 1:2 ) # Can select column by column position
```
```
## l.name Exam1
## 1 Cox 93
## 2 Dorian 89
## 3 Kelso 80
## 4 Turk 70
```
The `select()` command has a few other tricks. There are functional calls that describe the columns you wish to select that take advantage of pattern matching. I generally can get by with `starts_with()`, `ends_with()`, and `contains()`, but there is a final operator `matches()` that takes a regular expression.
```
grades %>% select( starts_with('Exam') ) # Exam1 and Exam2
```
```
## Exam1 Exam2
## 1 93 98
## 2 89 70
## 3 80 82
## 4 70 85
```
The `dplyr::select` function is quite handy, but there are several other packages out there that have a `select` function and we can get into trouble with loading other packages with the same function names. If I encounter the `select` function behaving in a weird manner or complaining about an input argument, my first remedy is to be explicit about it is the `dplyr::select()` function by appending the package name at the start.
##### 7\.2\.1\.1\.2 `filter()`
It is common to want to select particular rows where we have some logically expression to pick the rows.
```
# select students with Final grades greater than 90
grades %>% filter(Final > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Turk 70 85 92 82.33333
```
You can have multiple logical expressions to select rows and they will be logically combined so that only rows that satisfy all of the conditions are selected. The logicals are joined together using `&` (and) operator or the `|` (or) operator and you may explicitly use other logicals. For example a factor column type might be used to select rows where type is either one or two via the following: `type==1 | type==2`.
```
# select students with Final grades above 90 and
# average score also above 90
grades %>% filter(Final > 90, average > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
```
```
# we could also use an "and" condition
grades %>% filter(Final > 90 & average > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
```
##### 7\.2\.1\.1\.3 `slice()`
When you want to filter rows based on row number, this is called slicing.
```
# grab the first 2 rows
grades %>% slice(1:2)
```
```
## # A tibble: 2 x 5
## l.name Exam1 Exam2 Final average
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Cox 93. 98. 96. 95.7
## 2 Dorian 89. 70. 85. 81.3
```
#### 7\.2\.1\.2 `arrange()`
We often need to re\-order the rows of a data frame. For example, we might wish to take our grade book and sort the rows by the average score, or perhaps alphabetically. The `arrange()` function does exactly that. The first argument is the data frame to re\-order, and the subsequent arguments are the columns to sort on. The order of the sorting column determines the precedent… the first sorting column is first used and the second sorting column is only used to break ties.
```
grades %>% arrange(l.name)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
The default sorting is in ascending order, so to sort the grades with the highest scoring person in the first row, we must tell arrange to do it in descending order using `desc(column.name)`.
```
grades %>% arrange(desc(Final))
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Turk 70 85 92 82.33333
## 3 Dorian 89 70 85 81.33333
## 4 Kelso 80 82 81 81.00000
```
In a more complicated example, consider the following data and we want to order it first by Treatment Level and secondarily by the y\-value. I want the Treatment level in the default ascending order (Low, Medium, High), but the y variable in descending order.
```
# make some data
dd <- data.frame(
Trt = factor(c("High", "Med", "High", "Low"),
levels = c("Low", "Med", "High")),
y = c(8, 3, 9, 9),
z = c(1, 1, 1, 2))
dd
```
```
## Trt y z
## 1 High 8 1
## 2 Med 3 1
## 3 High 9 1
## 4 Low 9 2
```
```
# arrange the rows first by treatment, and then by y (y in descending order)
dd %>% arrange(Trt, desc(y))
```
```
## Trt y z
## 1 Low 9 2
## 2 Med 3 1
## 3 High 9 1
## 4 High 8 1
```
#### 7\.2\.1\.3 mutate()
I often need to create a new column that is some function of the old columns. This was often cumbersome. Consider code to calculate the average grade in my grade book example.
```
grades$average <- (grades$Exam1 + grades$Exam2 + grades$Final) / 3
```
Instead, we could use the `mutate()` function and avoid all the `grades$` nonsense.
```
grades %>% mutate( average = (Exam1 + Exam2 + Final)/3 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
You can do multiple calculations within the same `mutate()` command, and you can even refer to columns that were created in the same `mutate()` command.
```
grades %>% mutate(
average = (Exam1 + Exam2 + Final)/3,
grade = cut(average, c(0, 60, 70, 80, 90, 100), # cut takes numeric variable
c( 'F','D','C','B','A')) ) # and makes a factor
```
```
## l.name Exam1 Exam2 Final average grade
## 1 Cox 93 98 96 95.66667 A
## 2 Dorian 89 70 85 81.33333 B
## 3 Kelso 80 82 81 81.00000 B
## 4 Turk 70 85 92 82.33333 B
```
We might look at this data frame and want to do some rounding. For example, I might want to take each numeric column and round it. In this case, the functions `mutate_at()` and `mutate_if()` allow us to apply a function to a particular column and save the output.
```
# for each column, if it is numeric, apply the round() function to the column
# while using any additional arguments. So round two digits.
grades %>%
mutate_if( is.numeric, round, digits=2 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.67
## 2 Dorian 89 70 85 81.33
## 3 Kelso 80 82 81 81.00
## 4 Turk 70 85 92 82.33
```
The `mutate_at()` function works similarly, but we just have to specify with columns.
```
# round columns 2 through 5
grades %>%
mutate_at( 2:5, round, digits=2 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.67
## 2 Dorian 89 70 85 81.33
## 3 Kelso 80 82 81 81.00
## 4 Turk 70 85 92 82.33
```
```
# round columns that start with "ave"
grades %>%
mutate_at( vars(starts_with("ave")), round )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 96
## 2 Dorian 89 70 85 81
## 3 Kelso 80 82 81 81
## 4 Turk 70 85 92 82
```
```
# These do not work because they doesn't evaluate to column indices.
# I can only hope that at some point, this syntax works
#
# grades %>%
# mutate_at( starts_with("ave"), round )
#
# grades %>%
# mutate_at( Exam1:average, round, digits=2 )
```
Another situation I often run into is the need to select many columns, and calculate a sum or mean across them. Unfortunately the natural *tidyverse* way of doing this is a bit clumsy and I often resort to the following trick of using the base `apply()` function inside of a mutate command. Remember the `.` represents the data frame passed into the `mutate` function, so in each line we grab the appropriate columns and then stuff the result into `apply` and assign the output of the apply function to the new column.
```
grades %>%
mutate( Exam.Total = select(., Exam1:Final) %>% apply(1, sum) ) %>%
mutate( Exam.Avg = select(., Exam1:Final) %>% apply(1, mean))
```
```
## l.name Exam1 Exam2 Final average Exam.Total Exam.Avg
## 1 Cox 93 98 96 95.66667 287 95.66667
## 2 Dorian 89 70 85 81.33333 244 81.33333
## 3 Kelso 80 82 81 81.00000 243 81.00000
## 4 Turk 70 85 92 82.33333 247 82.33333
```
#### 7\.2\.1\.4 summarise()
By itself, this function is quite boring, but will become useful later on. Its purpose is to calculate summary statistics using any or all of the data columns. Notice that we get to chose the name of the new column. The way to think about this is that we are collapsing information stored in multiple rows into a single row of values.
```
# calculate the mean of exam 1
grades %>% summarise( mean.E1=mean(Exam1))
```
```
## mean.E1
## 1 83
```
We could calculate multiple summary statistics if we like.
```
# calculate the mean and standard deviation
grades %>% summarise( mean.E1=mean(Exam1), stddev.E1=sd(Exam1) )
```
```
## mean.E1 stddev.E1
## 1 83 10.23067
```
If we want to apply the same statistic to each column, we use the `summarise_all()` command. We have to be a little careful here because the function you use has to work on every column (that isn’t part of the grouping structure (see `group_by()`)). There are two variants `summarize_at()` and `summarize_if()` that give you a bit more flexibility.
```
# calculate the mean and stddev of each column - Cannot do this to Names!
grades %>%
select( Exam1:Final ) %>%
summarise_all( funs(mean, sd) )
```
```
## Exam1_mean Exam2_mean Final_mean Exam1_sd Exam2_sd Final_sd
## 1 83 83.75 88.5 10.23067 11.5 6.757712
```
```
grades %>%
summarise_if(is.numeric, funs(Xbar=mean, SD=sd) )
```
```
## Exam1_Xbar Exam2_Xbar Final_Xbar average_Xbar Exam1_SD Exam2_SD Final_SD
## 1 83 83.75 88.5 85.08333 10.23067 11.5 6.757712
## average_SD
## 1 7.078266
```
#### 7\.2\.1\.5 Miscellaneous functions
There are some more function that are useful but aren’t as commonly used. For sampling the functions `sample_n()` and `sample_frac()` will take a sub\-sample of either n rows or of a fraction of the data set. The function `n()` returns the number of rows in the data set. Finally `rename()` will rename a selected column.
### 7\.2\.2 Split, apply, combine
Aside from unifying the syntax behind the common operations, the major strength of the `dplyr` package is the ability to split a data frame into a bunch of sub\-data frames, apply a sequence of one or more of the operations we just described, and then combine results back together. We’ll consider data from an experiment from spinning wool into yarn. This experiment considered two different types of wool (A or B) and three different levels of tension on the thread. The response variable is the number of breaks in the resulting yarn. For each of the 6 `wool:tension` combinations, there are 9 replicated observations per `wool:tension` level.
```
data(warpbreaks)
str(warpbreaks)
```
```
## 'data.frame': 54 obs. of 3 variables:
## $ breaks : num 26 30 54 25 70 52 51 26 67 18 ...
## $ wool : Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
## $ tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ...
```
The first we must do is to create a data frame with additional information about how to break the data into sub\-data frames. In this case, I want to break the data up into the 6 wool\-by\-tension combinations. Initially we will just figure out how many rows are in each wool\-by\-tension combination.
```
# group_by: what variable(s) shall we group on.
# n() is a function that returns how many rows are in the
# currently selected sub-dataframe
warpbreaks %>%
group_by( wool, tension) %>% # grouping
summarise(n = n() ) # how many in each group
```
```
## # A tibble: 6 x 3
## # Groups: wool [?]
## wool tension n
## <fct> <fct> <int>
## 1 A L 9
## 2 A M 9
## 3 A H 9
## 4 B L 9
## 5 B M 9
## 6 B H 9
```
The `group_by` function takes a data.frame and returns the same data.frame, but with some extra information so that any subsequent function acts on each unique combination defined in the `group_by`. If you wish to remove this behavior, use `group_by()` to reset the grouping to have no grouping variable.
Using the same `summarise` function, we could calculate the group mean and standard deviation for each wool\-by\-tension group.
```
warpbreaks %>%
group_by(wool, tension) %>%
summarise( n = n(), # I added some formatting to show the
mean.breaks = mean(breaks), # reader I am calculating several
sd.breaks = sd(breaks)) # statistics.
```
```
## # A tibble: 6 x 5
## # Groups: wool [?]
## wool tension n mean.breaks sd.breaks
## <fct> <fct> <int> <dbl> <dbl>
## 1 A L 9 44.6 18.1
## 2 A M 9 24.0 8.66
## 3 A H 9 24.6 10.3
## 4 B L 9 28.2 9.86
## 5 B M 9 28.8 9.43
## 6 B H 9 18.8 4.89
```
If instead of summarizing each split, we might want to just do some calculation and the output should have the same number of rows as the input data frame. In this case I’ll tell `dplyr` that we are mutating the data frame instead of summarizing it. For example, suppose that I want to calculate the residual value \\\[e\_{ijk}\=y\_{ijk}\-\\bar{y}\_{ij\\cdot}\\] where \\(\\bar{y}\_{ij\\cdot}\\) is the mean of each `wool:tension` combination.
```
warpbreaks %>%
group_by(wool, tension) %>% # group by wool:tension
mutate(resid = breaks - mean(breaks)) %>% # mean(breaks) of the group!
head( ) # show the first couple of rows
```
```
## # A tibble: 6 x 4
## # Groups: wool, tension [1]
## breaks wool tension resid
## <dbl> <fct> <fct> <dbl>
## 1 26. A L -18.6
## 2 30. A L -14.6
## 3 54. A L 9.44
## 4 25. A L -19.6
## 5 70. A L 25.4
## 6 52. A L 7.44
```
### 7\.2\.3 Chaining commands together
In the previous examples we have used the `%>%` operator to make the code more readable but to really appreciate this, we should examine the alternative.
Suppose we have the results of a small 5K race. The data given to us is in the order that the runners signed up but we want to calculate the results for each gender, calculate the placings, and the sort the data frame by gender and then place. We can think of this process as having three steps:
1. Splitting
2. Ranking
3. Re\-arranging.
```
# input the initial data
race.results <- data.frame(
name=c('Bob', 'Jeff', 'Rachel', 'Bonnie', 'Derek', 'April','Elise','David'),
time=c(21.23, 19.51, 19.82, 23.45, 20.23, 24.22, 28.83, 15.73),
gender=c('M','M','F','F','M','F','F','M'))
```
We could run all the commands together using the following code:
```
arrange(
mutate(
group_by(
race.results, # using race.results
gender), # group by gender
place = rank( time )), # mutate to calculate the place column
gender, place) # arrange the result by gender and place
```
```
## # A tibble: 8 x 4
## # Groups: gender [2]
## name time gender place
## <fct> <dbl> <fct> <dbl>
## 1 Rachel 19.8 F 1.
## 2 Bonnie 23.4 F 2.
## 3 April 24.2 F 3.
## 4 Elise 28.8 F 4.
## 5 David 15.7 M 1.
## 6 Jeff 19.5 M 2.
## 7 Derek 20.2 M 3.
## 8 Bob 21.2 M 4.
```
This is very difficult to read because you have to read the code *from the inside out*.
Another (and slightly more readable) way to complete our task is to save each intermediate step of our process and then use that in the next step:
```
temp.df0 <- race.results %>% group_by( gender)
temp.df1 <- temp.df0 %>% mutate( place = rank(time) )
temp.df2 <- temp.df1 %>% arrange( gender, place )
```
It would be nice if I didn’t have to save all these intermediate results because keeping track of temp1 and temp2 gets pretty annoying if I keep changing the order of how things or calculated or add/subtract steps. This is exactly what `%>%` does for me.
```
race.results %>%
group_by( gender ) %>%
mutate( place = rank(time)) %>%
arrange( gender, place )
```
```
## # A tibble: 8 x 4
## # Groups: gender [2]
## name time gender place
## <fct> <dbl> <fct> <dbl>
## 1 Rachel 19.8 F 1.
## 2 Bonnie 23.4 F 2.
## 3 April 24.2 F 3.
## 4 Elise 28.8 F 4.
## 5 David 15.7 M 1.
## 6 Jeff 19.5 M 2.
## 7 Derek 20.2 M 3.
## 8 Bob 21.2 M 4.
```
### 7\.2\.1 Verbs
The foundational operations to perform on a data set are:
* Subsetting \- Returns a with only particular columns or rows
– `select` \- Selecting a subset of columns by name or column number.
– `filter` \- Selecting a subset of rows from a data frame based on logical expressions.
– `slice` \- Selecting a subset of rows by row number.
* `arrange` \- Re\-ordering the rows of a data frame.
* `mutate` \- Add a new column that is some function of other columns.
* `summarise` \- calculate some summary statistic of a column of data. This collapses a set of rows into a single row.
Each of these operations is a function in the package `dplyr`. These functions all have a similar calling syntax, the first argument is a data set, subsequent arguments describe what to do with the input data frame and you can refer to the columns without using the `df$column` notation. All of these functions will return a data set.
#### 7\.2\.1\.1 Subsetting with `select`, `filter`, and `slice`
These function allows you select certain columns and rows of a data frame.
##### 7\.2\.1\.1\.1 `select()`
Often you only want to work with a small number of columns of a data frame. It is relatively easy to do this using the standard `[,col.name]` notation, but is often pretty tedious.
```
# recall what the grades are
grades
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
I could select the columns Exam columns by hand, or by using an extension of the `:` operator
```
# select( grades, Exam1, Exam2 ) # select from `grades` columns Exam1, Exam2
grades %>% select( Exam1, Exam2 ) # Exam1 and Exam2
```
```
## Exam1 Exam2
## 1 93 98
## 2 89 70
## 3 80 82
## 4 70 85
```
```
grades %>% select( Exam1:Final ) # Columns Exam1 through Final
```
```
## Exam1 Exam2 Final
## 1 93 98 96
## 2 89 70 85
## 3 80 82 81
## 4 70 85 92
```
```
grades %>% select( -Exam1 ) # Negative indexing by name works
```
```
## l.name Exam2 Final average
## 1 Cox 98 96 95.66667
## 2 Dorian 70 85 81.33333
## 3 Kelso 82 81 81.00000
## 4 Turk 85 92 82.33333
```
```
grades %>% select( 1:2 ) # Can select column by column position
```
```
## l.name Exam1
## 1 Cox 93
## 2 Dorian 89
## 3 Kelso 80
## 4 Turk 70
```
The `select()` command has a few other tricks. There are functional calls that describe the columns you wish to select that take advantage of pattern matching. I generally can get by with `starts_with()`, `ends_with()`, and `contains()`, but there is a final operator `matches()` that takes a regular expression.
```
grades %>% select( starts_with('Exam') ) # Exam1 and Exam2
```
```
## Exam1 Exam2
## 1 93 98
## 2 89 70
## 3 80 82
## 4 70 85
```
The `dplyr::select` function is quite handy, but there are several other packages out there that have a `select` function and we can get into trouble with loading other packages with the same function names. If I encounter the `select` function behaving in a weird manner or complaining about an input argument, my first remedy is to be explicit about it is the `dplyr::select()` function by appending the package name at the start.
##### 7\.2\.1\.1\.2 `filter()`
It is common to want to select particular rows where we have some logically expression to pick the rows.
```
# select students with Final grades greater than 90
grades %>% filter(Final > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Turk 70 85 92 82.33333
```
You can have multiple logical expressions to select rows and they will be logically combined so that only rows that satisfy all of the conditions are selected. The logicals are joined together using `&` (and) operator or the `|` (or) operator and you may explicitly use other logicals. For example a factor column type might be used to select rows where type is either one or two via the following: `type==1 | type==2`.
```
# select students with Final grades above 90 and
# average score also above 90
grades %>% filter(Final > 90, average > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
```
```
# we could also use an "and" condition
grades %>% filter(Final > 90 & average > 90)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
```
##### 7\.2\.1\.1\.3 `slice()`
When you want to filter rows based on row number, this is called slicing.
```
# grab the first 2 rows
grades %>% slice(1:2)
```
```
## # A tibble: 2 x 5
## l.name Exam1 Exam2 Final average
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Cox 93. 98. 96. 95.7
## 2 Dorian 89. 70. 85. 81.3
```
#### 7\.2\.1\.2 `arrange()`
We often need to re\-order the rows of a data frame. For example, we might wish to take our grade book and sort the rows by the average score, or perhaps alphabetically. The `arrange()` function does exactly that. The first argument is the data frame to re\-order, and the subsequent arguments are the columns to sort on. The order of the sorting column determines the precedent… the first sorting column is first used and the second sorting column is only used to break ties.
```
grades %>% arrange(l.name)
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
The default sorting is in ascending order, so to sort the grades with the highest scoring person in the first row, we must tell arrange to do it in descending order using `desc(column.name)`.
```
grades %>% arrange(desc(Final))
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Turk 70 85 92 82.33333
## 3 Dorian 89 70 85 81.33333
## 4 Kelso 80 82 81 81.00000
```
In a more complicated example, consider the following data and we want to order it first by Treatment Level and secondarily by the y\-value. I want the Treatment level in the default ascending order (Low, Medium, High), but the y variable in descending order.
```
# make some data
dd <- data.frame(
Trt = factor(c("High", "Med", "High", "Low"),
levels = c("Low", "Med", "High")),
y = c(8, 3, 9, 9),
z = c(1, 1, 1, 2))
dd
```
```
## Trt y z
## 1 High 8 1
## 2 Med 3 1
## 3 High 9 1
## 4 Low 9 2
```
```
# arrange the rows first by treatment, and then by y (y in descending order)
dd %>% arrange(Trt, desc(y))
```
```
## Trt y z
## 1 Low 9 2
## 2 Med 3 1
## 3 High 9 1
## 4 High 8 1
```
#### 7\.2\.1\.3 `mutate()`
I often need to create a new column that is some function of the old columns. In base R this was often cumbersome. Consider code to calculate the average grade in my grade book example.
```
grades$average <- (grades$Exam1 + grades$Exam2 + grades$Final) / 3
```
Instead, we could use the `mutate()` function and avoid all the `grades$` nonsense.
```
grades %>% mutate( average = (Exam1 + Exam2 + Final)/3 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.66667
## 2 Dorian 89 70 85 81.33333
## 3 Kelso 80 82 81 81.00000
## 4 Turk 70 85 92 82.33333
```
You can do multiple calculations within the same `mutate()` command, and you can even refer to columns that were created in the same `mutate()` command.
```
grades %>% mutate(
average = (Exam1 + Exam2 + Final)/3,
grade = cut(average, c(0, 60, 70, 80, 90, 100), # cut takes numeric variable
c( 'F','D','C','B','A')) ) # and makes a factor
```
```
## l.name Exam1 Exam2 Final average grade
## 1 Cox 93 98 96 95.66667 A
## 2 Dorian 89 70 85 81.33333 B
## 3 Kelso 80 82 81 81.00000 B
## 4 Turk 70 85 92 82.33333 B
```
We might look at this data frame and want to do some rounding. For example, I might want to take each numeric column and round it. In this case, the functions `mutate_at()` and `mutate_if()` allow us to apply a function to particular columns and save the output.
```
# for each column, if it is numeric, apply the round() function to the column
# while using any additional arguments. So round to two digits.
grades %>%
mutate_if( is.numeric, round, digits=2 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.67
## 2 Dorian 89 70 85 81.33
## 3 Kelso 80 82 81 81.00
## 4 Turk 70 85 92 82.33
```
The `mutate_at()` function works similarly, but we just have to specify which columns to modify.
```
# round columns 2 through 5
grades %>%
mutate_at( 2:5, round, digits=2 )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 95.67
## 2 Dorian 89 70 85 81.33
## 3 Kelso 80 82 81 81.00
## 4 Turk 70 85 92 82.33
```
```
# round columns that start with "ave"
grades %>%
mutate_at( vars(starts_with("ave")), round )
```
```
## l.name Exam1 Exam2 Final average
## 1 Cox 93 98 96 96
## 2 Dorian 89 70 85 81
## 3 Kelso 80 82 81 81
## 4 Turk 70 85 92 82
```
```
# These do not work because they don't evaluate to column indices.
# I can only hope that at some point, this syntax works
#
# grades %>%
# mutate_at( starts_with("ave"), round )
#
# grades %>%
# mutate_at( Exam1:average, round, digits=2 )
```
Another situation I often run into is the need to select many columns, and calculate a sum or mean across them. Unfortunately the natural *tidyverse* way of doing this is a bit clumsy and I often resort to the following trick of using the base `apply()` function inside of a mutate command. Remember the `.` represents the data frame passed into the `mutate` function, so in each line we grab the appropriate columns and then stuff the result into `apply` and assign the output of the apply function to the new column.
```
grades %>%
mutate( Exam.Total = select(., Exam1:Final) %>% apply(1, sum) ) %>%
mutate( Exam.Avg = select(., Exam1:Final) %>% apply(1, mean))
```
```
## l.name Exam1 Exam2 Final average Exam.Total Exam.Avg
## 1 Cox 93 98 96 95.66667 287 95.66667
## 2 Dorian 89 70 85 81.33333 244 81.33333
## 3 Kelso 80 82 81 81.00000 243 81.00000
## 4 Turk 70 85 92 82.33333 247 82.33333
```
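In more recent versions of dplyr (1.0 and later), `rowwise()` together with `c_across()` provides a tidyverse\-native alternative to the `apply()` trick. A sketch, assuming a sufficiently new dplyr is installed:
```
grades %>%
  rowwise() %>%                                        # treat each row as its own group
  mutate( Exam.Total = sum(  c_across(Exam1:Final) ),  # row-wise sum over the exam columns
          Exam.Avg   = mean( c_across(Exam1:Final) )) %>%
  ungroup()                                            # drop the row-wise grouping
```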
#### 7\.2\.1\.4 `summarise()`
By itself, this function is quite boring, but it will become useful later on. Its purpose is to calculate summary statistics using any or all of the data columns. Notice that we get to choose the name of the new column. The way to think about this is that we are collapsing information stored in multiple rows into a single row of values.
```
# calculate the mean of exam 1
grades %>% summarise( mean.E1=mean(Exam1))
```
```
## mean.E1
## 1 83
```
We could calculate multiple summary statistics if we like.
```
# calculate the mean and standard deviation
grades %>% summarise( mean.E1=mean(Exam1), stddev.E1=sd(Exam1) )
```
```
## mean.E1 stddev.E1
## 1 83 10.23067
```
If we want to apply the same statistic to each column, we use the `summarise_all()` command. We have to be a little careful here because the function you use has to work on every column that isn’t part of the grouping structure (see `group_by()`). There are two variants, `summarize_at()` and `summarize_if()`, that give you a bit more flexibility.
```
# calculate the mean and stddev of each column - Cannot do this to Names!
grades %>%
select( Exam1:Final ) %>%
summarise_all( funs(mean, sd) )
```
```
## Exam1_mean Exam2_mean Final_mean Exam1_sd Exam2_sd Final_sd
## 1 83 83.75 88.5 10.23067 11.5 6.757712
```
```
grades %>%
summarise_if(is.numeric, funs(Xbar=mean, SD=sd) )
```
```
## Exam1_Xbar Exam2_Xbar Final_Xbar average_Xbar Exam1_SD Exam2_SD Final_SD
## 1 83 83.75 88.5 85.08333 10.23067 11.5 6.757712
## average_SD
## 1 7.078266
```
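Note that newer versions of dplyr deprecate `funs()` in favor of a named list of functions; the equivalent call would be:
```
# same summaries as above, written without the deprecated funs()
grades %>%
  summarise_if( is.numeric, list(Xbar=mean, SD=sd) )
```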
#### 7\.2\.1\.5 Miscellaneous functions
There are a few more functions that are useful but aren’t as commonly used. For sampling, the functions `sample_n()` and `sample_frac()` will take a sub\-sample of either n rows or a fraction of the data set. The function `n()` returns the number of rows in the data set. Finally, `rename()` will rename a selected column.
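A few quick illustrations using our grade book (a sketch; the sampled rows are random, so your results will differ):
```
grades %>% sample_n(2)                  # two randomly chosen rows
grades %>% sample_frac(0.5)             # a random half of the rows
grades %>% rename( LastName = l.name )  # rename l.name to LastName
```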
### 7\.2\.2 Split, apply, combine
Aside from unifying the syntax behind the common operations, the major strength of the `dplyr` package is the ability to split a data frame into a bunch of sub\-data frames, apply a sequence of one or more of the operations we just described, and then combine the results back together. We’ll consider data from an experiment on spinning wool into yarn. This experiment considered two different types of wool (A or B) and three different levels of tension on the thread. The response variable is the number of breaks in the resulting yarn. For each of the 6 `wool:tension` combinations, there are 9 replicated observations.
```
data(warpbreaks)
str(warpbreaks)
```
```
## 'data.frame': 54 obs. of 3 variables:
## $ breaks : num 26 30 54 25 70 52 51 26 67 18 ...
## $ wool : Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
## $ tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ...
```
The first thing we must do is create a data frame with additional information about how to break the data into sub\-data frames. In this case, I want to break the data up into the 6 wool\-by\-tension combinations. Initially we will just figure out how many rows are in each wool\-by\-tension combination.
```
# group_by: what variable(s) shall we group on.
# n() is a function that returns how many rows are in the
# currently selected sub-dataframe
warpbreaks %>%
group_by( wool, tension) %>% # grouping
summarise(n = n() ) # how many in each group
```
```
## # A tibble: 6 x 3
## # Groups: wool [?]
## wool tension n
## <fct> <fct> <int>
## 1 A L 9
## 2 A M 9
## 3 A H 9
## 4 B L 9
## 5 B M 9
## 6 B H 9
```
The `group_by` function takes a data.frame and returns the same data.frame, but with some extra information so that any subsequent function acts on each unique combination defined in the `group_by`. If you wish to remove this behavior, call `group_by()` with no arguments (or use `ungroup()`) to reset the grouping to have no grouping variable.
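For example, after a grouped summary we can drop the grouping so that later verbs act on the whole data frame:
```
warpbreaks %>%
  group_by(wool, tension) %>%
  summarise(n = n()) %>%
  ungroup()       # subsequent operations no longer act group-by-group
```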
Using the same `summarise` function, we could calculate the group mean and standard deviation for each wool\-by\-tension group.
```
warpbreaks %>%
group_by(wool, tension) %>%
summarise( n = n(), # I added some formatting to show the
mean.breaks = mean(breaks), # reader I am calculating several
sd.breaks = sd(breaks)) # statistics.
```
```
## # A tibble: 6 x 5
## # Groups: wool [?]
## wool tension n mean.breaks sd.breaks
## <fct> <fct> <int> <dbl> <dbl>
## 1 A L 9 44.6 18.1
## 2 A M 9 24.0 8.66
## 3 A H 9 24.6 10.3
## 4 B L 9 28.2 9.86
## 5 B M 9 28.8 9.43
## 6 B H 9 18.8 4.89
```
Instead of summarizing each split, we might want to do some calculation whose output has the same number of rows as the input data frame. In this case I’ll tell `dplyr` that we are mutating the data frame instead of summarizing it. For example, suppose that I want to calculate the residual value \\[e\_{ijk}\=y\_{ijk}\-\\bar{y}\_{ij\\cdot}\\] where \\(\\bar{y}\_{ij\\cdot}\\) is the mean of each `wool:tension` combination.
```
warpbreaks %>%
group_by(wool, tension) %>% # group by wool:tension
mutate(resid = breaks - mean(breaks)) %>% # mean(breaks) of the group!
head( ) # show the first couple of rows
```
```
## # A tibble: 6 x 4
## # Groups: wool, tension [1]
## breaks wool tension resid
## <dbl> <fct> <fct> <dbl>
## 1 26. A L -18.6
## 2 30. A L -14.6
## 3 54. A L 9.44
## 4 25. A L -19.6
## 5 70. A L 25.4
## 6 52. A L 7.44
```
### 7\.2\.3 Chaining commands together
In the previous examples we have used the `%>%` operator to make the code more readable, but to really appreciate this, we should examine the alternative.
Suppose we have the results of a small 5K race. The data given to us is in the order that the runners signed up, but we want to calculate the results for each gender, calculate the placings, and then sort the data frame by gender and then place. We can think of this process as having three steps:
1. Splitting
2. Ranking
3. Re\-arranging.
```
# input the initial data
race.results <- data.frame(
name=c('Bob', 'Jeff', 'Rachel', 'Bonnie', 'Derek', 'April','Elise','David'),
time=c(21.23, 19.51, 19.82, 23.45, 20.23, 24.22, 28.83, 15.73),
gender=c('M','M','F','F','M','F','F','M'))
```
We could run all the commands together using the following code:
```
arrange(
mutate(
group_by(
race.results, # using race.results
gender), # group by gender
place = rank( time )), # mutate to calculate the place column
gender, place) # arrange the result by gender and place
```
```
## # A tibble: 8 x 4
## # Groups: gender [2]
## name time gender place
## <fct> <dbl> <fct> <dbl>
## 1 Rachel 19.8 F 1.
## 2 Bonnie 23.4 F 2.
## 3 April 24.2 F 3.
## 4 Elise 28.8 F 4.
## 5 David 15.7 M 1.
## 6 Jeff 19.5 M 2.
## 7 Derek 20.2 M 3.
## 8 Bob 21.2 M 4.
```
This is very difficult to read because you have to read the code *from the inside out*.
Another (and slightly more readable) way to complete our task is to save each intermediate step of our process and then use that in the next step:
```
temp.df0 <- race.results %>% group_by( gender)
temp.df1 <- temp.df0 %>% mutate( place = rank(time) )
temp.df2 <- temp.df1 %>% arrange( gender, place )
```
It would be nice if I didn’t have to save all these intermediate results, because keeping track of `temp.df1` and `temp.df2` gets pretty annoying if I keep changing the order in which things are calculated or add/subtract steps. This is exactly what `%>%` does for me.
```
race.results %>%
group_by( gender ) %>%
mutate( place = rank(time)) %>%
arrange( gender, place )
```
```
## # A tibble: 8 x 4
## # Groups: gender [2]
## name time gender place
## <fct> <dbl> <fct> <dbl>
## 1 Rachel 19.8 F 1.
## 2 Bonnie 23.4 F 2.
## 3 April 24.2 F 3.
## 4 Elise 28.8 F 4.
## 5 David 15.7 M 1.
## 6 Jeff 19.5 M 2.
## 7 Derek 20.2 M 3.
## 8 Bob 21.2 M 4.
```
7\.3 Exercises
--------------
1. The dataset `ChickWeight` tracks the weights of 48 baby chickens (chicks) fed four different diets.
1. Load the dataset using
```
data(ChickWeight)
```
2. Look at the help files for the description of the columns.
3. Remove all the observations except for observations from day 10 or day 20\.
4. Calculate the mean and standard deviation of the chick weights for each diet group on days 10 and 20\.
2. The OpenIntro textbook on statistics includes a data set on body dimensions.
1. Load the file using
```
Body <- read.csv('http://www.openintro.org/stat/data/bdims.csv')
```
2. The column sex is coded as a 1 if the individual is male and 0 if female. This is a non\-intuitive labeling system. Create a new column `sex.MF` that uses labels Male and Female. *Hint: recall either the `factor()` or `cut()` command!*
3. The columns `wgt` and `hgt` measure weight and height in kilograms and centimeters (respectively). Use these to calculate the Body Mass Index (BMI) for each individual where \\\[BMI\=\\frac{Weight\\,(kg)}{\\left\[Height\\,(m)\\right]^{2}}\\]
4. Double check that your calculated BMI column is correct by examining the summary statistics of the column. BMI values should be between 18 and 40 or so. Did you make an error in your calculation?
5. The function `cut` takes a vector of continuous numerical data and creates a factor based on your given cut\-points.
```
# Define a continuous vector to convert to a factor
x <- 1:10
# divide range of x into three groups of equal length
cut(x, breaks=3)
```
```
## [1] (0.991,4] (0.991,4] (0.991,4] (0.991,4] (4,7] (4,7] (4,7]
## [8] (7,10] (7,10] (7,10]
## Levels: (0.991,4] (4,7] (7,10]
```
```
# divide x into four groups, where I specify all 5 break points
cut(x, breaks = c(0, 2.5, 5.0, 7.5, 10))
```
```
## [1] (0,2.5] (0,2.5] (2.5,5] (2.5,5] (2.5,5] (5,7.5] (5,7.5]
## [8] (7.5,10] (7.5,10] (7.5,10]
## Levels: (0,2.5] (2.5,5] (5,7.5] (7.5,10]
```
```
# (0,2.5] (2.5,5] means 2.5 is included in first group
# right=FALSE changes this to make 2.5 included in the second
# divide x into 3 groups, but give them a nicer
# set of group names
cut(x, breaks=3, labels=c('Low','Medium','High'))
```
```
## [1] Low Low Low Low Medium Medium Medium High High High
## Levels: Low Medium High
```
Create a new column in the data frame that divides the age into decades (10\-19, 20\-29, 30\-39, etc). Notice the oldest person in the study is 67\.
```
Body <- Body %>%
mutate( Age.Grp = cut(age,
breaks=c(10,20,30,40,50,60,70),
right=FALSE))
```
6. Find the average BMI for each Sex\-by\-Age combination.
Chapter 8 Data Reshaping
========================
```
# library(tidyr) # for the gather/spread commands
# library(dplyr) # for the join stuff
library(tidyverse) # dplyr, tidyr, ggplot2, etc.
```
Most of the time, our data is in the form of a data frame and we are interested in exploring the relationships. However most procedures in R expect the data to show up in a ‘long’ format where each row is an observation and each column is a covariate. In practice, the data is often not stored that way; instead it comes to us with repeated observations included on a single row. This is often done as a memory\-saving technique or because there is some structure in the data that makes the ‘wide’ format attractive. As a result, we need a way to convert data from ‘wide’ to ‘long’ and vice\-versa.
Next we need a way to squish two data frames together. It is often advantageous to store data that would otherwise be repeated separately in a different table, so that a particular piece of information lives in only one location. This makes the data easier to modify and more likely to maintain consistency. However, this practice requires that, when necessary, we can add information from one table to another, and that might involve a lot of duplicated rows.
8\.1 `tidyr`
------------
There is a common issue with obtaining data with many columns that you wish were organized as rows. For example, I might have data in a grade book that has several homework scores and I’d like to produce a nice graph that has assignment number on the x\-axis and score on the y\-axis. Unfortunately this is incredibly hard to do when the data is arranged in the following way:
```
grade.book <- rbind(
data.frame(name='Alison', HW.1=8, HW.2=5, HW.3=8, HW.4=4),
data.frame(name='Brandon', HW.1=5, HW.2=3, HW.3=6, HW.4=9),
data.frame(name='Charles', HW.1=9, HW.2=7, HW.3=9, HW.4=10))
grade.book
```
```
## name HW.1 HW.2 HW.3 HW.4
## 1 Alison 8 5 8 4
## 2 Brandon 5 3 6 9
## 3 Charles 9 7 9 10
```
What we want to do is turn this data frame from a *wide* data frame into a *long* data frame. In MS Excel this is called pivoting. Essentially I’d like to create a data frame with three columns: `name`, `assignment`, and `score`. That is to say that each homework datum really has three pieces of information: who it came from, which homework it was, and what the score was. It doesn’t conceptually matter if I store it as 3 rows of 4 columns or 12 rows so long as there is a way to identify how a student scored on a particular homework. So we want to reshape the HW1 to HW4 columns into two columns (assignment and score).
The `tidyr` package was built by the same people that created dplyr and ggplot2, and there is a nice introduction at: \[[http://blog.rstudio.org/2014/07/22/introducing\-tidyr/](http://blog.rstudio.org/2014/07/22/introducing-tidyr/)]
### 8\.1\.1 Verbs
As with the dplyr package, there are two main verbs to remember:
1. `gather` \- Gather multiple columns that are related into two columns that contain the original column name and the value. For example, for columns HW1, HW2, HW3 we would gather them into two columns, HomeworkNumber and Score. In this case, we refer to HomeworkNumber as the key column and Score as the value column. So for any key:value pair you know everything you need.
2. `spread` \- This is the opposite of gather. This takes a key column (or columns) and a results column and forms a new column for each level of the key column(s).
```
# first we gather the score columns into columns we'll name Homework and Score
tidy.scores <- grade.book %>%
gather( key=Homework, # What should I call the key column
value=Score, # What should I call the values column
HW.1:HW.4 # which columns to apply this to
)
tidy.scores
```
```
## name Homework Score
## 1 Alison HW.1 8
## 2 Brandon HW.1 5
## 3 Charles HW.1 9
## 4 Alison HW.2 5
## 5 Brandon HW.2 3
## 6 Charles HW.2 7
## 7 Alison HW.3 8
## 8 Brandon HW.3 6
## 9 Charles HW.3 9
## 10 Alison HW.4 4
## 11 Brandon HW.4 9
## 12 Charles HW.4 10
```
To spread the key:value pairs out into a matrix, we use the `spread()` command.
```
# Turn the Assessment/Score pair of columns into one column per factor level of Assessment
tidy.scores %>% spread( key=Homework, value=Score )
```
```
## name HW.1 HW.2 HW.3 HW.4
## 1 Alison 8 5 8 4
## 2 Brandon 5 3 6 9
## 3 Charles 9 7 9 10
```
One way to keep straight which is the `key` column is that the key is the category, while `value` is the numerical value or response.
8\.2 Storing Data in Multiple Tables
------------------------------------
In many datasets it is common to store data across multiple tables, usually with the goal of minimizing memory used as well as providing minimal duplication of information so any change that must be made is only made in a single place.
To see the rationale for why we might do this, consider building a data set of blood donations by a variety of donors across several years. For each blood donation, we will perform some assay and measure certain qualities about the blood and the patient’s health at the donation.
```
## Donor Hemoglobin Systolic Diastolic
## 1 Derek 17.4 121 80
## 2 Jeff 16.9 145 101
```
But now we have to ask, what happens when we have a donor that has given blood multiple times? In this case we should just have multiple rows per person along with a date column to uniquely identify a particular donation.
```
donations
```
```
## Donor Date Hemoglobin Systolic Diastolic
## 1 Derek 2017-04-14 17.4 120 79
## 2 Derek 2017-06-20 16.5 121 80
## 3 Jeff 2017-08-14 16.9 145 101
```
I would like to include additional information about the donor that doesn’t change over time. For example we might want to have information about the donor’s birthdate, sex, and blood type. However, I don’t want that information in *every single donation line*. Otherwise if I mistype a birthday and have to correct it, I would have to correct it *everywhere*. Information about the donor should live in a `donors` table, while information about a particular donation should live in the `donations` table.
Furthermore, there are many Jeffs and Dereks in the world, and to maintain a unique identifier (without using Social Security numbers) I will just create a `Donor_ID` code that will uniquely identify a person. Similarly I will create a `Donation_ID` that will uniquely identify a donation.
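The code that builds these tables isn’t shown here; a hypothetical reconstruction of the `donors` table, reverse\-engineered from the printout below, might look like:
```
# hypothetical reconstruction -- the actual creation code is not shown in the text
donors <- data.frame(
  Donor_ID = c('Donor_1', 'Donor_2'),
  F_Name   = c('Derek', 'Jeff'),
  L_Name   = c('Lee',   'Smith'),
  B_Type   = c('O+',    'A'),
  Birth    = c('1976-09-17', '1974-06-23'),
  Street   = c('7392 Willard', '873 Vine'),
  City     = c('Flagstaff',    'Bozeman'),
  State    = c('AZ', 'MT'))
```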
```
donors
```
```
## Donor_ID F_Name L_Name B_Type Birth Street City State
## 1 Donor_1 Derek Lee O+ 1976-09-17 7392 Willard Flagstaff AZ
## 2 Donor_2 Jeff Smith A 1974-06-23 873 Vine Bozeman MT
```
```
donations
```
```
## Donation_ID Donor_ID Date Hemoglobin Systolic Diastolic
## 1 Donation_1 Donor_1 2017-04-14 17.4 120 79
## 2 Donation_2 Donor_1 2017-06-20 16.5 121 80
## 3 Donation_3 Donor_2 2017-08-14 16.9 145 101
```
If we have a new donor walk in and give blood, then we’ll have to create a new entry in the `donors` table as well as a new entry in the `donations` table. If an experienced donor gives again, we just have to create a new entry in the donations table.
```
donors
```
```
## Donor_ID F_Name L_Name B_Type Birth Street City State
## 1 Donor_1 Derek Lee O+ 1976-09-17 7392 Willard Flagstaff AZ
## 2 Donor_2 Jeff Smith A 1974-06-23 873 Vine Bozeman MT
## 3 Donor_3 Aubrey Lee O+ 1980-12-15 7392 Willard Flagstaff AZ
```
```
donations
```
```
## Donation_ID Donor_ID Date Hemoglobin Systolic Diastolic
## 1 Donation_1 Donor_1 2017-04-14 17.4 120 79
## 2 Donation_2 Donor_1 2017-06-20 16.5 121 80
## 3 Donation_3 Donor_2 2017-08-14 16.9 145 101
## 4 Donation_4 Donor_1 2017-08-26 17.6 120 79
## 5 Donation_5 Donor_3 2017-08-26 16.1 137 90
```
This data storage set\-up might be flexible enough for us. However, what happens if somebody moves? If we don’t want to keep the historical information, then we could just change the person’s `Street`, `City`, and `State` values. If we do want to keep it, then we could create a `donor_addresses` table that contains a `Start_Date` and `End_Date` that denote the period of time that the address was valid.
```
donor_addresses
```
```
## Donor_ID Street City State Start_Date End_Date
## 1 Donor_1 346 Treeline Pullman WA 2015-01-26 2016-06-27
## 2 Donor_1 645 Main Flagstsff AZ 2016-06-28 2017-07-02
## 3 Donor_1 7392 Willard Flagstaff AZ 2017-07-03 <NA>
## 4 Donor_2 873 Vine Bozeman MT 2015-03-17 <NA>
## 5 Donor_3 7392 Willard Flagstaff AZ 2017-06-01 <NA>
```
Given this data structure, we can now easily create new donations as well as store donor information. In the event that we need to change something about a donor, there is only *one* place to make that change.
However, having data spread across multiple tables is challenging because I often want that information squished back together. For example, the blood donation service might want to find all ‘O’ or ‘O\+’ donors in Flagstaff and their current mailing addresses and send them some notification about blood supplies being low. So we need some way to join the `donors` and `donor_addresses` tables together in a sensible manner.
8\.3 Table Joins
----------------
Often we need to squish together two data frames, but they do not have the same number of rows. Consider the case where we have a data frame of observations of fish and a separate data frame that contains information about each lake (perhaps surface area, max depth, pH, etc). I want to store them as two separate tables so that when I have to record a lake\-level observation, I only input it in *one* place. This decreases the chance that I make a copy/paste error.
To illustrate the different types of table joins, we’ll consider two different tables.
```
# tibbles are just data.frames that print a bit nicer and don't automatically
# convert character columns into factors. They behave a bit more consistently
# in a wide variety of situations compared to data.frames.
Fish.Data <- tibble(
Lake_ID = c('A','A','B','B','C','C'),
Fish.Weight=rnorm(6, mean=260, sd=25) ) # make up some data
Lake.Data <- tibble(
Lake_ID = c( 'B','C','D'),
Lake_Name = c('Lake Elaine', 'Mormon Lake', 'Lake Mary'),
pH=c(6.5, 6.3, 6.1),
area = c(40, 210, 240),
avg_depth = c(8, 10, 38))
```
```
Fish.Data
```
```
## # A tibble: 6 x 2
## Lake_ID Fish.Weight
## <chr> <dbl>
## 1 A 263.
## 2 A 276.
## 3 B 260.
## 4 B 273.
## 5 C 252.
## 6 C 216.
```
```
Lake.Data
```
```
## # A tibble: 3 x 5
## Lake_ID Lake_Name pH area avg_depth
## <chr> <chr> <dbl> <dbl> <dbl>
## 1 B Lake Elaine 6.50 40. 8.
## 2 C Mormon Lake 6.30 210. 10.
## 3 D Lake Mary 6.10 240. 38.
```
Notice that each of these tables has a column labeled `Lake_ID`. When we join these two tables, the row in `Lake.Data` that describes lake `B` should be duplicated for each row in `Fish.Data` that corresponds with fish caught from lake `B`.
```
full_join(Fish.Data, Lake.Data)
```
```
## Joining, by = "Lake_ID"
```
```
## # A tibble: 7 x 6
## Lake_ID Fish.Weight Lake_Name pH area avg_depth
## <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 A 263. <NA> NA NA NA
## 2 A 276. <NA> NA NA NA
## 3 B 260. Lake Elaine 6.50 40. 8.
## 4 B 273. Lake Elaine 6.50 40. 8.
## 5 C 252. Mormon Lake 6.30 210. 10.
## 6 C 216. Mormon Lake 6.30 210. 10.
## 7 D NA Lake Mary 6.10 240. 38.
```
Notice that because we didn’t have any fish caught in lake `D` and we don’t have any lake information about lake `A`, when we join these two tables, we end up introducing missing observations into the resulting data frame.
The other types of joins govern the behavior of these missing data.
**`left_join(A, B)`** For each row in A, match with a row in B, but don’t create any more rows than what was already in A.
**`inner_join(A,B)`** Only match row values where both data frames have a value.
```
left_join(Fish.Data, Lake.Data)
```
```
## Joining, by = "Lake_ID"
```
```
## # A tibble: 6 x 6
## Lake_ID Fish.Weight Lake_Name pH area avg_depth
## <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 A 263. <NA> NA NA NA
## 2 A 276. <NA> NA NA NA
## 3 B 260. Lake Elaine 6.50 40. 8.
## 4 B 273. Lake Elaine 6.50 40. 8.
## 5 C 252. Mormon Lake 6.30 210. 10.
## 6 C 216. Mormon Lake 6.30 210. 10.
```
```
inner_join(Fish.Data, Lake.Data)
```
```
## Joining, by = "Lake_ID"
```
```
## # A tibble: 4 x 6
## Lake_ID Fish.Weight Lake_Name pH area avg_depth
## <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 B 260. Lake Elaine 6.50 40. 8.
## 2 B 273. Lake Elaine 6.50 40. 8.
## 3 C 252. Mormon Lake 6.30 210. 10.
## 4 C 216. Mormon Lake 6.30 210. 10.
```
The above examples assumed that the column used to join the two tables was named the same in both tables. This is good practice to try to do, but sometimes you have to work with data where that isn’t the case. In that situation you can use the `by=c("ColName.A"="ColName.B")` syntax where `ColName.A` represents the name of the column in the first data frame and `ColName.B` is the equivalent column in the second data frame.
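As a small illustration, suppose the fish table had named its key column `Lake` instead of `Lake_ID` (a hypothetical rename):
```
Fish.Data2 <- Fish.Data %>% rename( Lake = Lake_ID )   # mismatched key name
left_join( Fish.Data2, Lake.Data, by=c('Lake'='Lake_ID') )
```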
Finally, the combination of `gather` and `join` allows me to do some very complex calculations across many columns of a data set. For example, I might gather up a set of columns, calculate some summary statistics, and then join the result back to the original data set.
```
grade.book %>%
group_by(name) %>%
gather( key=Homework, value=Score, HW.1:HW.4 ) %>%
summarise( HW.avg = mean(Score) ) %>%
left_join( grade.book, . )
```
```
## Joining, by = "name"
```
```
## name HW.1 HW.2 HW.3 HW.4 HW.avg
## 1 Alison 8 5 8 4 6.25
## 2 Brandon 5 3 6 9 5.75
## 3 Charles 9 7 9 10 8.75
```
8\.4 Exercises
--------------
1. Suppose we are given information about the maximum daily temperature from a weather station in Flagstaff, AZ. The file is available at the GitHub site that this book is hosted on.
```
FlagTemp <- read.csv(
'https://github.com/dereksonderegger/570L/raw/master/data-raw/FlagMaxTemp.csv',
header=TRUE, sep=',')
```
This file is in a wide format, where each row represents a month and the columns X1, X2, …, X31 represent the day of the month the observation was made.
1. Convert the data set to the long format where the data has only four columns: `Year`, `Month`, `Day`, `Tmax`.
2. Calculate the average monthly maximum temperature for each Month in the dataset (So there will be 365 mean maximum temperatures). *You’ll probably have some issues taking the mean because there are a number of values that are missing and by default R refuses to take means and sums when there is missing data. The argument `na.rm=TRUE` to `mean()` allows you to force R to remove the missing observations before calculating the mean.*
3. Convert the average monthly maximums back to a wide data format where each line represents a year and there are 12 columns of temperature data (one for each month) along with a column for the year. *There will be a couple of months that still have missing data because the weather station was out of commission for those months and there was NO data for the entire month.*
2. A common task is to take a set of data that has multiple categorical variables and create a table of the number of cases for each combination. An introductory statistics textbook contains a dataset summarizing student surveys from several sections of an intro class. The two variables of interest for us are `Gender` and `Year`, which are the student’s gender and year in college.
1. Download the dataset and correctly order the `Year` variable using the following:
```
Survey <- read.csv('http://www.lock5stat.com/datasets/StudentSurvey.csv', na.strings=c('',' ')) %>%
mutate(Year = factor(Year, levels=c('FirstYear','Sophomore','Junior','Senior')))
```
2. Using some combination of `dplyr` functions, produce a data set with eight rows that contains the number of responses for each gender:year combination. *Notice there are two females that neglected to give their Year and you should remove them first. The function `is.na(Year)` will return logical values indicating if the Year value was missing and you can flip those values using the negation operator `!`. So you might consider using `!is.na(Year)` as the argument to a `filter()` command. Alternatively you could sort on `Year` and remove the first two rows using `slice(-2:-1)`. Next you’ll want to summarize each Year/Gender group using the `n()` function which gives the number of rows in a data set.*
3. Using `tidyr` commands, produce a table of the number of responses in the following form:
| Gender | First Year | Sophomore | Junior | Senior |
| --- | --- | --- | --- | --- |
| **Female** | | | | |
| **Male** | | | | |
3. The package `nycflights13` contains information about all the flights that arrived in or left from New York City in 2013\. This package contains five data tables, but there are three data tables we will work with. The data table `flights` gives information about a particular flight, `airports` gives information about a particular airport, and `airlines` gives information about each airline. Create a table of all the flights on February 14th by Virgin America that has columns for the carrier, destination, departure time, and flight duration. Join this table with the airports information for the destination. Notice that because the column for the destination airport code doesn’t match up between `flights` and `airports`, you’ll have to use the `by=c("TableA.Col"="TableB.Col")` argument where you insert the correct names for `TableA.Col` and `TableB.Col`.
8\.1 `tidyr`
------------
There is a common issue with obtaining data with many columns that you wish were organized as rows. For example, I might have data in a grade book that has several homework scores and I’d like to produce a nice graph that has assignment number on the x\-axis and score on the y\-axis. Unfortunately this is incredibly hard to do when the data is arranged in the following way:
```
grade.book <- rbind(
data.frame(name='Alison', HW.1=8, HW.2=5, HW.3=8, HW.4=4),
data.frame(name='Brandon', HW.1=5, HW.2=3, HW.3=6, HW.4=9),
data.frame(name='Charles', HW.1=9, HW.2=7, HW.3=9, HW.4=10))
grade.book
```
```
## name HW.1 HW.2 HW.3 HW.4
## 1 Alison 8 5 8 4
## 2 Brandon 5 3 6 9
## 3 Charles 9 7 9 10
```
What we want to do is turn this data frame from a *wide* data frame into a *long* data frame. In MS Excel this is called pivoting. Essentially I’d like to create a data frame with three columns: `name`, `assignment`, and `score`. That is to say that each homework datum really has three pieces of information: who it came from, which homework it was, and what the score was. It doesn’t conceptually matter if I store it as 3 rows of 4 columns or 12 rows so long as there is a way to identify how a student scored on a particular homework. So we want to reshape the HW1 to HW4 columns into two columns (assignment and score).
This package was built by the same people that created dplyr and ggplot2 and there is a nice introduction at: \[[http://blog.rstudio.org/2014/07/22/introducing\-tidyr/](http://blog.rstudio.org/2014/07/22/introducing-tidyr/)]
### 8\.1\.1 Verbs
As with the dplyr package, there are two main verbs to remember:
1. `gather` \- Gather multiple columns that are related into two columns that contain the original column name and the value. For example for columns HW1, HW2, HW3 we would gather them into two column HomeworkNumber and Score. In this case, we refer to HomeworkNumber as the key column and Score as the value column. So for any key:value pair you know everything you need.
2. `spread` \- This is the opposite of gather. This takes a key column (or columns) and a results column and forms a new column for each level of the key column(s).
```
# first we gather the score columns into columns we'll name Assesment and Score
tidy.scores <- grade.book %>%
gather( key=Homework, # What should I call the key column
value=Score, # What should I call the values column
HW.1:HW.4 # which columns to apply this to
)
tidy.scores
```
```
## name Homework Score
## 1 Alison HW.1 8
## 2 Brandon HW.1 5
## 3 Charles HW.1 9
## 4 Alison HW.2 5
## 5 Brandon HW.2 3
## 6 Charles HW.2 7
## 7 Alison HW.3 8
## 8 Brandon HW.3 6
## 9 Charles HW.3 9
## 10 Alison HW.4 4
## 11 Brandon HW.4 9
## 12 Charles HW.4 10
```
To spread the key:value pairs out into a matrix, we use the `spread()` command.
```
# Turn the Assessment/Score pair of columns into one column per factor level of Assessment
tidy.scores %>% spread( key=Homework, value=Score )
```
```
## name HW.1 HW.2 HW.3 HW.4
## 1 Alison 8 5 8 4
## 2 Brandon 5 3 6 9
## 3 Charles 9 7 9 10
```
One way to keep straight which is the `key` column is that the key is the category, while `value` is the numerical value or response.
### 8\.1\.1 Verbs
As with the dplyr package, there are two main verbs to remember:
1. `gather` \- Gather multiple columns that are related into two columns that contain the original column name and the value. For example for columns HW1, HW2, HW3 we would gather them into two column HomeworkNumber and Score. In this case, we refer to HomeworkNumber as the key column and Score as the value column. So for any key:value pair you know everything you need.
2. `spread` \- This is the opposite of gather. This takes a key column (or columns) and a results column and forms a new column for each level of the key column(s).
```
# first we gather the score columns into columns we'll name Assesment and Score
tidy.scores <- grade.book %>%
gather( key=Homework, # What should I call the key column
value=Score, # What should I call the values column
HW.1:HW.4 # which columns to apply this to
)
tidy.scores
```
```
## name Homework Score
## 1 Alison HW.1 8
## 2 Brandon HW.1 5
## 3 Charles HW.1 9
## 4 Alison HW.2 5
## 5 Brandon HW.2 3
## 6 Charles HW.2 7
## 7 Alison HW.3 8
## 8 Brandon HW.3 6
## 9 Charles HW.3 9
## 10 Alison HW.4 4
## 11 Brandon HW.4 9
## 12 Charles HW.4 10
```
To spread the key:value pairs out into a matrix, we use the `spread()` command.
```
# Turn the Assessment/Score pair of columns into one column per factor level of Assessment
tidy.scores %>% spread( key=Homework, value=Score )
```
```
## name HW.1 HW.2 HW.3 HW.4
## 1 Alison 8 5 8 4
## 2 Brandon 5 3 6 9
## 3 Charles 9 7 9 10
```
One way to keep straight which is the `key` column is that the key is the category, while `value` is the numerical value or response.
8\.2 Storing Data in Multiple Tables
------------------------------------
In many datasets it is common to store data across multiple tables, usually with the goal of minimizing memory used as well as providing minimal duplication of information so any change that must be made is only made in a single place.
To see the rational why we might do this, consider building a data set of blood donations by a variety of donors across several years. For each blood donation, we will perform some assay and measure certain qualities about the blood and the patients health at the donation.
```
## Donor Hemoglobin Systolic Diastolic
## 1 Derek 17.4 121 80
## 2 Jeff 16.9 145 101
```
But now we have to ask, what happens when we have a donor that has given blood multiple times? In this case we should just have multiple rows per person along with a date column to uniquely identify a particular donation.
```
donations
```
```
## Donor Date Hemoglobin Systolic Diastolic
## 1 Derek 2017-04-14 17.4 120 79
## 2 Derek 2017-06-20 16.5 121 80
## 3 Jeff 2017-08-14 16.9 145 101
```
I would like to include additional information about the donor where that infomation doesn’t change overtime. For example we might want to have information about the donar’s birthdate, sex, blood type. However, I don’t want that information in *every single donation line*. Otherwise if I mistype a birthday and have to correct it, I would have to correct it *everywhere*. For information about the donor, should live in a `donors` table, while information about a particular donation should live in the `donations` table.
Furthermore, there are many Jeffs and Dereks in the world and to maintain a unique identifier (without using Social Security numbers) I will just create a `Donor_ID` code that will uniquely identify a person. Similarly I will create a `Donation_ID` that will uniquely identify a dontation.
```
donors
```
```
## Donor_ID F_Name L_Name B_Type Birth Street City State
## 1 Donor_1 Derek Lee O+ 1976-09-17 7392 Willard Flagstaff AZ
## 2 Donor_2 Jeff Smith A 1974-06-23 873 Vine Bozeman MT
```
```
donations
```
```
## Donation_ID Donor_ID Date Hemoglobin Systolic Diastolic
## 1 Donation_1 Donor_1 2017-04-14 17.4 120 79
## 2 Donation_2 Donor_1 2017-06-20 16.5 121 80
## 3 Donation_3 Donor_2 2017-08-14 16.9 145 101
```
If we have a new donor walk in and give blood, then we’ll have to create a new entry in the `donors` table as well as a new entry in the `donations` table. If an experienced donor gives again, we just have to create a new entry in the donations table.
```
donors
```
```
## Donor_ID F_Name L_Name B_Type Birth Street City State
## 1 Donor_1 Derek Lee O+ 1976-09-17 7392 Willard Flagstaff AZ
## 2 Donor_2 Jeff Smith A 1974-06-23 873 Vine Bozeman MT
## 3 Donor_3 Aubrey Lee O+ 1980-12-15 7392 Willard Flagstaff AZ
```
```
donations
```
```
## Donation_ID Donor_ID Date Hemoglobin Systolic Diastolic
## 1 Donation_1 Donor_1 2017-04-14 17.4 120 79
## 2 Donation_2 Donor_1 2017-06-20 16.5 121 80
## 3 Donation_3 Donor_2 2017-08-14 16.9 145 101
## 4 Donation_4 Donor_1 2017-08-26 17.6 120 79
## 5 Donation_5 Donor_3 2017-08-26 16.1 137 90
```
This data storage set\-up might be flexible enough for us. However what happens if somebody moves? If we don’t want to keep the historical information, then we could just change the person’s `Street_Address`, `City`, and `State` values. If we do want to keep that, then we could create `donor_addresses` table that contains a `Start_Date` and `End_Date` that denotes the period of time that the address was valid.
```
donor_addresses
```
```
## Donor_ID Street City State Start_Date End_Date
## 1 Donor_1 346 Treeline Pullman WA 2015-01-26 2016-06-27
## 2 Donor_1 645 Main Flagstsff AZ 2016-06-28 2017-07-02
## 3 Donor_1 7392 Willard Flagstaff AZ 2017-07-03 <NA>
## 4 Donor_2 873 Vine Bozeman MT 2015-03-17 <NA>
## 5 Donor_3 7392 Willard Flagstaff AZ 2017-06-01 <NA>
```
Given this data structure, we can now easily create new donations as well as store donor information. In the event that we need to change something about a donor, there is only *one* place to make that change.
However, having data spread across multiple tables is challenging because I often want that information squished back together. For example, the blood donations services might want to find all ‘O’ or ‘O\+’ donors in Flagstaff and their current mailing address and send them some notification about blood supplies being low. So we need someway to join the `donors` and `donor_addresses` tables together in a sensible manner.
8\.3 Table Joins
----------------
Often we need to squish together two data frames but they do not have the same number of rows. Consider the case where we have a data frame of observations of fish and a separate data frame that contains information about lake (perhaps surface area, max depth, pH, etc). I want to store them as two separate tables so that when I have to record a lake level observation, I only input it *one* place. This decreases the chance that I make a copy/paste error.
To illustrate the different types of table joins, we’ll consider two different tables.
```
# tibbles are just data.frames that print a bit nicer and don't automatically
# convert character columns into factors. They behave a bit more consistently
# in a wide variety of situations compared to data.frames.
Fish.Data <- tibble(
Lake_ID = c('A','A','B','B','C','C'),
Fish.Weight=rnorm(6, mean=260, sd=25) ) # make up some data
Lake.Data <- tibble(
Lake_ID = c( 'B','C','D'),
Lake_Name = c('Lake Elaine', 'Mormon Lake', 'Lake Mary'),
pH=c(6.5, 6.3, 6.1),
area = c(40, 210, 240),
avg_depth = c(8, 10, 38))
```
```
Fish.Data
```
```
## # A tibble: 6 x 2
## Lake_ID Fish.Weight
## <chr> <dbl>
## 1 A 263.
## 2 A 276.
## 3 B 260.
## 4 B 273.
## 5 C 252.
## 6 C 216.
```
```
Lake.Data
```
```
## # A tibble: 3 x 5
## Lake_ID Lake_Name pH area avg_depth
## <chr> <chr> <dbl> <dbl> <dbl>
## 1 B Lake Elaine 6.50 40. 8.
## 2 C Mormon Lake 6.30 210. 10.
## 3 D Lake Mary 6.10 240. 38.
```
Notice that each of these tables has a column labled `Lake_ID`. When we join these two tables, the row that describes lake `A` should be duplicated for each row in the `Fish.Data` that corresponds with fish caught from lake `A`.
```
full_join(Fish.Data, Lake.Data)
```
```
## Joining, by = "Lake_ID"
```
```
## # A tibble: 7 x 6
## Lake_ID Fish.Weight Lake_Name pH area avg_depth
## <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 A 263. <NA> NA NA NA
## 2 A 276. <NA> NA NA NA
## 3 B 260. Lake Elaine 6.50 40. 8.
## 4 B 273. Lake Elaine 6.50 40. 8.
## 5 C 252. Mormon Lake 6.30 210. 10.
## 6 C 216. Mormon Lake 6.30 210. 10.
## 7 D NA Lake Mary 6.10 240. 38.
```
Notice that because we didn’t have any fish caught in lake `D` and we don’t have any lake information about lake `A`, when we join these two tables we end up introducing missing values into the resulting data frame.
The other types of joins govern the behavior of these missing data.
**`left_join(A, B)`** For each row in A, match with a row in B, but don’t create any more rows than what was already in A.
**`inner_join(A,B)`** Only keep rows where the joining value is present in both data frames.
```
left_join(Fish.Data, Lake.Data)
```
```
## Joining, by = "Lake_ID"
```
```
## # A tibble: 6 x 6
## Lake_ID Fish.Weight Lake_Name pH area avg_depth
## <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 A 263. <NA> NA NA NA
## 2 A 276. <NA> NA NA NA
## 3 B 260. Lake Elaine 6.50 40. 8.
## 4 B 273. Lake Elaine 6.50 40. 8.
## 5 C 252. Mormon Lake 6.30 210. 10.
## 6 C 216. Mormon Lake 6.30 210. 10.
```
```
inner_join(Fish.Data, Lake.Data)
```
```
## Joining, by = "Lake_ID"
```
```
## # A tibble: 4 x 6
## Lake_ID Fish.Weight Lake_Name pH area avg_depth
## <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 B 260. Lake Elaine 6.50 40. 8.
## 2 B 273. Lake Elaine 6.50 40. 8.
## 3 C 252. Mormon Lake 6.30 210. 10.
## 4 C 216. Mormon Lake 6.30 210. 10.
```
The above examples assumed that the column used to join the two tables was named the same in both tables. It is good practice to name key columns consistently, but sometimes you have to work with data where that isn’t the case. In that situation you can use the `by=c("ColName.A"="ColName.B")` syntax, where `ColName.A` represents the name of the column in the first data frame and `ColName.B` is the equivalent column in the second data frame.
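For a sketch of this syntax (the renamed table `Lake.Data2` below is hypothetical, created only to illustrate):
```
# Hypothetical: suppose the lake table had called its key column `Lake` instead.
Lake.Data2 <- Lake.Data %>% rename(Lake = Lake_ID)
# Tell the join which column in the first table matches which in the second.
left_join(Fish.Data, Lake.Data2, by = c("Lake_ID" = "Lake"))
```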
Finally, the combination of `gather` and `join` allows me to do some very complex calculations across many columns of a data set. For example, I might gather up a set of columns, calculate some summary statistics, and then join the result back to the original data set.
```
grade.book %>%
group_by(name) %>%
gather( key=Homework, value=Score, HW.1:HW.4 ) %>%
summarise( HW.avg = mean(Score) ) %>%
left_join( grade.book, . )
```
```
## Joining, by = "name"
```
```
## name HW.1 HW.2 HW.3 HW.4 HW.avg
## 1 Alison 8 5 8 4 6.25
## 2 Brandon 5 3 6 9 5.75
## 3 Charles 9 7 9 10 8.75
```
8\.4 Exercises
--------------
1. Suppose we are given information about the maximum daily temperature from a weather station in Flagstaff, AZ. The file is available at the GitHub site that this book is hosted on.
```
FlagTemp <- read.csv(
'https://github.com/dereksonderegger/570L/raw/master/data-raw/FlagMaxTemp.csv',
header=TRUE, sep=',')
```
This file is in a wide format, where each row represents a month and the columns `X1`, `X2`, …, `X31` represent the day of the month on which the observation was made.
1. Convert the data set to the long format where the data has only four columns: `Year`, `Month`, `Day`, `Tmax`.
2. Calculate the average monthly maximum temperature for each month in the dataset (so there will be 365 mean maximum temperatures). *You’ll probably have some issues taking the mean because there are a number of values that are missing, and by default R refuses to take means and sums when there is missing data. The argument `na.rm=TRUE` to `mean()` allows you to force R to remove the missing observations before calculating the mean.*
3. Convert the average monthly maximums back to a wide data format where each line represents a year and there are 12 columns of temperature data (one for each month) along with a column for the year. *There will be a couple of months that still have missing data because the weather station was out of commission for those months and there was NO data for the entire month.*
2. A common task is to take a set of data that has multiple categorical variables and create a table of the number of cases for each combination. An introductory statistics textbook contains a dataset summarizing student surveys from several sections of an intro class. The two variables of interest for us are `Gender` and `Year`, which are the student’s gender and year in college.
1. Download the dataset and correctly order the `Year` variable using the following:
```
Survey <- read.csv('http://www.lock5stat.com/datasets/StudentSurvey.csv', na.strings=c('',' ')) %>%
mutate(Year = factor(Year, levels=c('FirstYear','Sophomore','Junior','Senior')))
```
2. Using some combination of `dplyr` functions, produce a data set with eight rows that contains the number of responses for each gender:year combination. *Notice there are two females that neglected to give their Year and you should remove them first. The function `is.na(Year)` will return logical values indicating if the Year value was missing and you can flip those values using the negation operator `!`. So you might consider using `!is.na(Year)` as the argument to a `filter()` command. Alternatively you could sort on `Year` and remove the first two rows using `slice(-2:-1)`. Next you’ll want to summarize each Year/Gender group using the `n()` function which gives the number of rows in a data set.*
3. Using `tidyr` commands, produce a table of the number of responses in the following form:
| Gender | First Year | Sophomore | Junior | Senior |
| --- | --- | --- | --- | --- |
| **Female** | | | | |
| **Male** | | | | |
3. The package `nycflights13` contains information about all the flights that arrived in or left from New York City in 2013\. This package contains five data tables, but there are three data tables we will work with. The data table `flights` gives information about a particular flight, `airports` gives information about a particular airport, and `airlines` gives information about each airline. Create a table of all the flights on February 14th by Virgin America that has columns for the carrier, destination, departure time, and flight duration. Join this table with the airports information for the destination. Notice that because the column for the destination airport code doesn’t match up between `flights` and `airports`, you’ll have to use the `by=c("TableA.Col"="TableB.Col")` argument where you insert the correct names for `TableA.Col` and `TableB.Col`.
Chapter 9 Graphing using `ggplot2`
==================================
```
library(ggplot2) # my favorite graphing system
library(dplyr) # data frame manipulations
```
There are three major “systems” for making graphs in R. The basic plotting commands in R are quite effective, but they cannot be combined in easy ways. Lattice graphics (which the `mosaic` package uses) makes it possible to create some quite complicated graphs, but it is very difficult to make non\-standard graphs. The last system, `ggplot2`, tries not to anticipate what the user wants to do, but rather provides the mechanisms for pulling together different graphical concepts, and the user gets to decide which elements to combine.
To make the most of `ggplot2` it is important to wrap your mind around “The Grammar of Graphics”. Briefly, the act of building a graph can be broken down into three steps.
1. Define what data we are using.
2. What is the major relationship we wish to examine?
3. In what way should we present that relationship? These relationships can be presented in multiple ways, and the process of creating a good graph relies on building layers upon layers of information. For example, we might start by plotting the raw data and then overlay a regression line on top.
Next, it should be noted that `ggplot2` is designed to act on data frames. It is actually hard to just draw three data points, and for simple graphs it might be easier to use the base graphing system in R. However, for any real data analysis project the data will already be in a data frame, so this is not an annoyance.
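For example (a minimal sketch with a made\-up data frame), to plot just three points you would first put them into a data frame:
```
# Hypothetical tiny dataset: three (x, y) pairs stored in a data frame.
three.points <- data.frame( x=c(1, 2, 3), y=c(2, 4, 3) )
ggplot(three.points, aes(x=x, y=y)) +
  geom_point()
```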
These notes are sufficient for creating simple graphs using `ggplot2`, but are not intended to be exhaustive. There are many places online to get help with `ggplot2`. One very nice resource is the website, [http://www.cookbook\-r.com/Graphs/](http://www.cookbook-r.com/Graphs/), which gives much of the information available in the book R Graphics Cookbook which I highly recommend. Second is just googling your problems and see what you can find on websites such as StackExchange.
One way that `ggplot2` makes it easy to form very complicated graphs is that it provides a large number of basic building blocks that, when stacked upon each other, can produce extremely complicated graphs. A full list is available at [http://docs.ggplot2\.org/current/](http://docs.ggplot2.org/current/) but the following list gives some idea of different building blocks. These different geometries are different ways to display the relationship between variables and can be combined in many interesting ways.
| Geom | Description | Required Aesthetics |
| --- | --- | --- |
| `geom_histogram` | A histogram | `x` |
| `geom_bar` | A barplot | `x` |
| `geom_density` | A density plot of data. (smoothed histogram) | `x` |
| `geom_boxplot` | Boxplots | `x, y` |
| `geom_line` | Draw a line (after sorting x\-values) | `x, y` |
| `geom_path` | Draw a line (without sorting x\-values) | `x, y` |
| `geom_point` | Draw points (for a scatterplot) | `x, y` |
| `geom_smooth` | Add a ribbon that summarizes a scatterplot | `x, y` |
| `geom_ribbon` | Enclose a region, and color the interior | `ymin, ymax` |
| `geom_errorbar` | Error bars | `ymin, ymax` |
| `geom_text` | Add text to a graph | `x, y, label` |
| `geom_label` | Add text to a graph | `x, y, label` |
| `geom_tile` | Create Heat map | `x, y, fill` |
A graph can be built up layer by layer, where:
* Each layer corresponds to a `geom`, each of which requires a dataset and a mapping between an aesthetic and a column of the data set.
+ If you don’t specify either, then the layer inherits everything defined in the `ggplot()` command.
+ You can have different datasets for each layer! (See the sketch just after this list.)
* Layers can be added with a `+`, or you can define two plots and add them together (second one over\-writes anything that conflicts).
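As a minimal sketch of per\-layer data (the subset `setosa.only` is made up here purely for illustration), the second `geom_point()` below supplies its own data frame while the first inherits the one given to `ggplot()`:
```
# The first layer inherits data and aesthetics from ggplot();
# the second layer brings its own (hypothetical) subset and a fixed color.
setosa.only <- subset(iris, Species == 'setosa')
ggplot(iris, aes(x=Sepal.Length, y=Petal.Length)) +
  geom_point() +
  geom_point(data=setosa.only, color='red')
```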
9\.1 Basic Graphs
-----------------
### 9\.1\.1 Bar Charts
Bar charts and histograms are how we think about displaying information about a single covariate. That is to say, we are not trying to make a graph of the relationship between \\(x\\) and \\(y\\), but rather to understand what values of \\(x\\) are present and how frequently they show up.
For displaying a categorical variable on the x\-axis, a bar chart is a good option. Here we consider a data set that gives the fuel efficiency of different classes of vehicles in two different years. This is a subset of data that the EPA makes available on <http://fueleconomy.gov>. It contains only models which had a new release every year between 1999 and 2008, and therefore represents the most popular cars sold in the US. It includes information for each model for years 1999 and 2008\. The dataset is included in the `ggplot2` package as `mpg`.
```
data(mpg, package='ggplot2') # load the dataset
str(mpg)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 234 obs. of 11 variables:
## $ manufacturer: chr "audi" "audi" "audi" "audi" ...
## $ model : chr "a4" "a4" "a4" "a4" ...
## $ displ : num 1.8 1.8 2 2 2.8 2.8 3.1 1.8 1.8 2 ...
## $ year : int 1999 1999 2008 2008 1999 1999 2008 1999 1999 2008 ...
## $ cyl : int 4 4 4 4 6 6 6 4 4 4 ...
## $ trans : chr "auto(l5)" "manual(m5)" "manual(m6)" "auto(av)" ...
## $ drv : chr "f" "f" "f" "f" ...
## $ cty : int 18 21 20 21 16 18 18 18 16 20 ...
## $ hwy : int 29 29 31 30 26 26 27 26 25 28 ...
## $ fl : chr "p" "p" "p" "p" ...
## $ class : chr "compact" "compact" "compact" "compact" ...
```
First we could summarize the data by how many models there are in the different classes.
```
ggplot(data=mpg, aes(x=class)) +
geom_bar()
```
1. The data set we wish to use is specified using `data=mpg`. This is the first argument defined in the function, so you could skip the `data=` part if the input data.frame is the first argument.
2. The column in the data that we wish to investigate is defined in the `aes(x=class)` part. This means the x\-axis will be the car’s class, which is indicated by the column named `class`.
3. The way we want to display this information is using a bar chart.
By default, `geom_bar()` just counts the number of cases and displays how many observations were in each factor level. If you have a data frame that you have already summarized, `geom_col()` will allow you to set the height of each bar using a \\(y\\) column.
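For example (a short sketch; the pre\-summarized data frame `class.counts` is made up here), `geom_col()` takes the bar heights directly from a \\(y\\) column:
```
# Hypothetical pre-summarized data: one row per class along with its count.
class.counts <- mpg %>% count(class)   # columns: class, n
ggplot(class.counts, aes(x=class, y=n)) +
  geom_col()                           # bar heights come straight from n
```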
### 9\.1\.2 Histograms
Histograms also focus on a single variable and give how frequently particular ranges of the data occur.
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Just as `geom_bar` by default calculated the number of observations in each level of my factor of interest, `geom_histogram` breaks up the x\-axis into distinct bins (by default, 30 bins), then counts how many observations fall into each bin, and displays the number as a bar. To change the number of bins, we could either tell it the number of bins (e.g. `bins=20`) or the width of each bin (e.g. `binwidth=4`).
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram(bins=8) # 8 bins
```
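Equivalently, using the `binwidth=4` value mentioned above, we could specify the width of each bin instead of the number of bins:
```
ggplot(mpg, aes(x=hwy)) +
  geom_histogram(binwidth=4) # bins that are 4 mpg wide
```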
Often we want to rescale the y\-axis so that it is in terms of density, which is \\\[density\=\\frac{\\\#\\;observations\\;in\\;bin}{total\\;number\\;of\\;observations}\\cdot\\frac{1}{bin\\;width}\\]
To ask `geom_histogram` to calculate the density instead of counts, we simply add an option to the `aes()` list that specifies that the y\-axis should be the density. Notice that this only rescales the y\-axis and the shape of the histogram is identical.
```
ggplot(mpg, aes(x=hwy, y=..density..)) +
geom_histogram(bins=8) # 8 bins
```
### 9\.1\.3 Scatterplots
To start with, we’ll make a very simple scatterplot using the `iris` dataset that will make a scatterplot of `Sepal.Length` versus `Petal.Length`, which are two columns in my dataset.
```
data(iris) # load the iris dataset that comes with R
str(iris) # what columns do we have to play with...
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( )
```
1. The data set we wish to use is specified using `data=iris`.
2. The relationship we want to explore is `x=Sepal.Length` and `y=Petal.Length`. This means the x\-axis will be the Sepal Length and the y\-axis will be the Petal Length.
3. The way we want to display this relationship is through graphing 1 point for every observation.
We can define other attributes that might reflect other aspects of the data. For example, we might want the color of the data point to change dynamically based on the species of iris.
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length, color=Species) ) +
geom_point( )
```
The `aes()` command inside the previous section of code is quite mysterious. The way to think about the `aes()` is that it gives you a way to define relationships that are data dependent. In the previous graph, the x\-value and y\-value for each point were defined dynamically by the data, as was the color. If we just wanted all the data points to be colored blue and larger, then the following code would do that:
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( color='blue', size=4 )
```
The important part isn’t that color and size were defined in the `geom_point()` but that they were defined outside of an `aes()` function!
1. Anything set inside an `aes()` command will be of the form `attribute=Column_Name` and will change based on the data.
2. Anything set outside an `aes()` command will be in the form `attribute=value` and will be fixed.
### 9\.1\.4 Box Plots
Boxplots are a common way to show a categorical variable on the x\-axis and continuous on the y\-axis.
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot()
```
The boxes show the \\(25^{th}\\), \\(50^{th}\\), and \\(75^{th}\\) percentiles, and the lines coming off the box extend to the smallest and largest non\-outlier observations.
9\.2 Fine Tuning
----------------
### 9\.2\.1 Labels
To make a graph more understandable, it is necessary to tweak labels for the axes and add a main title and such. Here we’ll adjust labels in a graph, including the legend labels.
```
# Treat the number of cylinders in a car as a categorical variable (4,6 or 8)
mtcars$cyl <- factor(mtcars$cyl)
ggplot(mtcars, aes(x=wt, y=mpg, col=cyl)) +
geom_point() +
labs( title='Weight vs Miles per Gallon') +
labs( x="Weight in tons (2000 lbs)", y="Miles per Gallon (US)" ) +
labs( color="Cylinders")
```
You could either call the `labs()` command repeatedly with each label, or you could provide multiple arguments to just one `labs()` call.
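For instance, the labels above could be collapsed into a single `labs()` call:
```
ggplot(mtcars, aes(x=wt, y=mpg, col=cyl)) +
  geom_point() +
  labs( title='Weight vs Miles per Gallon',
        x="Weight in tons (2000 lbs)",
        y="Miles per Gallon (US)",
        color="Cylinders" )
```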
### 9\.2\.2 Color Scales
Adjusting the color palette for the color scales is not particularly hard, but it isn’t intuitive. You can either set the colors up using a set of predefined palettes or pick the colors yourself. Furthermore, we need to recognize that picking colors for a continuous covariate is different than for a factor. In the continuous case, we have to pick low and high colors and `ggplot` will smoothly transition between the two. In the discrete case with a factor, each factor level gets its own color.
To make these choices, we will use the functions that modify the scales. In particular, if we are modifying the `color` aesthetic, we will use the `scale_color_XXX` functions where the `XXX` gets replaced by something more specific. If we are modifying the `fill` colors, then we will use the `scale_fill_XXX` family of functions.
#### 9\.2\.2\.1 Colors for Factors
We can set the colors manually using the function `scale_color_manual` which expects the name of the colors for each factor level. The order given in the `values` argument corresponds to the order of the levels of the factor.
For a nice list of the named colors you can use, I like to refer to this webpage: [https://www.nceas.ucsb.edu/\~frazier/RSpatialGuides/colorPaletteCheatsheet.pdf](https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/colorPaletteCheatsheet.pdf)
```
ggplot(iris, aes(x=Sepal.Width, y=Sepal.Length, color=Species)) +
geom_point() +
scale_color_manual(values=c('blue', 'darkmagenta', 'aquamarine'))
```
If you want to instead pick a color palette and let the palette pick the colors to be farthest apart based on the number of factor levels, you can use `scale_color_manual` and have the values chosen by one of the palette functions; you just have to tell it how many levels you have.
```
library(colorspace) # these two packages have some decent
library(grDevices) # color palettes functions.
rainbow(6) # if we have six factor levels, what colors should we use?
```
```
## [1] "#FF0000FF" "#FFFF00FF" "#00FF00FF" "#00FFFFFF" "#0000FFFF" "#FF00FFFF"
```
```
ggplot(iris, aes(x=Sepal.Width, y=Sepal.Length, color=Species)) +
geom_point() +
scale_color_manual(values = rainbow(3))
```
#### 9\.2\.2\.2 Colors for continuous values
For this example, we will consider an elevation map of the Maunga Whau volcano in New Zealand. This dataset comes built into R as the matrix `volcano`, but I’ve modified it slightly and saved it to a package I have on GitHub called `dsdata`.
```
library(devtools)
install_github('dereksonderegger/dsdata')
```
```
## Downloading GitHub repo dereksonderegger/dsdata@master
## from URL https://api.github.com/repos/dereksonderegger/dsdata/zipball/master
```
```
## Installing dsData
```
```
## '/Library/Frameworks/R.framework/Resources/bin/R' --no-site-file \
## --no-environ --no-save --no-restore --quiet CMD INSTALL \
## '/private/var/folders/d1/drs_scp95wd_s6zsdksk312m0000gn/T/RtmpIFV4tp/devtoolsb6f34c27dfe2/dereksonderegger-dsData-43b2f6d' \
## --library='/Library/Frameworks/R.framework/Versions/3.4/Resources/library' \
## --install-tests
```
```
##
```
```
data('Eden', package='dsData')
```
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_raster()
```
The default gradient isn’t too bad, but we might want to manually choose two colors to smoothly scale between. Because I want to affect the colors I’ve chosen for the `fill` aesthetic, I have to modify this using `scale_fill_XXX`.
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_tile() +
scale_fill_gradient(low = "red", high = "blue")
```
I think we ought to have the blue color come in a little earlier. Also, I want to specify a middle color so that our graph transitions from red to green to blue. To do this, we also have to specify where the middle color should be located along the elevation range.
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_tile() +
scale_fill_gradient2(low = "red", mid='green', high = "blue",
midpoint=135)
```
If we don’t want to specify the colors manually we can, as usual, specify the color palette. The `gradientn` functions allow us to specify a large number of intermediate colors.
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_tile() +
scale_fill_gradientn(colours = terrain.colors(5))
```
### 9\.2\.3 Adjusting axes
#### 9\.2\.3\.1 Setting breakpoints
Sometimes the default axis breakpoints aren’t quite what I want and I want to add a number or remove a number. To do this, we will modify the x or y scale. Typically I only have a problem when the axis is continuous, so we will concentrate on that case.
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot()
```
In this case, suppose that we want the major breakpoints (which have labels) to occur every 5 mpg, and the minor breakpoints (which just have a white line) to occur midway between those (so every 2\.5 mpg).
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot() +
scale_y_continuous( breaks = seq(10, 45, by=5) )
```
If we wanted to adjust the minor breaks, we could do that using the `minor_breaks` argument. If we want to remove the minor breaks completely, we could set the minor breaks to be `NULL`.
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot() +
scale_y_continuous( breaks = seq(10, 45, by=5), minor_breaks = NULL )
```
### 9\.2\.4 Zooming in/out
It is often important to be able to force the graph to have a particular range in either the x\-axis or the y\-axis. Given a particular range of interest, there are two ways that we could do this:
* Remove all data points that fall outside the range and just plot the reduced dataset. This is accomplished using the `xlim()` and `ylim()` functions, or by setting either of those inside another `scale_XXX` function.
* Use all the data to create a graph and just zoom in/out in that graph. This is accomplished using the `coord_cartesian()` function.
```
ggplot(trees, aes(x=Girth, y=Volume)) +
geom_point() +
geom_smooth(method='lm')
```
If we want to reset the x\-axis to stop at \\(x\=19\\) and the y\-axis at \\(y\=60\\), then we could use the `xlim()` and `ylim()` functions, but this will chop off the regression line, and the excluded data point won’t even be used when calculating the regression.
```
# Danger! This removes the data points first!
ggplot(trees, aes(x=Girth, y=Volume)) +
geom_point() +
geom_smooth(method='lm') +
xlim( 8, 19 ) + ylim(0, 60)
```
```
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
```
```
## Warning: Removed 1 rows containing missing values (geom_point).
```
Alternatively, we could use the `coord_cartesian()` function to chop the axes *after* everything has been calculated.
```
# Safer! Create the graph and then just zoom in
ggplot(trees, aes(x=Girth, y=Volume)) +
geom_point() +
geom_smooth(method='lm') +
coord_cartesian( xlim=c(8, 19 ), ylim=c(0, 60))
```
9\.3 Cookbook Examples
----------------------
### 9\.3\.1 Scatterplot with prediction ribbons
Often I want to create a scatterplot and then graph the predicted values as a ribbon on top. While it is possible to do this automatically using the `geom_smooth()` function, I prefer not to because I don’t have much control over how the model is created.
```
# fit a linear model to the trees dataset
model <- lm( Volume ~ Girth, data=trees )
# add the fitted values and confidence interval values for each observation
# to the original data frame, and call the augmented dataset trees.aug.
trees.aug <- trees %>% cbind( predict(model, interval='confidence', newdata=.) )
# Plot the augmented data. Alpha is the opacity of the ribbon
ggplot(trees.aug, aes(x=Girth, y=Volume)) +
geom_ribbon( aes(ymin=lwr, ymax=upr), alpha=.4, fill='darkgrey' ) +
geom_line( aes(y=fit) ) +
geom_point( aes( y = Volume ) )
```
### 9\.3\.2 Bar Plot
Suppose that you just want make some barplots and add \\(\\pm\\) S.E. bars. This should be really easy to do, but in the base graphics in R, it is a pain. Fortunately in `ggplot2` this is easy. First, define a data frame with the bar heights you want to graph and the \\(\\pm\\) values you wish to use.
```
# Calculate the mean and sd of the Petal Widths for each species
stats <- iris %>%
group_by(Species) %>%
summarize( Mean = mean(Petal.Width), # Mean = ybar
StdErr = sd(Petal.Width)/sqrt(n()) ) %>% # StdErr = s / sqrt(n)
mutate( lwr = Mean - StdErr,
upr = Mean + StdErr )
stats
```
```
## # A tibble: 3 x 5
## Species Mean StdErr lwr upr
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 0.246 0.0149 0.231 0.261
## 2 versicolor 1.33 0.0280 1.30 1.35
## 3 virginica 2.03 0.0388 1.99 2.06
```
Next we take these summary statistics and define the following graph, which makes a bar graph of the means and error bars that are \\(\\pm\\) 1 estimated standard deviation of the mean (usually referred to as the standard error of the mean). By default, `geom_bar()` tries to draw a bar plot based on how many observations each group has. What I want, though, is to draw bars of the height I specified, so to do that I have to add `stat='identity'` to specify that it should just use the heights I tell it.
```
ggplot(stats, aes(x=Species)) +
geom_bar( aes(y=Mean), stat='identity') +
geom_errorbar( aes(ymin=lwr, ymax=upr) )
```
While this isn’t too bad, we would like to make this a bit more pleasing to look at. Each of the bars is a little too wide and the error bars should be a tad narrower than the bar. Also, the fill color for the bars is too dark. So I’ll change all of these by setting those attributes *outside of an `aes()` command*.
```
ggplot(stats, aes(x=Species)) +
geom_bar( aes(y=Mean), stat='identity', fill='grey', width=.6) +
geom_errorbar( aes(ymin=lwr, ymax=upr), color='red', width=.4 )
```
The last thing to notice is that the *order* in which the different layers are drawn matters. This is similar to Photoshop or GIS software, where the layers added last can obscure prior layers. In the graph below, the lower part of the error bar is obscured by the grey bar.
```
ggplot(stats, aes(x=Species)) +
geom_errorbar( aes(ymin=lwr, ymax=upr), color='red', width=.4 ) +
geom_bar( aes(y=Mean), stat='identity', fill='grey', width=.6)
```
### 9\.3\.3 Distributions
Often I need to plot a distribution and perhaps shade some area in. In this section we’ll give a method for plotting continuous and discrete distributions using `ggplot2`.
#### 9\.3\.3\.1 Continuous distributions
First we need to create a data.frame that contains a sequence of (x,y) pairs that we’ll pass to our graphing program to draw the curve by connecting\-the\-dots; because the dots will be very close together, the resulting curve looks smooth. For example, let’s plot the F\-distribution with parameters \\(\\nu\_{1}\=5\\) and \\(\\nu\_{2}\=30\\).
```
# define 1000 points to do a "connect-the-dots"
plot.data <- data.frame( x=seq(0,10, length=1000) ) %>%
mutate( density = df(x, 5, 30) )
ggplot(plot.data, aes(x=x, y=density)) +
geom_line() + # just a line
geom_area() # shade in the area under the line
```
This isn’t too bad, but often we want to add some color to two different sections; perhaps we want different colors distinguishing between values \\(\\ge2\.5\\) and values \\(\<2\.5\\).
```
plot.data <- data.frame( x=seq(0,10, length=1000) ) %>%
mutate( density = df(x, 5, 30),
Group = ifelse(x <= 2.5, 'Less','Greater') )
ggplot(plot.data, aes(x=x, y=density, fill=Group)) +
geom_area() +
geom_line()
```
#### 9\.3\.3\.2 Discrete distributions
The idea for discrete distributions will be to draw points for the probabilities and then add vertical lines down to zero. Let’s look at doing this for the Poisson distribution with rate parameter \\(\\lambda\=2\\).
```
plot.data <- data.frame( x=seq(0,10) ) %>%
mutate( probability = dpois(x, lambda=2) )
ggplot(plot.data, aes(x=x)) +
geom_point( aes(y=probability) ) +
geom_linerange(aes(ymax=probability), ymin=0)
```
The key trick here was to set the `ymin` value to always be zero.
9\.4 Exercises
--------------
1. Consider the dataset `trees`, which should already be pre\-loaded. Look at the help file using `?trees` for more information about this data set. We wish to build a scatterplot that compares the height and girth of these cherry trees to the volume of lumber that was produced.
1. Create a graph using ggplot2 with Height on the x\-axis, Volume on the y\-axis, and Girth as either the size or the color of the data point. Which do you think is a more intuitive representation?
2. Add appropriate labels for the main title and the x and y axes.
2. Consider the following small dataset that represents the number of times per day my wife played “Ring around the Rosy” with my daughter relative to the number of days since she learned this game. The column `yhat` represents the best fitting line through the data, and `lwr` and `upr` represent a 95% confidence interval for the predicted value on that day.
```
Rosy <- data.frame(
times = c(15, 11, 9, 12, 5, 2, 3),
day = 1:7,
yhat = c(14.36, 12.29, 10.21, 8.14, 6.07, 4.00, 1.93),
lwr = c( 9.54, 8.5, 7.22, 5.47, 3.08, 0.22, -2.89),
upr = c(19.18, 16.07, 13.2, 10.82, 9.06, 7.78, 6.75))
```
1. Using `ggplot()` and `geom_point()`, create a scatterplot with `day` along the x\-axis and `times` along the y\-axis.
2. Add a line to the graph where the x\-values are the `day` values but now the y\-values are the predicted values which we’ve called `yhat`. Notice that you have to set the aesthetic y\=times for the points and y\=yhat for the line. Because each `geom_` will accept an `aes()` command, you can specify the `y` attribute to be different for different layers of the graph.
3. Add a ribbon that represents the confidence region of the regression line. The `geom_ribbon()` function requires an `x`, `ymin`, and `ymax` columns to be defined. For examples of using `geom_ribbon()` see the online documentation: [http://docs.ggplot2\.org/current/geom\_ribbon.html](http://docs.ggplot2.org/current/geom_ribbon.html).
```
ggplot(Rosy, aes(x=day)) +
geom_point(aes(y=times)) +
geom_line( aes(y=yhat)) +
geom_ribbon( aes(ymin=lwr, ymax=upr), fill='salmon')
```
4. What happened when you added the ribbon? Did some points get hidden? If so, why?
5. Reorder the statements that created the graph so that the ribbon is on the bottom and the data points are on top and the regression line is visible.
6. The color of the ribbon fill is ugly. Use Google to find a list of named colors available to `ggplot2`. For example, I googled “ggplot2 named colors” and found the following link: [http://sape.inf.usi.ch/quick\-reference/ggplot2/colour](http://sape.inf.usi.ch/quick-reference/ggplot2/colour). Choose a color for the fill that is pleasing to you.
7. Add labels for the x\-axis and y\-axis that are appropriate along with a main title.
3. The R package `babynames` contains a single dataset that lists the number of children registered with Social Security with a particular name, along with the proportion out of all children born in a given year. The dataset covers the years from 1880 to the present. We want to plot the relative popularity of the names ‘Elise’ and ‘Casey’.
1. Load the package. If it is not found on your computer, download the package from CRAN.
```
library(babynames)
data("babynames")
```
2. Read the help file for the data set `babynames` to get a sense of the columns.
3. Create a small dataset that only has the names ‘Elise’ and ‘Casey’.
4. Make a plot where the x\-axis is the year and the y\-axis is the proportion of babies given the names. Use a line to display this relationship and distinguish the two names by color. Notice this graph is a bit ugly because there is a lot of year\-to\-year variability that we should smooth over.
5. We’ll use dplyr to collapse the individual years into decades using the following code:
```
small <- babynames %>%
filter( name=='Elise' | name=='Casey') %>%
mutate( decade = cut(year, breaks = seq(1869,2019,by=10) )) %>%
group_by(name, decade) %>%
summarise( prop = mean(prop),
year = min(year))
```
6. Now draw the same graph you had in part (d).
7. Next we’ll create an area plot where the height is the total proportion of both names and the colors split up the proportion.
```
ggplot(small, aes(x=year, y=prop, fill=name)) +
geom_area()
```
This is a pretty neat graph, as it shows the relative popularity of the names over time and can easily be expanded to many, many names. In fact, there is a wonderful website that takes this same data and allows you to select the names quite nicely: <http://www.babynamewizard.com/voyager>. My wife and I used this a lot while figuring out what to name our children. Notice that this site really uses the same graph type we just built, but with a few extra neat interactivity tricks.
9\.1 Basic Graphs
-----------------
### 9\.1\.1 Bar Charts
Bar charts and histograms are how we think about displaying informtion about a single covariate. That is to say, we are not trying to make a graph of the relationship between \\(x\\) and \\(y\\), but rather understanding what values of \\(x\\) are present and how frequently they show up.
For displaying a categorical variable on the x\-axis, a bar chart is a good option. Here we consider a data set that gives the fuel efficiency of different classes of vehicles in two different years. This is a subset of data that the EPA makes available on <http://fueleconomy.gov>. It contains only model which had a new release every year between 1999 and 2008 and therefore represents the most popular cars sold in the US. It includes information for each model for years 1999 and 2008\. The dataset is included in the `ggplot2` package as `mpg`.
```
data(mpg, package='ggplot2') # load the dataset
str(mpg)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 234 obs. of 11 variables:
## $ manufacturer: chr "audi" "audi" "audi" "audi" ...
## $ model : chr "a4" "a4" "a4" "a4" ...
## $ displ : num 1.8 1.8 2 2 2.8 2.8 3.1 1.8 1.8 2 ...
## $ year : int 1999 1999 2008 2008 1999 1999 2008 1999 1999 2008 ...
## $ cyl : int 4 4 4 4 6 6 6 4 4 4 ...
## $ trans : chr "auto(l5)" "manual(m5)" "manual(m6)" "auto(av)" ...
## $ drv : chr "f" "f" "f" "f" ...
## $ cty : int 18 21 20 21 16 18 18 18 16 20 ...
## $ hwy : int 29 29 31 30 26 26 27 26 25 28 ...
## $ fl : chr "p" "p" "p" "p" ...
## $ class : chr "compact" "compact" "compact" "compact" ...
```
First we could summarize the data by how many models there are in the different classes.
```
ggplot(data=mpg, aes(x=class)) +
geom_bar()
```
1. The data set we wish to use is specified using `data=mpg`. This is the first argument defined in the function, so you could skip the `data=` part if the input data.frame is the first argument.
2. The column in the data that we wish to investigate is defined in the `aes(x=class)` part. This means the x\-axis will be the car’s class, which is indicated by the column named `class`.
3. The way we want to display this information is using a bar chart.
By default, the `geom_bar()` just counts the number of cases and displays how many observations were in each factor level. If I have a data frame that I have already summarized, `geom_col` will allow you to set the height of the bar by a \\(y\\) column.
### 9\.1\.2 Histograms
Histograms also focus on a single variable and give how frequently particular ranges of the data occur.
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Just as `geom_bar` by default calculated the number of observations in each level of my factor of interest, `geom_histogram` breaks up the x\-axis into distinct bins (by default, 30 bins), and then counts how many observations fall into each bin, and displys the number as a bar. To change the number of bins, we could either tell it the number of bins (e.g. `bins=20`) or the width of each bin (e.g. `binwidth=4`).
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram(bins=8) # 8 bins
```
Often we want to rescale the y\-axis so that it is in terms of density, which is \\\[density\=\\frac{\\\#\\;observations\\;in\\;bin}{total\\;number\\;observations}\\cdot\\frac{1}{bin\\;width}\\]
To ask `geom_histogram` to calculate the density instead of counts, we simply add an option to the `aes()` list that specifies that the y\-axis should be the density. Notice that this only rescales the y\-axis and the shape of the histogram is identical.
```
ggplot(mpg, aes(x=hwy, y=..density..)) +
geom_histogram(bins=8) # 8 bins
```
### 9\.1\.3 Scatterplots
To start with, we’ll make a very simple scatterplot using the `iris` dataset that will make a scatterplot of `Sepal.Length` versus `Petal.Length`, which are two columns in my dataset.
```
data(iris) # load the iris dataset that comes with R
str(iris) # what columns do we have to play with...
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( )
```
1. The data set we wish to use is specified using `data=iris`.
2. The relationship we want to explore is `x=Sepal.Length` and `y=Petal.Length`. This means the x\-axis will be the Sepal Length and the y\-axis will be the Petal Length.
3. The way we want to display this relationship is through graphing 1 point for every observation.
We can define other attributes that might reflect other aspects of the data. For example, we might want for the of the data point to change dynamically based on the species of iris.
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length, color=Species) ) +
geom_point( )
```
The `aes()` command inside the previous section of code is quite mysterious. The way to think about the `aes()` is that it gives you a way to define relationships that are data dependent. In the previous graph, the x\-value and y\-value for each point was defined dynamically by the data, as was the color. If we just wanted all the data points to be colored blue and larger, then the following code would do that
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( color='blue', size=4 )
```
The important part isn’t that color and size were defined in the `geom_point()` but that they were defined outside of an `aes()` function!
1. Anything set inside an `aes()` command will be of the form `attribute=Column_Name` and will change based on the data.
2. Anything set outside an `aes()` command will be in the form `attribute=value` and will be fixed.
### 9\.1\.4 Box Plots
Boxplots are a common way to show a categorical variable on the x\-axis and continuous on the y\-axis.
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot()
```
The boxes show the \\(25^{th}\\), \\(50^{th}\\), and \\(75^{th}\\) percentile and the lines coming off the box extend to the smallest and largest non\-outlier observation.
### 9\.1\.1 Bar Charts
Bar charts and histograms are how we think about displaying informtion about a single covariate. That is to say, we are not trying to make a graph of the relationship between \\(x\\) and \\(y\\), but rather understanding what values of \\(x\\) are present and how frequently they show up.
For displaying a categorical variable on the x\-axis, a bar chart is a good option. Here we consider a data set that gives the fuel efficiency of different classes of vehicles in two different years. This is a subset of data that the EPA makes available on <http://fueleconomy.gov>. It contains only model which had a new release every year between 1999 and 2008 and therefore represents the most popular cars sold in the US. It includes information for each model for years 1999 and 2008\. The dataset is included in the `ggplot2` package as `mpg`.
```
data(mpg, package='ggplot2') # load the dataset
str(mpg)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 234 obs. of 11 variables:
## $ manufacturer: chr "audi" "audi" "audi" "audi" ...
## $ model : chr "a4" "a4" "a4" "a4" ...
## $ displ : num 1.8 1.8 2 2 2.8 2.8 3.1 1.8 1.8 2 ...
## $ year : int 1999 1999 2008 2008 1999 1999 2008 1999 1999 2008 ...
## $ cyl : int 4 4 4 4 6 6 6 4 4 4 ...
## $ trans : chr "auto(l5)" "manual(m5)" "manual(m6)" "auto(av)" ...
## $ drv : chr "f" "f" "f" "f" ...
## $ cty : int 18 21 20 21 16 18 18 18 16 20 ...
## $ hwy : int 29 29 31 30 26 26 27 26 25 28 ...
## $ fl : chr "p" "p" "p" "p" ...
## $ class : chr "compact" "compact" "compact" "compact" ...
```
First we could summarize the data by how many models there are in the different classes.
```
ggplot(data=mpg, aes(x=class)) +
geom_bar()
```
1. The data set we wish to use is specified using `data=mpg`. This is the first argument defined in the function, so you could skip the `data=` part if the input data.frame is the first argument.
2. The column in the data that we wish to investigate is defined in the `aes(x=class)` part. This means the x\-axis will be the car’s class, which is indicated by the column named `class`.
3. The way we want to display this information is using a bar chart.
By default, the `geom_bar()` just counts the number of cases and displays how many observations were in each factor level. If I have a data frame that I have already summarized, `geom_col` will allow you to set the height of the bar by a \\(y\\) column.
### 9\.1\.2 Histograms
Histograms also focus on a single variable and give how frequently particular ranges of the data occur.
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Just as `geom_bar` by default calculated the number of observations in each level of my factor of interest, `geom_histogram` breaks up the x\-axis into distinct bins (by default, 30 bins), and then counts how many observations fall into each bin, and displys the number as a bar. To change the number of bins, we could either tell it the number of bins (e.g. `bins=20`) or the width of each bin (e.g. `binwidth=4`).
```
ggplot(mpg, aes(x=hwy)) +
geom_histogram(bins=8) # 8 bins
```
Often we want to rescale the y\-axis so that it is in terms of density, which is \\\[density\=\\frac{\\\#\\;observations\\;in\\;bin}{total\\;number\\;observations}\\cdot\\frac{1}{bin\\;width}\\]
To ask `geom_histogram` to calculate the density instead of counts, we simply add an option to the `aes()` list that specifies that the y\-axis should be the density. Notice that this only rescales the y\-axis and the shape of the histogram is identical.
```
ggplot(mpg, aes(x=hwy, y=..density..)) +
geom_histogram(bins=8) # 8 bins
```
### 9\.1\.3 Scatterplots
To start with, we’ll make a very simple scatterplot using the `iris` dataset that will make a scatterplot of `Sepal.Length` versus `Petal.Length`, which are two columns in my dataset.
```
data(iris) # load the iris dataset that comes with R
str(iris) # what columns do we have to play with...
```
```
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( )
```
1. The data set we wish to use is specified using `data=iris`.
2. The relationship we want to explore is `x=Sepal.Length` and `y=Petal.Length`. This means the x\-axis will be the Sepal Length and the y\-axis will be the Petal Length.
3. The way we want to display this relationship is through graphing 1 point for every observation.
We can define other attributes that might reflect other aspects of the data. For example, we might want for the of the data point to change dynamically based on the species of iris.
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length, color=Species) ) +
geom_point( )
```
The `aes()` command inside the previous section of code is quite mysterious. The way to think about the `aes()` is that it gives you a way to define relationships that are data dependent. In the previous graph, the x\-value and y\-value for each point was defined dynamically by the data, as was the color. If we just wanted all the data points to be colored blue and larger, then the following code would do that
```
ggplot( data=iris, aes(x=Sepal.Length, y=Petal.Length) ) +
geom_point( color='blue', size=4 )
```
The important part isn’t that color and size were defined in the `geom_point()` but that they were defined outside of an `aes()` function!
1. Anything set inside an `aes()` command will be of the form `attribute=Column_Name` and will change based on the data.
2. Anything set outside an `aes()` command will be in the form `attribute=value` and will be fixed.
### 9\.1\.4 Box Plots
Boxplots are a common way to show a categorical variable on the x\-axis and continuous on the y\-axis.
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot()
```
The boxes show the \\(25^{th}\\), \\(50^{th}\\), and \\(75^{th}\\) percentile and the lines coming off the box extend to the smallest and largest non\-outlier observation.
9\.2 Fine Tuning
----------------
### 9\.2\.1 Labels
To make a graph more understandable, it is necessary to tweak labels for the axes and add a main title and such. Here we’ll adjust labels in a graph, including the legend labels.
```
# Treat the number of cylinders in a car as a categorical variable (4,6 or 8)
mtcars$cyl <- factor(mtcars$cyl)
ggplot(mtcars, aes(x=wt, y=mpg, col=cyl)) +
geom_point() +
labs( title='Weight vs Miles per Gallon') +
labs( x="Weight in tons (2000 lbs)", y="Miles per Gallon (US)" ) +
labs( color="Cylinders")
```
You could either call the `labs()` command repeatedly with each label, or you could provide multiple arguements to just one `labs()` call.
### 9\.2\.2 Color Scales
Adjusting the color palette for the color scales is not particularly hard, but it isn’t intuitive. You can either set them up using a set of predefined palettes or you can straight up pick the colors. Furthermore we need to recognize that picking colors for a continuous covariate is different than for a factor. In the continuous case, we have to pick a low and high colors and `ggplot` will smoothly transition between the two. In the discrete case with a factor, each factor level gets its own color.
To make these choices, we will use the functions that modify the scales. In particular, if we are modifying the `color` aesthetic, we will use the `scale_color_XXX` functions where the `XXX` gets replaced by something more specific. If we are modifying the `fill` colors, then we will use the `scale_fill_XXX` family of functions.
#### 9\.2\.2\.1 Colors for Factors
We can set the colors manually using the function `scale_color_manual` which expects the name of the colors for each factor level. The order given in the `values` argument corresponds to the order of the levels of the factor.
For a nice list of the named colors you can use, I like to refer to this webpage: [https://www.nceas.ucsb.edu/\~frazier/RSpatialGuides/colorPaletteCheatsheet.pdf](https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/colorPaletteCheatsheet.pdf)
```
ggplot(iris, aes(x=Sepal.Width, y=Sepal.Length, color=Species)) +
geom_point() +
scale_color_manual(values=c('blue', 'darkmagenta', 'aquamarine'))
```
If you want to instead pick a color palette and let the palette pick the colors to be farthest apart based on the number of factor levels, you can use `scale_color_manual` and then have the values chosen by one of the palette functions where you just have to tell it how many levels you have.
```
library(colorspace) # these two packages have some decent
library(grDevices) # color palettes functions.
rainbow(6) # if we have six factor levels, what colors should we use?
```
```
## [1] "#FF0000FF" "#FFFF00FF" "#00FF00FF" "#00FFFFFF" "#0000FFFF" "#FF00FFFF"
```
```
ggplot(iris, aes(x=Sepal.Width, y=Sepal.Length, color=Species)) +
geom_point() +
scale_color_manual(values = rainbow(3))
```
#### 9\.2\.2\.2 Colors for continuous values
For this example, we will consider an elevation map of the Maunga Whau volcano in New Zealand. This dataset comes built into R as the matrix `volcano`, but I’ve modified it slightly and saved it to a package I have on github called `dsdata`
```
library(devtools)
install_github('dereksonderegger/dsdata')
```
```
## Downloading GitHub repo dereksonderegger/dsdata@master
## from URL https://api.github.com/repos/dereksonderegger/dsdata/zipball/master
```
```
## Installing dsData
```
```
## '/Library/Frameworks/R.framework/Resources/bin/R' --no-site-file \
## --no-environ --no-save --no-restore --quiet CMD INSTALL \
## '/private/var/folders/d1/drs_scp95wd_s6zsdksk312m0000gn/T/RtmpIFV4tp/devtoolsb6f34c27dfe2/dereksonderegger-dsData-43b2f6d' \
## --library='/Library/Frameworks/R.framework/Versions/3.4/Resources/library' \
## --install-tests
```
```
##
```
```
data('Eden', package='dsData')
```
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_raster()
```
The default gradient isn’t too bad, but we might want to manually chose two colors to smoothly scale between. Because I want to effect the colors I’ve chosen for the `fill` aesthetic, I have to modify this using `scale_fill_XXX`
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_tile() +
scale_fill_gradient(low = "red", high = "blue")
```
I think we ought to have the blue color come in a little earlier. Also, I want to specify a middle color so that our graph transitions from red to green to blue. To do this, we also have to specify where the middle color should be located along the elevation range.
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_tile() +
scale_fill_gradient2(low = "red", mid='green', high = "blue",
midpoint=135)
```
If we don’t want to specify the colors manually we can, as usual, specify the color palette. The `gradientn` functions allow us to specify a large numbers intermediate colors.
```
ggplot( Eden, aes(x=x, y=y, fill=elevation)) +
geom_tile() +
scale_fill_gradientn(colours = terrain.colors(5))
```
### 9\.2\.3 Adjusting axes
#### 9\.2\.3\.1 Setting breakpoints
Sometimes the default axis breakpoints aren’t quite what I want and I want to add a number or remove a number. To do this, we will modify the x or y scale. Typically I only have a problem when the axis is continuous, so we will concentrate on that case.
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot()
```
In this case, suppose that we want the major breakpoints (which have labels) to occur every 5 mpg, and the minor breakpoints (which just have a white line) to occur midway between those (so every 2\.5 mpg).
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot() +
scale_y_continuous( breaks = seq(10, 45, by=5) )
```
If we wanted to adjust the minor breaks, we could do that using the `minor_breaks` argument. If we want to remove the minor breaks completely, we could set the minor breaks to be `NULL`
```
ggplot(mpg, aes(x=class, y=hwy)) +
geom_boxplot() +
scale_y_continuous( breaks = seq(10, 45, by=5), minor_breaks = NULL )
```
### 9\.2\.4 Zooming in/out
It is often important to be able to force the graph to have a particular range in either the x\-axis or the y\-axis. Given a particular range of interest, there are two ways that we could this:
* Remove all data points that fall outside the range and just plot the reduced dataset. This is accomplished using the `xlim()` and `ylim()` functions, or setting either of those inside another `scale_XXX` function.
* Use all the data to create a graph and just zoom in/out in that graph. This is accomplished using the `coord_cartesian()` function
```
ggplot(trees, aes(x=Girth, y=Volume)) +
geom_point() +
geom_smooth(method='lm')
```
If we want to reset the x\-axis to stop at \\(x\=19\\), and \\(y\=60\\), then we could use the `xlim()` and `ylim()` functions, but this will cause the regression line to be chopped off and it won’t even use that data point when calculating the regression.
```
# Danger! This removes the data points first!
ggplot(trees, aes(x=Girth, y=Volume)) +
geom_point() +
geom_smooth(method='lm') +
xlim( 8, 19 ) + ylim(0, 60)
```
```
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
```
```
## Warning: Removed 1 rows containing missing values (geom_point).
```
Alternatively, we could use the `coord_cartesion` function to chop the axes \_after\_everything has been calculated.
```
# Safer! Create the graph and then just zoom in
ggplot(trees, aes(x=Girth, y=Volume)) +
geom_point() +
geom_smooth(method='lm') +
coord_cartesian( xlim=c(8, 19 ), ylim=c(0, 60))
```
9\.3 Cookbook Examples
----------------------
### 9\.3\.1 Scatterplot with prediction ribbons
Often I want to create a scatterplot and then graph the predicted values as a ribbon on top. While it is possible to do this automatically using the `geom_smooth()` function, I prefer not to do this because I don’t have much control over how the model is created.
```
# fit a linear model to the trees dataset
model <- lm( Volume ~ Girth, data=trees )
# add the fitted values and confidence interval values for each observation
# to the original data frame, and call the augmented dataset trees.aug.
trees.aug <- trees %>% cbind( predict(model, interval='confidence', newdata=.) )
# Plot the augmented data. Alpha is the opacity of the ribbon
ggplot(trees.aug, aes(x=Girth, y=Volume)) +
geom_ribbon( aes(ymin=lwr, ymax=upr), alpha=.4, fill='darkgrey' ) +
geom_line( aes(y=fit) ) +
geom_point( aes( y = Volume ) )
```
### 9\.3\.2 Bar Plot
Suppose that you just want to make some barplots and add \\(\\pm\\) S.E. bars. This should be really easy to do, but in the base graphics in R it is a pain. Fortunately in `ggplot2` this is easy. First, define a data frame with the bar heights you want to graph and the \\(\\pm\\) values you wish to use.
```
# Calculate the mean and sd of the Petal Widths for each species
stats <- iris %>%
group_by(Species) %>%
summarize( Mean = mean(Petal.Width), # Mean = ybar
StdErr = sd(Petal.Width)/sqrt(n()) ) %>% # StdErr = s / sqrt(n)
mutate( lwr = Mean - StdErr,
upr = Mean + StdErr )
stats
```
```
## # A tibble: 3 x 5
## Species Mean StdErr lwr upr
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 0.246 0.0149 0.231 0.261
## 2 versicolor 1.33 0.0280 1.30 1.35
## 3 virginica 2.03 0.0388 1.99 2.06
```
Next we take these summary statistics and define the following graph, which makes a bar graph of the means with error bars that are \\(\\pm\\) 1 estimated standard deviation of the mean (usually referred to as the standard error of the mean). By default, `geom_bar()` tries to draw a bar plot based on how many observations each group has. What I want, though, is to draw bars of the height I specified, so to do that I have to add `stat='identity'` to specify that it should just use the heights I tell it.
```
ggplot(stats, aes(x=Species)) +
geom_bar( aes(y=Mean), stat='identity') +
geom_errorbar( aes(ymin=lwr, ymax=upr) )
```
While this isn’t too bad, we would like to make this a bit more pleasing to look at. Each of the bars is a little too wide and the error bars should be a tad narrower than the bar. Also, the fill color for the bars is too dark. So I’ll change all of these by setting those attributes *outside of an `aes()` command*.
```
ggplot(stats, aes(x=Species)) +
geom_bar( aes(y=Mean), stat='identity', fill='grey', width=.6) +
geom_errorbar( aes(ymin=lwr, ymax=upr), color='red', width=.4 )
```
The last thing to notice is that the *order* in which the different layers are added matters. This is similar to Photoshop or GIS software, where the layers added last can obscure prior layers. In the graph below, the lower part of the error bar is obscured by the grey bar.
```
ggplot(stats, aes(x=Species)) +
geom_errorbar( aes(ymin=lwr, ymax=upr), color='red', width=.4 ) +
geom_bar( aes(y=Mean), stat='identity', fill='grey', width=.6)
```
### 9\.3\.3 Distributions
Often I need to plot a distribution and perhaps shade some area in. In this section we’ll give a method for plotting continuous and discrete distributions using `ggplot2`.
#### 9\.3\.3\.1 Continuous distributions
First we need to create a data.frame that contains a sequence of (x,y) pairs that we’ll pass to our graphing program to draw the curve by connecting\-the\-dots; because the dots will be very close together, the resulting curve looks smooth. For example, let’s plot the F\-distribution with parameters \\(\\nu\_{1}\=5\\) and \\(\\nu\_{2}\=30\\).
```
# define 1000 points to do a "connect-the-dots"
plot.data <- data.frame( x=seq(0,10, length=1000) ) %>%
mutate( density = df(x, 5, 30) )
ggplot(plot.data, aes(x=x, y=density)) +
geom_line() + # just a line
geom_area() # shade in the area under the line
```
This isn’t too bad, but often we want to add some color to two different sections; perhaps we want different colors distinguishing between values \\(\\ge2\.5\\) and values \\(\<2\.5\\).
```
plot.data <- data.frame( x=seq(0,10, length=1000) ) %>%
mutate( density = df(x, 5, 30),
Group = ifelse(x <= 2.5, 'Less','Greater') )
ggplot(plot.data, aes(x=x, y=density, fill=Group)) +
geom_area() +
geom_line()
```
#### 9\.3\.3\.2 Discrete distributions
The idea for discrete distributions will be to draw points for the height and then add bars. Let’s look at doing this for the Poisson distribution with rate parameter \\(\\lambda\=2\\).
```
plot.data <- data.frame( x=seq(0,10) ) %>%
mutate( probability = dpois(x, lambda=2) )
ggplot(plot.data, aes(x=x)) +
geom_point( aes(y=probability) ) +
geom_linerange(aes(ymax=probability), ymin=0)
```
The key trick here was to set the `ymin` value to always be zero.
9\.4 Exercises
--------------
1. Consider the dataset `trees`, which should already be pre\-loaded. Look at the help file using `?trees` for more information about this data set. We wish to build a scatterplot that compares the height and girth of these cherry trees to the volume of lumber that was produced.
1. Create a graph using ggplot2 with Height on the x\-axis, Volume on the y\-axis, and Girth as either the size of the data point or the color of the data point. Which do you think is a more intuitive representation?
2. Add appropriate labels for the main title and the x and y axes.
2. Consider the following small dataset that represents the number of times per day my wife played “Ring around the Rosy” with my daughter relative to the number of days since she has learned this game. The column `yhat` represents the best fitting line through the data, and `lwr` and `upr` represent a 95% confidence interval for the predicted value on that day.
```
Rosy <- data.frame(
times = c(15, 11, 9, 12, 5, 2, 3),
day = 1:7,
yhat = c(14.36, 12.29, 10.21, 8.14, 6.07, 4.00, 1.93),
lwr = c( 9.54, 8.5, 7.22, 5.47, 3.08, 0.22, -2.89),
upr = c(19.18, 16.07, 13.2, 10.82, 9.06, 7.78, 6.75))
```
1. Using `ggplot()` and `geom_point()`, create a scatterplot with `day` along the x\-axis and `times` along the y\-axis.
2. Add a line to the graph where the x\-values are the `day` values but now the y\-values are the predicted values which we’ve called `yhat`. Notice that you have to set the aesthetic y\=times for the points and y\=yhat for the line. Because each `geom_` will accept an `aes()` command, you can specify the `y` attribute to be different for different layers of the graph.
3. Add a ribbon that represents the confidence region of the regression line. The `geom_ribbon()` function requires an `x`, `ymin`, and `ymax` columns to be defined. For examples of using `geom_ribbon()` see the online documentation: [http://docs.ggplot2\.org/current/geom\_ribbon.html](http://docs.ggplot2.org/current/geom_ribbon.html).
```
ggplot(Rosy, aes(x=day)) +
geom_point(aes(y=times)) +
geom_line( aes(y=yhat)) +
geom_ribbon( aes(ymin=lwr, ymax=upr), fill='salmon')
```
4. What happened when you added the ribbon? Did some points get hidden? If so, why?
5. Reorder the statements that created the graph so that the ribbon is on the bottom and the data points are on top and the regression line is visible.
6. The color of the ribbon fill is ugly. Use Google to find a list of named colors available to `ggplot2`. For example, I googled “ggplot2 named colors” and found the following link: [http://sape.inf.usi.ch/quick\-reference/ggplot2/colour](http://sape.inf.usi.ch/quick-reference/ggplot2/colour). Choose a color for the fill that is pleasing to you.
7. Add labels for the x\-axis and y\-axis that are appropriate along with a main title.
3. The R package `babynames` contains a single dataset that lists the number of children registered with Social Security with a particular name, along with the proportion out of all children born in a given year. The dataset covers the years from 1880 to the present. We want to plot the relative popularity of the names ‘Elise’ and ‘Casey’.
1. Load the package. If it is not found on your computer, download the package from CRAN.
```
library(babynames)
data("babynames")
```
2. Read the help file for the data set `babynames` to get a sense of the columns.
3. Create a small dataset that only has the names ‘Elise’ and ‘Casey’.
4. Make a plot where the x\-axis is the year and the y\-axis is the proportion of babies given the names. Use a line to display this relationship and distinguish the two names by color. Notice this graph is a bit ugly because there is a lot of year\-to\-year variability that we should smooth over.
5. We’ll use dplyr to collapse the individual years into decades using the following code:
```
small <- babynames %>%
filter( name=='Elise' | name=='Casey') %>%
mutate( decade = cut(year, breaks = seq(1869,2019,by=10) )) %>%
group_by(name, decade) %>%
summarise( prop = mean(prop),
year = min(year))
```
6. Now draw the same graph you had in part (d).
7. Next we’ll create an area plot where the height is the total proportion of both names and the colors split up the proportion.
```
ggplot(small, aes(x=year, y=prop, fill=name)) +
geom_area()
```
This is a pretty neat graph as it shows the relative popularity of the name over time and can easily be expanded to many, many names. In fact, there is a wonderful website that takes this same data and allows you to select the names quite nicely: <http://www.babynamewizard.com/voyager>. My wife and I used this a lot while figuring out what to name our children. Notice that this site really uses the same graph type we just built, but there are a few extra neat interactivity tricks.
Chapter 10 More `ggplot2`
=========================
10\.1 Faceting
--------------
The goal with faceting is to make many panels of graphics where each panel represents the same relationship between variables, but something changes between each panel. For example, using the `iris` dataset we could look at the relationship between `Sepal.Length` and `Sepal.Width` either with all the data in one graph, or one panel per species.
```
library(ggplot2)
ggplot(iris, aes(x=Sepal.Length, y=Sepal.Width)) +
geom_point() +
facet_grid( . ~ Species )
```
The line `facet_grid( formula )` tells `ggplot2` to make panels, and the formula tells how to orient the panels. Recall that a formula is always in the order `y ~ x`; because I want the species to change as we go across the page but don’t have anything I want to change vertically, we use `. ~ Species` to represent that. If we had wanted three graphs stacked vertically then we could use `Species ~ .`, as in the sketch below.
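As a quick illustration, here is a minimal sketch of the stacked variant (assuming `ggplot2` and the `iris` data are loaded as above):
```
# One panel per species, stacked vertically instead of side-by-side
ggplot(iris, aes(x=Sepal.Length, y=Sepal.Width)) +
geom_point() +
facet_grid( Species ~ . )
```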
For a second example, we look at a dataset that examines the amount a waiter was tipped by 244 parties. Covariates that were measured include the day of the week, size of the party, total amount of the bill, amount tipped, whether there were smokers in the group, and the gender of the person paying the bill.
```
data(tips, package='reshape')
head(tips)
```
```
## total_bill tip sex smoker day time size
## 1 16.99 1.01 Female No Sun Dinner 2
## 2 10.34 1.66 Male No Sun Dinner 3
## 3 21.01 3.50 Male No Sun Dinner 3
## 4 23.68 3.31 Male No Sun Dinner 2
## 5 24.59 3.61 Female No Sun Dinner 4
## 6 25.29 4.71 Male No Sun Dinner 4
```
It is easy to look at the relationship between the size of the bill and the percent tipped.
```
ggplot(tips, aes(x = total_bill, y = tip / total_bill )) +
geom_point()
```
Next we ask if there is a difference in tipping percent based on gender or day of the week by plotting this relationship for each combination of gender and day.
```
ggplot(tips, aes(x = total_bill, y = tip / total_bill )) +
geom_point() +
facet_grid( sex ~ day )
```
Sometimes we want multiple rows and columns of facets, but there is only one categorical variable with many levels. In that case we use `facet_wrap()`, which takes a one\-sided formula.
```
ggplot(tips, aes(x = total_bill, y = tip / total_bill )) +
geom_point() +
facet_wrap( ~ day )
```
Finally, we can allow the x and y scales to vary between the panels by setting the `scales` argument to “free”, “free\_x”, or “free\_y”. In the following code, the y\-axis scale changes between the gender groups.
```
ggplot(tips, aes(x = total_bill, y = tip / total_bill )) +
geom_point() +
facet_grid( sex ~ day, scales="free_y" )
```
10\.2 Modifying Scales
----------------------
Often it is useful to modify the scales that we have on the x or y axis. In particular we might want to display some modified version of a variable.
### 10\.2\.1 Log scales
For this example, we’ll use the `ACS` data from the `Lock5Data` package, which has information about `Income` (in thousands of dollars) and `Age`. Let’s make a scatterplot of the data.
```
library(Lock5Data)
data(ACS)
ggplot(ACS, aes(x=Age, y=Income)) +
geom_point()
```
```
## Warning: Removed 175 rows containing missing values (geom_point).
```
This is an ugly graph because six observations dominate the graph and the bulk of the data (income \< $100,000\) is squished together. One solution is to plot income on the \\(\\log\_{10}\\) scale. There are a couple ways to do this. The simplest way is to just do a transformation on the column of data.
```
ggplot(ACS, aes(x=Age, y=log10(Income))) +
geom_point()
```
```
## Warning: Removed 175 rows containing missing values (geom_point).
```
This works quite well to see the trend of peak earning happening in a person’s 40s and 50s, but the scale is difficult for me to understand (what does \\(\\log\_{10}\\left(X\\right)\=1\\) mean here? Oh right, that is \\(10^{1}\=X\\), so that is the $10,000 line). It would be really nice if we could do the transformation but have the labels on the original scale.
```
ggplot(ACS, aes(x=Age, y=Income)) +
geom_point() +
scale_y_log10()
```
```
## Warning: Transformation introduced infinite values in continuous y-axis
```
```
## Warning: Removed 175 rows containing missing values (geom_point).
```
Now the y\-axis is in the original units (thousands of dollars) but, obnoxiously, we only have two labeled values. Let’s define the major break points (the white lines that have numerical labels) to be at 1, 10, and 100 thousand dollars in salary. Likewise we will tell `ggplot2` to set minor break points from 1 to 10 thousand dollars (in steps of 1 thousand), from 10 thousand to 100 thousand (in steps of 10 thousand), and finally above 100 thousand (in steps of 100 thousand).
```
ggplot(ACS, aes(x=Age, y=Income)) +
geom_point() +
scale_y_log10(breaks=c(1,10,100),
minor=c(1:10,
seq( 10, 100,by=10 ),
seq(100,1000,by=100))) +
ylab('Income (1000s of dollars)')
```
```
## Warning: Transformation introduced infinite values in continuous y-axis
```
```
## Warning: Removed 175 rows containing missing values (geom_point).
```
### 10\.2\.2 Arbitrary transformations
The function `scale_y_log10()` is actually just a wrapper to the `scale_y_continuous()` function with a predefined transformation. If you want to rescale an axis using some other function (say the inverse, the square\-root, or \\(\\log\_{2}\\)), you can use the `scale_y_continuous()` family of functions (for the x\-axis there is a corresponding family of `scale_x_????` functions). There is a whole list of transformations built into ggplot2 that work (transformations include “asn”, “atanh”, “boxcox”, “exp”, “identity”, “log”, “log10”, “log1p”, “log2”, “logit”, “probability”, “probit”, “reciprocal”, “reverse” and “sqrt”). If you need a custom transformation, that can be done by defining a new transformation via the `trans_new()` function.
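As a sketch of the mechanics (assuming the `ACS` data from the previous example is still loaded): the first plot below selects a built\-in transformation by name, and the second rebuilds the \\(\\log\_{2}\\) transformation by hand via `scales::trans_new()`. Note that \\(\\log\_{2}\\) is already in the built\-in list above, so the hand\-built version is purely illustrative.
```
library(scales) # scales is installed alongside ggplot2
# Built-in transformation, selected by name
ggplot(ACS, aes(x=Age, y=Income)) +
geom_point() +
scale_y_continuous(trans='sqrt')
# Custom transformation: trans_new() needs a name, the transform,
# and its inverse so the axis labels can be back-transformed.
log2_trans <- trans_new(name = 'log2',
transform = function(x){ log2(x) },
inverse = function(x){ 2^x })
ggplot(ACS, aes(x=Age, y=Income)) +
geom_point() +
scale_y_continuous(trans=log2_trans)
```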
10\.3 Multi\-plot
-----------------
There are times that you must create a graphic that is composed of several sub\-graphs and think of it as one object. Unfortunately the mechanism that `ggplot2` gives for this is cumbersome, and it is usually easier to use a function called `multiplot`. The explanation I’ve heard for why this function wasn’t included in ggplot2 is that you should think about faceting first and only resort to `multiplot` if you have to. The function `multiplot` is included in a couple of packages, e.g. `Rmisc`, but I always just google ‘ggplot2 multiplot’ to get to the webpage \[[http://www.cookbook\-r.com/Graphs/Multiple\_graphs\_on\_one\_page\_(ggplot2\)/](http://www.cookbook-r.com/Graphs/Multiple_graphs_on_one_page_(ggplot2)/)]
```
# This example uses the ChickWeight dataset, which comes with ggplot2
# First plot
p1 <- ggplot(ChickWeight, aes(x=Time, y=weight, colour=Diet, group=Chick)) +
geom_line() +
ggtitle("Growth curve for individual chicks")
# Second plot
p2 <- ggplot(ChickWeight, aes(x=Time, y=weight, colour=Diet)) +
geom_point(alpha=.3) +
geom_smooth(alpha=.2, size=1) +
ggtitle("Fitted growth curve per diet")
# Third plot
p3 <- ggplot(subset(ChickWeight, Time==21), aes(x=weight, colour=Diet)) +
geom_density() +
ggtitle("Final weight, by diet")
```
Suppose that I want to lay out these three plots in an arrangement like so:
\\\[\\textrm{layout}\=\\left\[\\begin{array}{ccc}
1 \& 2 \& 2\\\\
1 \& 2 \& 2\\\\
1 \& 3 \& 3
\\end{array}\\right]\\]
where plot 1 is a tall, skinny plot on the left, plot 2 is more squarish, and plot 3 is short on the bottom right. This sort of table arrangement can be quite flexible if you have many rows and many columns, but generally we can get by with something with only a couple of rows/columns.
```
my.layout = cbind( c(1,1,1), c(2,2,3), c(2,2,3) )
Rmisc::multiplot( p1, p2, p3, layout=my.layout) # Package::FunctionName
```
10\.4 Themes
------------
A great deal of thought went into the default settings of ggplot2 to maximize the visual clarity of the graphs. However, some people believe the defaults for many of the tiny graphical settings are poor. You can modify each of these, but it is often easier to modify them all at once by selecting a different theme. The ggplot2 package includes several, `theme_bw()` and `theme_minimal()` being the two that I use most often. Below are a few examples:
```
Rmisc::multiplot( p1 + theme_bw(), # Black and white
p1 + theme_minimal(),
p1 + theme_dark(),
p1 + theme_light(),
cols=2 ) # two columns of graphs
```
There are more themes in the package `ggthemes`.
```
library(ggthemes)
Rmisc::multiplot( p1 + theme_stata(), # Stata-style theme
p1 + theme_economist(),
p1 + theme_fivethirtyeight(),
p1 + theme_excel(),
cols=2 ) # two columns of graphs
```
Almost everything you want to modify can be modified within the theme and you should check out the `ggplot2` documentation for more information and examples of how to modify different elements. \[[http://docs.ggplot2\.org/current/theme.html](http://docs.ggplot2.org/current/theme.html)]
10\.5 Exercises
---------------
1. We’ll next make some density plots that relate several factors to the birthweight of a child.
1. Load the `MASS` library, which includes the dataset `birthwt` containing information about 189 babies and their mothers.
2. Add better labels to the `race` and `smoke` variables using the following:
```
library(MASS)
library(dplyr)
birthwt <- birthwt %>% mutate(
race = factor(race, labels=c('White','Black','Other')),
smoke = factor(smoke, labels=c('No Smoke', 'Smoke')))
```
3. Graph a histogram of the birthweights `bwt` using `ggplot(birthwt, aes(x=bwt)) + geom_histogram()`.
4. Make separate graphs that denote whether a mother smoked during pregnancy using the `facet_grid()` command.
5. Perhaps race matters in relation to smoking. Make our grid of graphs vary with smoking status changing vertically, and race changing horizontally (that is the formula in `facet_grid()` should have smoking be the y variable and race as the x).
6. Remove `race` from the facet grid (so go back to the graph you had in part d). I’d like to next add an estimated density line to the graphs, but to do that I need to first change the y\-axis to be density (instead of counts), which we do by using `aes(y=..density..)` in the `ggplot()` aesthetics command.
7. Next we can add the estimated smooth density using the `geom_density()` command.
8. To really make this look nice, let’s change the fill color of the histograms to be something less dark; let’s use `fill='cornsilk'` and `color='grey60'`. To play with different colors that have names, check out the following: \[[http://www.stat.columbia.edu/\~tzheng/files/Rcolor.pdf](http://www.stat.columbia.edu/~tzheng/files/Rcolor.pdf)].
9. Change the order in which the histogram and the density line are added to the plot. Does it matter and which do you prefer?
2. Load the dataset `ChickWeight` and remind yourself what the data was using `?ChickWeight`. Using `facet_wrap()`, produce a scatter plot of weight vs age for each chick. Use color to distinguish the four different `Diet` treatments.
Chapter 11 Flow Control
=======================
Often it is necessary to write scripts that perform different actions depending on the data, or to automate a task that must be repeated many times. To address the first issue we will introduce the if statement and its closely related cousin if else. To address repeated tasks we will define two types of loops: a while loop and a for loop.
11\.1 Decision statements
-------------------------
An if statement takes on the following two formats:
```
# Simplest version
if( logical ){
expression # can be many lines of code
}
# Including the optional else
if( logical ){
expression
}else{
expression
}
```
where the else part is optional.
Suppose that I have a piece of code that generates a random variable from the Binomial distribution with one sample (essentially just flipping a coin), but I’d like to label the result heads or tails instead of one or zero.
```
# Flip the coin, and we get a 0 or 1
result <- rbinom(n=1, size=1, prob=0.5)
result
```
```
## [1] 0
```
```
# convert the 0/1 to Tail/Head
if( result == 0 ){
result <- 'Tail'
}else{
result <- 'Head'
}
result
```
```
## [1] "Tail"
```
What is happening is that the test expression inside the `if()` is evaluated, and if it is true, then the subsequent statement is executed. If the test expression is false, the next statement is skipped. The way the R language is defined, only the first statement after the if statement is executed (or skipped) depending on the test expression. If we want multiple statements to be executed (or skipped), we will wrap those expressions in curly brackets `{ }`. I find it easier to follow the `if else` logic when I see the curly brackets, so I use them even when there is only one expression to be executed. Also notice that the RStudio editor indents the code that might be skipped, to help give you a hint that it will be conditionally evaluated.
```
# Flip the coin, and we get a 0 or 1
result <- rbinom(n=1, size=1, prob=0.5)
result
```
```
## [1] 1
```
```
# convert the 0/1 to Tail/Head
if( result == 0 ){
result <- 'Tail'
print(" in the if statement, got a Tail! ")
}else{
result <- 'Head'
print("In the else part!")
}
```
```
## [1] "In the else part!"
```
```
result
```
```
## [1] "Head"
```
Run this code several times until you get both cases several times.
Finally we can nest if else statements together to allow you to write code that has many different execution routes.
```
# randomly grab a number between 0,5 and round it up to 1,2, ..., 5
birth.order <- ceiling( runif(1, 0,5) )
if( birth.order == 1 ){
print('The first child had more rules to follow')
}else if( birth.order == 2 ){
print('The second child was ignored')
}else if( birth.order == 3 ){
print('The third child was spoiled')
}else{
# if birth.order is anything other than 1, 2 or 3
print('No more unfounded generalizations!')
}
```
```
## [1] "The first child had more rules to follow"
```
To provide a more statistically interesting example of when we might use an if else statement, consider the calculation of a p\-value in a 1\-sample t\-test with a two\-sided alternative. Recall the calculation was:
* If the test statistic t is negative, then p\-value \= \\(2\*P\\left(T\_{df} \\le t \\right)\\)
* If the test statistic t is positive, then p\-value \= \\(2\*P\\left(T\_{df} \\ge t \\right)\\).
```
# create some fake data
n <- 20 # suppose this had a sample size of 20
x <- rnorm(n, mean=2, sd=1)
# testing H0: mu = 0 vs Ha: mu =/= 0
t <- ( mean(x) - 0 ) / ( sd(x)/sqrt(n) )
df <- n-1
if( t < 0 ){
p.value <- 2 * pt(t, df)
}else{
p.value <- 2 * (1 - pt(t, df))
}
# print the resulting p-value
p.value
```
```
## [1] 6.717254e-07
```
This sort of logic is necessary for the calculation of p\-values and so something similar is found somewhere inside the `t.test()` function.
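As a sanity check (a small sketch, assuming `x` from the chunk above is still in memory), we can compare our hand\-computed p\-value to the one `t.test()` reports:
```
# t.test() performs the same two-sided calculation internally
t.test(x, mu=0)$p.value
```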
When my code expressions in the if/else sections are short, I can use the command `ifelse()`, which is a little more space\-efficient and responds correctly to vectors. The syntax is `ifelse( logical.expression, TrueValue, FalseValue )`.
```
x <- 1:10
ifelse( x <= 5, 'Small Value', 'Large Value')
```
```
## [1] "Small Value" "Small Value" "Small Value" "Small Value" "Small Value"
## [6] "Large Value" "Large Value" "Large Value" "Large Value" "Large Value"
```
11\.2 Loops
-----------
It is often desirable to write code that does the same thing over and over, relieving you of the burden of repetitive tasks. To do this we’ll need a way to tell the computer to repeat some section of code over and over. However we’ll usually want something small to change each time through the loop and some way to tell the computer how many times to run the loop or when to stop repeating.
### 11\.2\.1 `while` Loops
The basic form of a `while` loop is as follows:
```
# while loop with 1 line
while( logical )
expression # One line of R-code
# while loop with multiple lines to be repeated
while( logical ){
expression1 # multiple lines of R code
expression2
}
```
The computer will first evaluate the test expression. If it is true, it will execute the code once. It will then evaluate the test expression again to see if it is still true, and if so it will execute the code section a second time. The computer will continue with this process until the test expression finally evaluates as false.
```
x <- 2
while( x < 100 ){
x <- 2*x
print(x)
}
```
```
## [1] 4
## [1] 8
## [1] 16
## [1] 32
## [1] 64
## [1] 128
```
It is very common to forget to update the variable used in the test expression. In that case the test expression will never be false and the computer will never stop. This unfortunate situation is called an *infinite loop*.
```
# Example of an infinite loop! Do not Run!
x <- 1
while( x < 10 ){
print(x)
}
```
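A fixed version (a minimal sketch) just needs to update `x` somewhere inside the loop body so that the test expression can eventually become false:
```
x <- 1
while( x < 10 ){
print(x)
x <- x + 1 # update x so the loop eventually stops
}
```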
### 11\.2\.2 `for` Loops
Often we know ahead of time exactly how many times we should go through the loop. We could use a `while` loop, but there is also a second construct called a `for` loop that is quite useful.
The format of a for loop is as follows:
```
for( index in vector )
expression
for( index in vector ){
expression1
expression2
}
```
where the `index` variable will take on each value in `vector` in succession, and then the statement will be evaluated. As always, the statement can be multiple statements wrapped in curly brackets `{}`.
```
for( i in 1:5 ){
print(i)
}
```
```
## [1] 1
## [1] 2
## [1] 3
## [1] 4
## [1] 5
```
What is happening is that `i` starts out as the first element of the vector `c(1,2,3,4,5)`; in this case, `i` starts out as 1\. After `i` is assigned, the statements in the curly brackets are then evaluated. Once we get to the end of those statements, `i` is reassigned to the next element of the vector `c(1,2,3,4,5)`. This process is repeated until `i` has been assigned to each element of the given vector. It is somewhat traditional to use `i` and `j` as the index variables, but they could be anything, as the sketch below shows.
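The vector being looped over doesn’t have to be numeric, either. A small sketch (the names here are made up purely for illustration):
```
# The index variable takes on each string in turn
for( person in c('Alice','Bob','Carol') ){
print( paste('Hello,', person) )
}
```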
We can use this loop to calculate the first \\(10\\) elements of the Fibonacci sequence. Recall that the Fibonacci sequence is defined by \\(F\_{n}\=F\_{n\-1}\+F\_{n\-2}\\) where \\(F\_{1}\=0\\) and \\(F\_{2}\=1\\).
```
F <- rep(0, 10) # initialize a vector of zeros
F[1] <- 0 # F[1] should be zero
F[2] <- 1 # F[2] should be 1
cat('F = ', F, '\n') # concatenate for pretty output; Just for show
```
```
## F = 0 1 0 0 0 0 0 0 0 0
```
```
for( n in 3:10 ){
F[n] <- F[n-1] + F[n-2] # define based on the prior two values
cat('F = ', F, '\n') # show the current step of the loop
}
```
```
## F = 0 1 1 0 0 0 0 0 0 0
## F = 0 1 1 2 0 0 0 0 0 0
## F = 0 1 1 2 3 0 0 0 0 0
## F = 0 1 1 2 3 5 0 0 0 0
## F = 0 1 1 2 3 5 8 0 0 0
## F = 0 1 1 2 3 5 8 13 0 0
## F = 0 1 1 2 3 5 8 13 21 0
## F = 0 1 1 2 3 5 8 13 21 34
```
For a more statistical case where we might want to use a loop, consider creating a bootstrap estimate of a sampling distribution.
```
library(dplyr)
library(ggplot2)
SampDist <- data.frame() # Make a data frame to store the means
for( i in 1:1000 ){
SampDist <- trees %>%
sample_frac(replace=TRUE) %>%
dplyr::summarise(xbar=mean(Height)) %>% # 1x1 data frame
rbind( SampDist )
}
ggplot(SampDist, aes(x=xbar)) +
geom_histogram()
```
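Growing `SampDist` with `rbind()` on every pass forces R to copy the whole data frame each iteration. For large simulations, a sketch of a pre\-allocated alternative (using the same `trees` data) is usually much faster:

```
# reserve space for all 1000 bootstrap means up front
xbar <- numeric(1000)
for( i in 1:1000 ){
  boot.rows <- sample( 1:nrow(trees), replace=TRUE ) # resample row indices
  xbar[i] <- mean( trees$Height[boot.rows] ) # store the i-th bootstrap mean
}
SampDist <- data.frame(xbar = xbar)
```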
11\.3 Exercises
---------------
1. The \\(Uniform\\left(a,b\\right)\\) distribution is defined on \\(x \\in \[a,b]\\) and represents a random variable that takes on any value between `a` and `b` with equal probability. Technically, since there are an infinite number of values between `a` and `b`, each individual value has probability 0 of being selected, and I should say each interval of width \\(d\\) has equal probability. It has the density function \\\[f\\left(x\\right)\=\\begin{cases}
\\frac{1}{b\-a} \& \\;\\;\\;\\;a\\le x\\le b\\\\
0 \& \\;\\;\\;\\;\\textrm{otherwise}
\\end{cases}\\]
The R function `dunif()`
```
a <- 4 # The min and max values we will use for this example
b <- 10 # Could be anything, but we need to pick something
x <- runif(n=1, 0,10) # one random value between 0 and 10
# what is value of f(x) at the randomly selected x value?
dunif(x, a, b)
```
```
## [1] 0.1666667
```
evaluates this density function for the above defined values of x, a, and b. Somewhere in that function, there is a chunk of code that evaluates the density for arbitrary values of \\(x\\). Run this code a few times and notice sometimes the result is \\(0\\) and sometimes it is \\(1/(10\-4\)\=0\.16666667\\).
Write a sequence of statements that utilizes an if statement to appropriately calculate the density of `x`, assuming that `a`, `b`, and `x` are given to you, but your code won’t know if `x` is between `a` and `b`. That is, your code needs to figure out whether it is and give either `1/(b-a)` or `0`.
1. We could write a set of if/else statements
```
a <- 4
b <- 10
x <- runif(n=1, 0,10) # one random value between 0 and 10
x
if( x < a ){
result <- ???
}else if( x <= b ){
result <- ???
}else{
result <- ???
}
```
Replace the `???` with the appropriate value, either 0 or \\(1/\\left(b\-a\\right)\\).
2. We could perform the logical comparison all in one expression. Recall that we can use `&` to mean “and” and `|` to mean “or”. In the following code chunks, replace the `???` with either `&` or `|` (or, for the `ifelse()` version, with the appropriate density values) to produce the correct result.
1. ```
x <- runif(n=1, 0,10) # one random value between 0 and 10
if( (a<=x) & (x<=b) ){
result <- 1/(b-a)
}else{
result <- 0
}
print(paste('x=',round(x,digits=3), ' result=', round(result,digits=3)))
```
2. ```
x <- runif(n=1, 0,10) # one random value between 0 and 10
if( (x<a) ??? (b<x) ){
result <- 0
}else{
result <- 1/(b-a)
}
print(paste('x=',round(x,digits=3), ' result=', round(result,digits=3)))
```
3. ```
x <- runif(n=1, 0,10) # one random value between 0 and 10
result <- ifelse( a<x & x<b, ???, ??? )
print(paste('x=',round(x,digits=3), ' result=', round(result,digits=3)))
```
2. I often want to repeat some section of code some number of times. For example, I might want to create a bunch of plots that compare the density of a t\-distribution with specified degrees of freedom to a standard normal distribution.
```
library(ggplot2)
df <- 4
N <- 1000
x <- seq(-4, 4, length=N)
data <- data.frame(
x = c(x,x),
y = c(dnorm(x), dt(x, df)),
type = c( rep('Normal',N), rep('T',N) ) )
# make a nice graph
myplot <- ggplot(data, aes(x=x, y=y, color=type, linetype=type)) +
geom_line() +
labs(title = paste('Std Normal vs t with', df, 'degrees of freedom'))
# actually print the nice graph we made
print(myplot)
```
1. Use a for loop to create similar graphs for degrees of freedom \\(2,3,4,\\dots,29,30\\).
2. In retrospect, perhaps we didn’t need to produce all of those. Rewrite your loop so that we only produce graphs for \\(\\left\\{ 2,3,4,5,10,15,20,25,30\\right\\}\\) degrees of freedom. *Hint: you can just modify the vector in the `for` statement to include the desired degrees of freedom.*
3. The `for` loop is usually the most natural one to use, but occasionally it is too cumbersome and a different sort of loop is appropriate. One example is taking a random sample from a truncated distribution. For example, I might want to take a sample from a normal distribution with mean \\(\\mu\\) and standard deviation \\(\\sigma\\) but for some reason need the answer to be larger than zero. One solution is to just sample from the given normal distribution until I get a value that is bigger than zero.
```
mu <- 0
sigma <- 1
x <- rnorm(1, mean=mu, sd=sigma)
# start the while loop checking if x < 0
# generate a new x value
# end the while loop
```
Replace the comments in the above code so that x is a random observation from the truncated normal distribution.
Chapter 12 User Defined Functions
=================================
It is very important to be able to define a piece of programming logic that is repeated often. For example, I don’t want to have to always program the mathematical code for calculating the sample variance of a vector of data. Instead I just want to call a function that does everything for me and I don’t have to worry about the details.
While hiding the computational details is nice, fundamentally writing functions allows us to think about our problems at a higher layer of abstraction. For example, most scientists just want to run a t\-test on their data and get the appropriate p\-value out; they want to focus on their problem and not how to calculate what the appropriate degrees of freedom are. Functions let us do that.
12\.1 Basic function definition
-------------------------------
In the course of your analysis, it can be useful to define your own functions. The format for defining your own function is
```
function.name <- function(arg1, arg2, arg3){
statement1
statement2
}
```
where `arg1` is the first argument passed to the function and `arg2` is the second.
To illustrate how to define your own function, we will define a variance calculating function.
```
# define my function
my.var <- function(x){
n <- length(x) # calculate sample size
xbar <- mean(x) # calculate sample mean
SSE <- sum( (x-xbar)^2 ) # calculate sum of squared error
v <- SSE / ( n - 1 ) # "average" squared error
return(v) # result of function is v
}
```
```
# create a vector that I wish to calculate the variance of
test.vector <- c(1,2,2,4,5)
# calculate the variance using my function
calculated.var <- my.var( test.vector )
calculated.var
```
```
## [1] 2.7
```
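We can sanity\-check the result against R’s built\-in `var()` function, which computes the same quantity:

```
var( test.vector )   # base R's variance function should agree with my.var()
```

```
## [1] 2.7
```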
Notice that even though I defined my function using `x` as my vector of data, and passed my function something named `test.vector`, R does the appropriate renaming. If my function doesn’t modify its input arguments, then R just passes a pointer to the inputs to avoid copying large amounts of data when you call a function. If your function modifies its input, then R will take the input data, copy it, and then pass that new copy to the function. This means that a function cannot modify its arguments. In Computer Science parlance, R does not allow for procedural side effects. Think of the variable `x` as a placeholder, with it being replaced by whatever gets passed into the function.
When I call a function, the function might cause something to happen (e.g. draw a plot), or it might do some calculations whose result is returned by the function, and we might want to save that. Inside a function, if I want the result of some calculation saved, I return the result as the output of the function. The way I specify to do this is via the `return` statement. (Actually R doesn’t completely require this. But the alternative method is less intuitive and I strongly recommend using the `return()` statement for readability.)
By writing a function, I can use the same chunk of code repeatedly. This means that I can do all my tedious calculations inside the function and just call the function whenever I want and happily ignore the details. Consider the function `t.test()` which we have used to do all the calculations in a t\-test. We could write a similar function using the following code:
```
# define my function
one.sample.t.test <- function(input.data, mu0){
n <- length(input.data)
xbar <- mean(input.data)
s <- sd(input.data)
t <- (xbar - mu0)/(s / sqrt(n))
if( t < 0 ){
p.value <- 2 * pt(t, df=n-1)
}else{
p.value <- 2 * (1-pt(t, df=n-1))
}
# we haven't addressed how to print things in a organized
# fashion, the following is ugly, but works...
# Notice that this function returns a character string
# with the necessary information in the string.
return( paste('t =', t, ' and p.value =', p.value) )
}
```
```
# create a vector that I wish apply a one-sample t-test on.
test.data <- c(1,2,2,4,5,4,3,2,3,2,4,5,6)
one.sample.t.test( test.data, mu0=2 )
```
```
## [1] "t = 3.15682074900988 and p.value = 0.00826952416706961"
```
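As a sanity check, we can run the built\-in `t.test()` on the same data; the t statistic and p\-value in its printed summary should match the ones our function produced.

```
# the built-in function does the same calculation (plus a lot more)
t.test( test.data, mu=2 )
```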
Nearly every function we use to do data analysis is written in a similar fashion. Somebody decided it would be convenient to have a function that did an ANOVA analysis, so they wrote something similar to the above function, but a bit grander in scope. Even if you don’t end up writing any of your own functions, knowing how to will help you understand why certain functions you use are designed the way they are.
12\.2 Parameter Defaults
------------------------
When I define a function, I can let it take as many arguments as I want, and I can also give default values to the arguments. For example, we can define the normal density function using the following code, which gives a default mean of \\(0\\) and default standard deviation of \\(1\\).
```
# a function that defines the shape of a normal distribution.
# by including mu=0, we give a default value that the function
# user can override
dnorm.alternate <- function(x, mu=0, sd=1){
out <- 1 / (sd * sqrt(2*pi)) * exp( -(x-mu)^2 / (2 * sd^2) )
return(out)
}
```
```
# test the function to see if it works
dnorm.alternate(1)
```
```
## [1] 0.2419707
```
```
dnorm.alternate(1, mu=1)
```
```
## [1] 0.3989423
```
```
# Lets test the function a bit more by drawing the height
# of the normal distribution a lots of different points
# ... First the standard normal!
x <- seq(-3, 3, length=601)
plot( x, dnorm.alternate(x) ) # use default mu=0, sd=1
```
```
# next a normal with mean 1, and standard deviation 1
plot( x, dnorm.alternate(x, mu=1) ) # override mu, but use sd=1
```
Many functions that we use have defaults that we don’t normally mess with. For example, the function `mean()` has an option that specifies what it should do if your vector of data has missing data. The common solution is to remove those observations, but we might instead want to say that the mean is unknown if one component of it was unknown.
```
x <- c(1,2,3,NA) # fourth element is missing
mean(x) # default is to return NA if any element is missing
```
```
## [1] NA
```
```
mean(x, na.rm=TRUE) # Only average the non-missing data
```
```
## [1] 2
```
As you look at the help pages for different functions, you’ll see in the function definitions what the default values are. For example, the function `mean` has another option, `trim`, which specifies what percent of the data to trim at the extremes. Because we would expect mean to not do any trimming by default, the authors have appropriately defined the default amount of trimming to be zero via the definition `trim=0`.
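As a quick illustration of the `trim` option (the data values here are made up for the example):

```
y <- c(1, 2, 3, 4, 100)  # one wild observation
mean(y)                  # the outlier inflates the mean
```

```
## [1] 22
```

```
mean(y, trim=0.2)        # drop 20% from each end; averages 2, 3, 4
```

```
## [1] 3
```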
12\.3 Ellipses
--------------
When writing functions, I occasionally have a situation where I call function `a()` and function `a()` needs to call another function, say `b()`, and I want to pass an unusual parameter to that function. To do this, I’ll use a set of three periods called an *ellipsis*. These represent a set of parameter values that will be passed along to a subsequent function. For example, the following code takes the result of a simple linear regression and plots the data along with the regression line and confidence region (basically I’m recreating a function that does the same thing as ggplot2’s geom\_smooth() layer). I might not want to specify (and give good defaults for) every single graphical parameter that the `plot()` function supports. Instead I’ll just use the `...` argument and pass any additional parameters to the plot function.
```
# a function that draws the regression line and confidence interval
# notice it doesn't return anything... all it does is draw a plot
show.lm <- function(m, interval.type='confidence', fill.col='light grey', ...){
  x <- m$model[,2]     # extract the predictor variable
  y <- m$model[,1]     # extract the response
  pred <- predict(m, interval=interval.type)
  plot(x, y, ...)      # any extra arguments are passed on to plot()
  # polygon() fills the region defined by a set of vertices, so we trace
  # the lower bound left-to-right and then the upper bound in reverse
  polygon( c(x, rev(x)),
           c(pred[,'lwr'], rev(pred[,'upr'])),
           col=fill.col )
  lines(x, pred[,'fit'])   # add the fitted regression line
  points(x, y)             # redraw the points on top of the ribbon
}
```
This function looks daunting, but we can experiment to see what it does.
```
# first define a simple linear model from our cherry tree data
m <- lm( Volume ~ Girth, data=trees )
# call the function with no extraneous parameters
show.lm( m )
```
```
# Pass arguments that will just be passed along to the plot function
show.lm( m, xlab='Girth', ylab='Volume',
main='Relationship between Girth and Volume')
```
This type of trick is done commonly. Look at the help files for `hist()` and `qqnorm()` and you’ll see the ellipses used to pass graphical parameters along to sub\-functions. Functions like `lm()` use the ellipses to pass arguments to the low\-level regression fitting functions that do the actual calculations. By only exposing these parameters via the ellipses, most users won’t be tempted to mess with them, but experts who know the nitty\-gritty details can still modify them.
12\.4 Function Overloading
--------------------------
Frequently the user wants to inspect the results of some calculation and display a variable or object to the screen. The `print()` function does exactly that, but it acts differently for matrices than it does for vectors. It acts especially differently for lists that I obtained from a call like `lm()` or `aov()`.
The reason that the print function can act differently depending on the object type that I pass it is because the function `print()` is *overloaded*. What this means is that there is a `print.lm()` function that is called whenever I call `print(obj)` when `obj` is the output of an `lm()` command.
Recall that we initially introduced a few different classes of data, Numerical, Factors, and Logicals. It turns out that I can create more types of classes.
```
x <- 1:10
y <- 3 + 2*x + rnorm(10)
h <- hist(y)          # h is an object of class "histogram"
```
```
class(h)
```
```
## [1] "histogram"
```
```
model <- lm( y ~ x ) # model is something of class "lm"
class(model)
```
```
## [1] "lm"
```
Many common functions such as `plot()` are overloaded so that when I call the plot function with an object, it will in turn call `plot.lm()` or `plot.histogram()` as appropriate. When building statistical models I am often interested in different quantities and would like to get those regardless of the model I am using. Below are a list of functions that work whether I fit a model via `aov()`, `lm()`, `glm()`, or `gam()`.
| Quantity | Function Name |
| --- | --- |
| Residuals | `resid( obj )` |
| Model Coefficients | `coef( obj )` |
| Summary Table | `summary( obj )` |
| ANOVA Table | `anova( obj )` |
| AIC value | `AIC( obj )` |
For the residual function, there exist a `resid.lm()` function and a `resid.gam()` function, and it is these that are called when we run the command `resid( obj )`.
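To see how this dispatch works from the other side, here is a minimal sketch that defines a print method for a class we invent ourselves (the class name `myclass` and its contents are made up for illustration):

```
# create an object and stamp it with a made-up class
obj <- list(estimate=2.7, n=5)
class(obj) <- 'myclass'

# define print.myclass(); print(obj) now dispatches to this method
print.myclass <- function(x, ...){
  cat('Estimate:', x$estimate, 'based on n =', x$n, '\n')
}
print(obj)
```

```
## Estimate: 2.7 based on n = 5
```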
12\.5 Scope
-----------
Consider the case where we make a function that calculates the trimmed mean. A good implementation of the function is given here.
```
# Define a function for the trimmed mean
# x: vector of values to be averaged
# k: the number of elements to trim on either side
trimmed.mean <- function(x, k=0){
x <- sort(x) # arrange the input according magnitude
n <- length(x) # n = how many observations
if( k > 0){
x <- x[c(-1*(1:k), -1*((n-k+1):n))] # remove first k, last k
}
tm <- sum(x) / length(x) # mean of the remaining observations
return( tm )
}
x <- c(10:1,50)  # 10, 9, 8, ..., 1, 50
output <- trimmed.mean(x, k=2)
output
```
```
## [1] 6
```
```
x # x is unchanged
```
```
## [1] 10 9 8 7 6 5 4 3 2 1 50
```
Notice that even though I passed `x` into the function and then sorted it, `x` remained unsorted outside the function. When I modified `x`, R made a copy of `x` and sorted the *copy* that belonged to the function so that I didn’t modify a variable that was defined outside of the scope of my function. But what if I didn’t bother with passing `x` and `k`? If I don’t pass in the values of `x` and `k`, then R will try to find them in my current workspace.
```
# a horribly defined function that has no parameters
# but still accesses something called "x"
trimmed.mean <- function(){
x <- sort(x)
n <- length(x)
if( k > 0){
x <- x[c(-1*(1:k), -1*((n-k+1):n))]
}
tm <- sum(x)/length(x)
return( tm )
}
x <- c( 1:10, 50 ) # data to trim
k <- 2
trimmed.mean() # amazingly this still works
```
```
## [1] 6
```
```
# but what if k wasn't defined?
rm(k) # remove k
trimmed.mean() # now the function can't find anything named k and throws an error.
```
```
## Error in trimmed.mean(): object 'k' not found
```
So if I forget to pass some variable into a function, but it happens to be defined outside the function, R will find it. It is not good practice to rely on this, because then how would I take the trimmed mean of a vector named `z`? Worse yet, what if the variable `x` changes between runs of your function? Something that should consistently give the same result keeps changing. This is especially insidious when you have defined most of the arguments the function uses, but missed one. Your function happily goes to the next higher scope and sometimes finds it.
When executing a function, R will have access to all the variables defined in the function, all the variables defined in the function that called your function and so on until the base workspace. However, you should never let your function refer to something that is not either created in your function or passed in via a parameter.
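The reverse direction is safer: variables created inside a function vanish when the function returns. A small sketch (assuming nothing named `inside` already exists in the workspace):

```
f <- function(){
  inside <- 42       # a local variable, visible only within f()
  return(inside)
}
f()                  # returns 42
exists('inside')     # FALSE: the local variable died with the function call
```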
12\.6 Exercises
---------------
1. Write a function that calculates the density function of a Uniform continuous variable on the interval \\(\\left(a,b\\right)\\). The function is defined as \\\[f\\left(x\\right)\=\\begin{cases}
\\frac{1}{b\-a} \& \\;\\;\\;\\textrm{if }a\\le x\\le b\\\\
0 \& \\;\\;\\;\\textrm{otherwise}
\\end{cases}\\] which is a flat line of height \\(1/(b\-a)\\) over the interval \\(\[a,b]\\) and 0 elsewhere.
We want to write a function `duniform(x, a, b)` that takes an arbitrary value of `x` and parameters a and b and return the appropriate height of the density function. For various values of `x`, `a`, and `b`, demonstrate that your function returns the correct density value. Ideally, your function should be able to take a vector of values for `x` and return a vector of densities.
2. I very often want to provide default values to a parameter that I pass to a function. For example, it is so common for me to use the `pnorm()` and `qnorm()` functions on the standard normal, that R will automatically use `mean=0` and `sd=1` parameters unless you tell R otherwise. To get that behavior, we just set the default parameter values in the definition. When the function is called, the user specified value is used, but if none is specified, the defaults are used. Look at the help page for the functions `dunif()`, and notice that there are a number of default parameters. For your `duniform()` function provide default values of `0` and `1` for `a` and `b`. Demonstrate that your function is appropriately using the given default values.
Chapter 13 String Manipulation
==============================
Strings make up a very important class of data. Data being read into R often come in the form of character strings where different parts might mean different things. For example a sample ID of “R1\_P2\_C1\_2012\_05\_28” might represent data from Region 1, Park 2, Camera 1, taken on May 28, 2012\. It is important that we have a set of utilities that allow us to split and combine character strings in an easy and consistent fashion.
Unfortunately, the utilities included in the base version of R are somewhat inconsistent and were not designed to work nicely together. Hadley Wickham, the developer of `ggplot2` and `dplyr`, has this to say:
> “R provides a solid set of string operations, but because they have grown organically over time, they can be inconsistent and a little hard to learn. Additionally, they lag behind the string operations in other programming languages, so that some things that are easy to do in languages like Ruby or Python are rather hard to do in R.” – Hadley Wickham
In this chapter we will first introduce the most commonly used functions from the base version of R that you might use or see in other people’s code. Second, we introduce Dr Wickham’s `stringr` package, which provides many useful functions that operate in a consistent manner.
13\.1 Base function
-------------------
### 13\.1\.1 `paste()`
The most basic thing we will want to do is to combine two strings or to combine a string with a numerical value. The `paste()` command takes one or more R objects, converts them to character strings, and then pastes them together to form one or more character strings. It has the form:
```
paste( ..., sep = ' ', collapse = NULL )
```
The `...` piece means that we can pass any number of objects to be pasted together. The `sep` argument gives the string that separates the strings being joined, and the `collapse` argument specifies whether the resulting vector of strings should be collapsed into a single string after the pasting is done.
Suppose we want to combine the strings “PeanutButter” and “Jelly”; then we could execute:
```
paste( "PeanutButter", "Jelly" )
```
```
## [1] "PeanutButter Jelly"
```
Notice that without specifying the separator character, R chose to put a space between the two strings. We could specify whatever we wanted:
```
paste( "Hello", "World", sep='_' )
```
```
## [1] "Hello_World"
```
Also we can combine strings with numerical values
```
paste( "Pi is equal to", pi )
```
```
## [1] "Pi is equal to 3.14159265358979"
```
We can combine vectors of similar or different lengths as well. By default R assumes that you want to produce a vector of character strings as output.
```
paste( "n =", c(5,25,100) )
```
```
## [1] "n = 5" "n = 25" "n = 100"
```
```
first.names <- c('Robb','Stannis','Daenerys')
last.names <- c('Stark','Baratheon','Targaryen')
paste( first.names, last.names)
```
```
## [1] "Robb Stark" "Stannis Baratheon" "Daenerys Targaryen"
```
If we want `paste()` to produce just a single string of output, use the `collapse=` argument to paste together each element of the output vector (separated by the `collapse` character).
```
paste( "n =", c(5,25,100) ) # Produces 3 strings
```
```
## [1] "n = 5" "n = 25" "n = 100"
```
```
paste( "n =", c(5,25,100), collapse=':' ) # collapses output into one string
```
```
## [1] "n = 5:n = 25:n = 100"
```
```
paste(first.names, last.names, sep='.', collapse=' : ')
```
```
## [1] "Robb.Stark : Stannis.Baratheon : Daenerys.Targaryen"
```
Notice we could use the `paste()` command with the `collapse` option to combine a vector of character strings together.
```
paste(first.names, collapse=':')
```
```
## [1] "Robb:Stannis:Daenerys"
```
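Returning to the sample\-ID example from the start of the chapter, `paste()` gives us a clean way to build such IDs from their components (the component values here are made up):

```
region <- 1; park <- 2; camera <- 1
# paste0() is paste() with sep='' -- handy for gluing a prefix to a number
paste( paste0('R',region), paste0('P',park), paste0('C',camera),
       '2012', '05', '28', sep='_' )
```

```
## [1] "R1_P2_C1_2012_05_28"
```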
13\.2 Package `stringr`: basic operations
-----------------------------------------
> The goal of stringr is to make a consistent user interface to a suite of functions to manipulate strings. “(stringr) is a set of simple wrappers that make R’s string functions more consistent, simpler and easier to use. It does this by ensuring that: function and argument names (and positions) are consistent, all functions deal with NA’s and zero length character appropriately, and the output data structures from each function matches the input data structures of other functions.” \- Hadley Wickham
We’ll investigate the most commonly used function but there are many we will ignore.
| Function | Description |
| --- | --- |
| `str_c()` | string concatenation, similar to paste |
| `str_length()` | number of characters in the string |
| `str_sub()` | extract a substring |
| `str_trim()` | remove leading and trailing whitespace |
| `str_pad()` | pad a string with empty space to make it a certain length |
### 13\.2\.1 Concatenating with `str_c()` or `str_join()`
The first thing we do is to concatenate two strings or two vectors of strings, similarly to the `paste()` command. The `str_c()` and `str_join()` functions are two names for the exact same function, but `str_join()` might be a more natural verb to use and remember. The syntax is:
```
str_c( ..., sep='', collapse=NULL)
```
You can think of the inputs as building a matrix of strings, with each input creating a column of the matrix. For each row, `str_c()` first joins all the columns (using the separator character given in `sep`) into a single column of strings. If the `collapse` argument is non\-NULL, the function then takes the resulting vector and joins its elements together using `collapse` as the separator character.
```
# load the stringr library
library(stringr)
# envisioning the matrix of strings
cbind(first.names, last.names)
```
```
## first.names last.names
## [1,] "Robb" "Stark"
## [2,] "Stannis" "Baratheon"
## [3,] "Daenerys" "Targaryen"
```
```
# join the columns together
full.names <- str_c( first.names, last.names, sep='.')
cbind( first.names, last.names, full.names)
```
```
## first.names last.names full.names
## [1,] "Robb" "Stark" "Robb.Stark"
## [2,] "Stannis" "Baratheon" "Stannis.Baratheon"
## [3,] "Daenerys" "Targaryen" "Daenerys.Targaryen"
```
```
# Join each of the rows together separated by collapse
str_c( first.names, last.names, sep='.', collapse=' : ')
```
```
## [1] "Robb.Stark : Stannis.Baratheon : Daenerys.Targaryen"
```
### 13\.2\.2 Calculating string length with `str_length()`
The `str_length()` function calculates the length of each string in the vector of strings passed to it.
```
text <- c('WordTesting', 'With a space', NA, 'Night')
str_length( text )
```
```
## [1] 11 12 NA 5
```
Notice that `str_length()` correctly interprets the missing data as missing and that the length ought to also be missing.
### 13\.2\.3 Extracting substrings with `str_sub()`
If we know we want to extract the \\(3^{rd}\\) through \\(6^{th}\\) letters in a string, this function will grab them.
```
str_sub(text, start=3, end=6)
```
```
## [1] "rdTe" "th a" NA "ght"
```
If a given string isn’t long enough to contain all the requested indices, `str_sub()` returns only the letters that were there (as in the above case for “Night”).
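As an aside, `str_sub()` also accepts negative positions, which count backwards from the end of each string. A minimal sketch using the `text` vector above:
```
str_sub(text, start=-5) # the last five characters of each string
```
```
## [1] "sting" "space" NA      "Night"
```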
### 13\.2\.4 Pad a string with `str_pad()`
Sometimes we want to make every string in a vector the same length to facilitate display or to create a uniform system of ID numbers. The `str_pad()` function will add spaces (or another pad character) at either the beginning or end of every string as appropriate.
```
str_pad(first.names, width=8)
```
```
## [1] " Robb" " Stannis" "Daenerys"
```
```
str_pad(first.names, width=8, side='right', pad='*')
```
```
## [1] "Robb****" "Stannis*" "Daenerys"
```
### 13\.2\.5 Trim a string with `str_trim()`
This removes any leading or trailing whitespace, where whitespace is defined as spaces `' '`, tabs `\t`, or newlines `\n`.
```
text <- ' Some text. \n '
print(text)
```
```
## [1] " Some text. \n "
```
```
str_trim(text)
```
```
## [1] "Some text."
```
13\.3 Package `stringr`: Pattern Matching
-----------------------------------------
The previous commands are all quite useful, but the most powerful string operation is to take a string and match some pattern within it. The following commands are available within `stringr`.
| Function | Description |
| --- | --- |
| `str_detect()` | Detect if a pattern occurs in input string |
| `str_locate()` `str_locate_all()` | Locates the first (or all) positions of a pattern. |
| `str_extract()` `str_extract_all()` | Extracts the first (or all) substrings corresponding to a pattern |
| `str_replace()` `str_replace_all()` | Replaces the matched substring(s) with a new pattern |
| `str_split()` `str_split_fixed()` | Splits the input string based on the supplied pattern |
We will first examine these functions using very simple pattern matching, where we match a specific fixed string. For most people, this is as complex as we need.
Suppose that we have a vector of strings that contain a date in the form “2012\-May\-27” and we want to manipulate them to extract certain information.
```
test.vector <- c('2008-Feb-10', '2010-Sept-18', '2013-Jan-11', '2016-Jan-2')
```
### 13\.3\.1 Detecting a pattern using str\_detect()
Suppose we want to know which dates are in September. We want to detect if the pattern “Sept” occurs in the strings. It is important that I used fixed(“Sept”) in this code to “turn off” the complicated regular expression matching rules and just look for exactly what I specified.
```
str_detect( test.vector, pattern=fixed('Sept') )
```
```
## [1] FALSE TRUE FALSE FALSE
```
Here we see that the second string in the test vector included the substring “Sept” but none of the others did.
### 13\.3\.2 Locating a pattern using str\_locate()
To figure out where the “\-” characters are, we can use the `str_locate()` function.
```
str_locate(test.vector, pattern=fixed('-') )
```
```
## start end
## [1,] 5 5
## [2,] 5 5
## [3,] 5 5
## [4,] 5 5
```
which shows that the first dash occurs as the \\(5^{th}\\) character in each string. If we wanted all the dashes in each string, the following works.
```
str_locate_all(test.vector, pattern=fixed('-') )
```
```
## [[1]]
## start end
## [1,] 5 5
## [2,] 9 9
##
## [[2]]
## start end
## [1,] 5 5
## [2,] 10 10
##
## [[3]]
## start end
## [1,] 5 5
## [2,] 9 9
##
## [[4]]
## start end
## [1,] 5 5
## [2,] 9 9
```
The output of `str_locate_all()` is a list of matrices that gives the start and end of each match. Using this information, we could grab the Year/Month/Day information out of each of the dates. We won’t fully do that here because it will be easier using `str_split()`, but a quick sketch of the idea follows.
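Here, the year is everything before the first dash, whose location we just computed:
```
# find the first dash in each string, then grab everything before it
dash1 <- str_locate(test.vector, pattern=fixed('-'))[ , 'start']
str_sub(test.vector, start=1, end=dash1 - 1)
```
```
## [1] "2008" "2010" "2013" "2016"
```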
### 13\.3\.3 Replacing substrings using `str_replace()`
Suppose we didn’t like using “\-” to separate the Year/Month/Day but preferred a space, or an underscore, or something else. This can be done by replacing all of the “\-” with the desired character. The `str_replace()` function only replaces the first match, but `str_replace_all()` replaces all matches.
```
str_replace(test.vector, pattern=fixed('-'), replacement=fixed(':') )
```
```
## [1] "2008:Feb-10" "2010:Sept-18" "2013:Jan-11" "2016:Jan-2"
```
```
str_replace_all(test.vector, pattern=fixed('-'), replacement=fixed(':') )
```
```
## [1] "2008:Feb:10" "2010:Sept:18" "2013:Jan:11" "2016:Jan:2"
```
### 13\.3\.4 Splitting into substrings using `str_split()`
We can split each of the dates into three smaller substrings using the `str_split()` command, which returns a list where each element of the list is a vector containing pieces of the original string (excluding the pattern we matched on).
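For example, splitting on the (fixed) dash gives a list with one character vector per input string:
```
str_split(test.vector, pattern=fixed('-'))
```
```
## [[1]]
## [1] "2008" "Feb"  "10"
##
## [[2]]
## [1] "2010" "Sept" "18"
##
## [[3]]
## [1] "2013" "Jan"  "11"
##
## [[4]]
## [1] "2016" "Jan"  "2"
```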
If we know that all the strings will be split into a known number of substrings (we have to specify how many substrings to match with the `n=` argument), we can use `str_split_fixed()` to get a matrix of substrings instead of a list of substrings. It is somewhat unfortunate that the `_fixed` suffix in the function name is unrelated to the `fixed()` function we use to request simple pattern matching.
```
str_split_fixed(test.vector, pattern=fixed('-'), n=3)
```
```
## [,1] [,2] [,3]
## [1,] "2008" "Feb" "10"
## [2,] "2010" "Sept" "18"
## [3,] "2013" "Jan" "11"
## [4,] "2016" "Jan" "2"
```
13\.4 Regular Expressions
-------------------------
This section introduces regular expressions, which are a way of precisely writing out patterns that are very complicated. Go look at <https://xkcd.com/208/> to gain insight into just how geeky regular expressions are.
The pattern arguments in the stringr package can be given using standard regular expressions (not perl\-style!) instead of fixed strings.
Regular expressions are extremely powerful for sifting through large amounts of text. For example, we might want to extract all of the 4\-digit substrings (the years) out of our dates vector, or find all words in a paragraph of text that begin with a capital letter and are at least 5 letters long. In another, somewhat nefarious example, spammers might have downloaded a bunch of text from webpages and want to look for email addresses. So as a first pass, they want to match a pattern: \\[\\underset{\\textrm{1 or more letters}}{\\underbrace{\\texttt{Username}}}\\texttt{@}\\;\\;\\underset{\\textrm{1 or more letters}}{\\underbrace{\\texttt{OrganizationName}}}\\;\\texttt{.}\\;\\begin{cases}
\\texttt{com}\\\\
\\texttt{org}\\\\
\\texttt{edu}
\\end{cases}\\] where the `Username` and `OrganizationName` can be pretty much anything, but the overall shape is what a valid email address looks like. We might get even more creative and recognize that the list of possible endings could include country codes as well.
For most people, I don’t recommend opening the regular expression can\-of\-worms, but it is good to know that these pattern matching utilities are available within R and you don’t need to export your pattern matching problems to Perl or Python.
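To give a small taste using the `test.vector` of dates from above, extracting the 4\-digit years is a one\-liner once we know that the regular expression `[0-9]{4}` means “exactly four digits”:
```
str_extract(test.vector, pattern='[0-9]{4}')
```
```
## [1] "2008" "2010" "2013" "2016"
```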
13\.5 Exercises
---------------
1. The following file names were used in a camera trap study. The S number represents the site, P is the plot within a site, C is the camera number within the plot, the first string of numbers is the YearMonthDay and the second string of numbers is the HourMinuteSecond.
```
file.names <- c( 'S123.P2.C10_20120621_213422.jpg',
'S10.P1.C1_20120622_050148.jpg',
'S187.P2.C2_20120702_023501.jpg')
```
Use a combination of `str_sub()` and `str_split()` to produce a data frame with columns corresponding to the `site`, `plot`, `camera`, `year`, `month`, `day`, `hour`, `minute`, and `second` for these three file names. So we want to produce code that will create the data frame:
```
Site Plot Camera Year Month Day Hour Minute Second
S123 P2 C10 2012 06 21 21 34 22
S10 P1 C1 2012 06 22 05 01 48
S187 P2 C2 2012 07 02 02 35 01
```
*Hint: Convert all the underscores to periods and then split on the periods. After that you’ll have to further tear apart the date and time columns using str\_sub().*
| R Programming |
dereksonderegger.github.io | https://dereksonderegger.github.io/570L/14-dates-and-times.html |
Chapter 14 Dates and Times
==========================
```
Sys.setenv(TZ='US/Arizona') # set the time zone explicitly; avoids an error on Mac OSX
library( lubridate )
```
Dates within a computer require some special organization because there are several competing conventions for how to write a date (some of them more confusing than others) and because the sort order should be in the order that the dates occur in time.
One useful tidbit of knowledge is that computer systems store a time point as the number of seconds from a set point in time, called the epoch. So long as you always use the same epoch, you don’t have to worry about when the epoch is, but if you are switching between software systems, you might run into problems if they use different epochs. In R, the epoch is midnight on Jan 1, 1970 (UTC). In Microsoft Excel, it is Jan 0, 1900\.
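As a quick sanity check of the R epoch (a minimal sketch; the `ymd_hms()` parser is introduced below), converting a date\-time to a number gives the seconds elapsed since midnight Jan 1, 1970:
```
as.numeric( ymd_hms('1970-01-01 00:00:01', tz='UTC') ) # one second past the epoch
```
```
## [1] 1
```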
For many years, R users hated dealing with dates because it was difficult to get R to accept a string that represents a date (e.g. “June 26, 1997”): users were required to specify how the format was arranged using a relatively complex set of rules. For example, `%y` represents the two\-digit year, `%Y` represents the four\-digit year, `%m` represents the month, but `%b` represents the month written as Jan or Mar. Into this mess came Hadley Wickham (of `ggplot2` and `dplyr` fame) and his student Garrett Grolemund. The internal structure of R dates and times is quite robust, but the functions we use to manipulate them are horrible. To fix this, Dr Wickham and his then PhD student Dr Grolemund introduced the `lubridate` package.
14\.1 Creating Date and Time objects
------------------------------------
To create a `Date` object, we need to take a string or number that represents a date and tell the computer how to figure out which bits are the year, which are the month, and which are the day. The lubridate package uses the following functions:
| Common Orders | | Uncommon Orders |
| --- | --- | --- |
| `ymd()` Year Month Day | | `dym()` Day Year Month |
| `mdy()` Month Day Year | | `myd()` Month Year Day |
| `dmy()` Day Month Year | | `ydm()` Year Day Month |
The uncommon orders aren’t likely to be used, but the `lubridate` package includes them for completeness. Once the order has been specified, the `lubridate` package will try as many different ways to parse the date as make sense. As a result, so long as the order is consistent, all of the following will work:
```
mdy( 'June 26, 1997', 'Jun 26 97', '6-26-97', '6-26-1997', '6/26/97', '6-26/97' )
```
```
## [1] "1997-06-26" "1997-06-26" "1997-06-26" "1997-06-26" "1997-06-26"
## [6] "1997-06-26"
```
Unfortunately `lubridate` is inconsistent in how it recognizes the two\-digit year (as 97 or 1997). This illustrates that you should ALWAYS fully specify the year.
The lubridate functions will also accept an integer representation of the date, but it has to have enough digits to uniquely identify the month and day.
```
ymd(20090110)
```
```
## [1] "2009-01-10"
```
```
ymd(2009722) # only one digit for month --- error!
```
```
## Warning: All formats failed to parse. No formats found.
```
```
## [1] NA
```
```
ymd(2009116) # this is ambiguous! 1-16 or 11-6?
```
```
## Warning: All formats failed to parse. No formats found.
```
```
## [1] NA
```
If we want to add a time to a date, we will use a function with the suffix `_hm` or `_hms`. Suppose that we want to encode a date and time, for example, the date and time of my wedding ceremony:
```
mdy_hm('Sept 18, 2010 5:30 PM', '9-18-2010 17:30')
```
```
## [1] NA "2010-09-18 17:30:00 UTC"
```
In the above case, `lubridate` is having trouble understanding AM/PM differences and it is better to always specify times using 24 hour notation and skip the AM/PM designations.
By default, R codes the time of day as if the event occurred in the UTC time zone (also known as Greenwich Mean Time, GMT). To specify a different time zone, use the `tz=` option. For example:
```
mdy_hm('9-18-2010 17:30', tz='MST') # Mountain Standard Time
```
```
## [1] "2010-09-18 17:30:00 MST"
```
This isn’t bad, but Loveland, Colorado is on MST in the winter and MDT in the summer because of daylight saving time. So to specify a time zone that can switch between standard time and daylight saving time, I should specify `tz='US/Mountain'`:
```
mdy_hm('9-18-2010 17:30', tz='US/Mountain') # US mountain time
```
```
## [1] "2010-09-18 17:30:00 MDT"
```
As Arizonans, we recognize that Arizona is weird and doesn’t use daylight savings time. Fortunately R has a built\-in time zone just for us.
```
mdy_hm('9-18-2010 17:30', tz='US/Arizona') # US Arizona time
```
```
## [1] "2010-09-18 17:30:00 MST"
```
R recognizes 582 different time zone locales, and you can find these using the function `OlsonNames()`. To find out more about what these mean, you can check out the Wikipedia page on time zones: <http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>.
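For example, a quick way to search the recognized names for the Arizona\-related zones (a sketch; the exact matches depend on your system’s time zone database):
```
grep('Arizona', OlsonNames(), value=TRUE) # e.g. "America/Phoenix" "US/Arizona"
```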
14\.2 Extracting information
----------------------------
The `lubridate` package provides many functions for extracting information from the date. Suppose we have defined
```
# Derek's wedding!
x <- mdy_hm('9-18-2010 17:30', tz='US/Mountain') # US Mountain time
```
| Command | Output | Description |
| --- | --- | --- |
| `year(x)` | 2010 | Year |
| `month(x)` | 9 | Month |
| `day(x)` | 18 | Day |
| `hour(x)` | 17 | Hour of the day |
| `minute(x)` | 30 | Minute of the hour |
| `second(x)` | 0 | Seconds |
| `wday(x)` | 7 | Day of the week (Sunday \= 1\) |
| `mday(x)` | 18 | Day of the month |
| `yday(x)` | 261 | Day of the year |
Here we get the output as digits, where September is represented as a 9 and the day of the week is a number between 1\-7\. To get nicer labels, we can use `label=TRUE` for some commands.
| Command | Output |
| --- | --- |
| `wday(x, label=TRUE)` | Sat |
| `month(x, label=TRUE)` | Sep |
All of these functions can also be used to update the value. For example, we could move the day of the wedding from September \\(18^{th}\\) to October \\(18^{th}\\) by changing the month.
```
month(x) <- 10
x
```
```
## [1] "2010-10-18 17:30:00 MDT"
```
Often I want to consider some point in time, but need to convert from the time zone in which the date was specified to another time zone. The function `with_tz()` will take a given moment in time and figure out when that same moment is in another time zone. For example, *Game of Thrones* is made available on HBO’s streaming service at 9pm on Sunday evenings Eastern time. I need to know when I can start watching it here in Arizona.
```
GoT <- ymd_hm('2015-4-26 21:00', tz='US/Eastern')
with_tz(GoT, tz='US/Arizona')
```
```
## [1] "2015-04-26 18:00:00 MST"
```
This means that Game of Thrones is available for streaming at 6 pm Arizona time.
14\.3 Arithmetic on Dates
-------------------------
Once we have two or more Date objects defined, we can perform appropriate mathematical operations. For example, we might want to know the number of days between two dates.
```
Wedding <- ymd('2010-Sep-18')
Elise <- ymd('2013-Jan-11')
Childless <- Elise - Wedding
Childless
```
```
## Time difference of 846 days
```
Because both dates were recorded without the hours or seconds, R defaults to just reporting the difference in number of days.
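The difference is stored as a `difftime` object; to get the bare number (of days, in this case) we can strip the class:
```
as.numeric(Childless) # just the number of days
```
```
## [1] 846
```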
Often I want to add two weeks, or 3 months, or one year to a date. However, it is not completely obvious what I mean by “add 1 year”. Do we mean to increment the year number (e.g. Feb 2, 2011 \-\> Feb 2, 2012\) or do we mean to add 31,536,000 seconds? To get around this, `lubridate` includes functions of the form `units()` and `dunits()` (e.g. `years()` and `dyears()`), where the “unit” portion could be year, month, week, etc. The “d” prefix stands for duration when appropriate.
```
x <- ymd("2011-Feb-21")
x + years(2) # Just add two to the year
```
```
## [1] "2013-02-21"
```
```
x + dyears(2) # Add 2*365 days; 2012 was a leap year
```
```
## [1] "2013-02-20"
```
14\.4 Exercises
---------------
1. For the following formats for a date, transform them into a date/time object. Which formats can be handled nicely and which are not?
```
birthday <- c(
'September 13, 1978',
'Sept 13, 1978',
'Sep 13, 1978',
'9-13-78',
'9/13/78')
```
2. Suppose you have arranged for a phone call to be at 3 pm on May 8, 2015 at Arizona time. However, the recipient will be in Auckland, NZ. What time will it be there?
3. It turns out there is some interesting periodicity regarding the number of births on particular days of the year.
1. Using the `mosaicData` package, load the data set `Births78` which records the number of children born on each day in the United States in 1978\.
2. There is already a date column in the data set that is called, appropriately, date. Notice that `ggplot2` knows how to represent dates in a pretty fashion and the following chart looks nice.
```
library(mosaicData)
library(ggplot2)
ggplot(Births78, aes(x=date, y=births)) +
geom_point()
```
What stands out to you? Why do you think we have this trend?
3. To test your assumption, we need to figure out what day of the week each observation is. Use `dplyr::mutate` to add a new column named `dow` that is the day of the week (Monday, Tuesday, etc). This calculation will involve some function in the `lubridate` package.
4. Plot the data with the point color being determined by the dow variable.
| R Programming |
dereksonderegger.github.io | https://dereksonderegger.github.io/570L/15-speeding-up-r.html |
Chapter 15 Speeding up R
========================
```
library(microbenchmark) # for measuring how long stuff takes
library(doMC) # do multi-core stuff
library(foreach) # parallelizable for loops
library(ggplot2)
library(dplyr)
library(faraway) # some examples
library(boot)
library(caret)
library(glmnet)
```
If you have large enough data sets, you will eventually write R code that is slow to execute and needs to be sped up. This chapter lays out common problems and bad habits and shows how to correct them. However, the correctness and maintainability of code should take precedence over speed. Too often, misguided attempts to obtain efficient code result in an unmaintainable mess that is no faster than the initial code.
Hadley Wickham has a book aimed at advanced R users that describes many of the finer details about R. One section in the book describes his process for building fast, maintainable software projects, and if you have the time, I highly suggest reading the on\-line version, [Advanced R](http://adv-r.had.co.nz/Performance.html).
First we need some way of measuring how long our code took to run. For this we will use the package `microbenchmark`. The idea is that we want to evaluate two or three expressions that solve a problem.
```
x <- runif(1000)
microbenchmark(
sqrt(x), # First expression to compare
x^(0.5) # second expression to compare
) %>% print(digits=3)
```
```
## Unit: microseconds
## expr min lq mean median uq max neval cld
## sqrt(x) 2.59 2.69 2.92 2.77 2.89 9.33 100 a
## x^(0.5) 30.37 30.73 31.59 30.87 31.00 52.08 100 b
```
What `microbenchmark` does is run the two expressions a number of times and then produce a summary of those times (min, quartiles, mean, median, and max). By running each expression multiple times, we account for the randomness associated with an operating system that is also running other processes at the same time.
15\.1 Faster for loops?
-----------------------
Often we need to perform some simple action repeatedly. It is natural to write a `for` loop to do the action, and we wish to speed it up. In this first case, we will consider having to do the action millions of times, where each chunk of computation within the `for` loop takes very little time.
Consider a data frame of 4 columns, and for each of \\(n\\) rows, we wish to know which column holds the largest value.
```
make.data <- function(n){
data <- cbind(
rnorm(n, mean=5, sd=2),
rpois(n, lambda = 5),
rgamma(n, shape = 2, scale = 3),
rexp(n, rate = 1/5))
data <- data.frame(data)
return(data)
}
data <- make.data(100)
```
The way that you might first think about solving this problem is to write a for loop and, for each row, figure it out.
```
f1 <- function( input ){
output <- NULL
for( i in 1:nrow(input) ){
output[i] <- which.max( input[i,] )
}
return(output)
}
```
There are two ways to return a value from a function: using the `return` function, or simply letting the last expression of the function print. In fact, I’ve always heard that using the `return` statement is a touch slower.
```
f2.noReturn <- function( input ){
output <- NULL
for( i in 1:nrow(input) ){
output[i] <- which.max( input[i,] )
}
output
}
```
```
data <- make.data(100)
microbenchmark(
f1(data),
f2.noReturn(data)
) %>% print(digits=3)
```
```
## Unit: milliseconds
## expr min lq mean median uq max neval cld
## f1(data) 3.3 3.43 3.70 3.56 3.71 8.23 100 a
## f2.noReturn(data) 3.3 3.41 3.84 3.54 3.84 9.20 100 a
```
In fact, it looks like it is a touch slower, but not massively so compared to the run\-to\-run variability. I prefer to use the `return` statement for readability, but if we agree to have the last line of code in the function be whatever needs to be returned, readability isn’t strongly affected.
We next consider whether it would be faster to allocate the output vector ahead of time once we figure out the number of rows needed, or to just build it on the fly.
```
f3.AllocOutput <- function( input ){
n <- nrow(input)
output <- rep(NULL, n) # NB: rep(NULL, n) returns NULL, so this doesn't actually pre-allocate!
for( i in 1:nrow(input) ){
output[i] <- which.max( input[i,] )
}
return(output)
}
```
```
microbenchmark(
f1(data),
f3.AllocOutput(data)
) %>% print(digits=3)
```
```
## Unit: milliseconds
## expr min lq mean median uq max neval cld
## f1(data) 3.32 3.54 3.97 3.80 4.14 8.15 100 a
## f3.AllocOutput(data) 3.29 3.50 4.00 3.77 4.09 9.07 100 a
```
If anything, “allocating” the output first was slower; note that `rep(NULL, n)` actually returns `NULL`, so no real pre\-allocation happened here anyway. Given this, we shouldn’t feel too bad being lazy and using `output <- NULL` to initialize things.
15\.2 Vectorizing loops
-----------------------
In general, `for` loops in R are very slow and we want to avoid them as much as possible. The `apply` family of functions can be quite helpful for applying a function to each row or column of a matrix or data.frame or to each element of a list.
To test this, instead of a `for` loop, we will use `apply`.
```
f4.apply <- function( input ){
output <- apply(input, 1, which.max)
return(output)
}
```
```
microbenchmark(
f1(data),
f4.apply(data)
) %>% print(digits=3)
```
```
## Unit: microseconds
## expr min lq mean median uq max neval cld
## f1(data) 3291 3538 3922 3742 4086 6578 100 b
## f4.apply(data) 264 305 375 337 385 2272 100 a
```
This is the type of speed up that matters. We have a 10\-fold speed up in execution time and particularly the maximum time has dropped impressively.
Unfortunately, I have always found the `apply` functions a little cumbersome and I prefer to use `dplyr` instead strictly for readability.
```
f5.dplyr <- function( input ){
# Caution: without rowwise(), which.max() sees the four columns stacked into
# one long vector and returns a single (recycled) index, so this does not
# reproduce the row-wise answer from f4.apply(); we time it as written.
output <- input %>%
mutate( max.col=which.max( c(X1, X2, X3, X4) ) )
return(output$max.col)
}
```
```
microbenchmark(
f4.apply(data),
f5.dplyr(data)
) %>% print(digits=3)
```
```
## Unit: microseconds
## expr min lq mean median uq max neval cld
## f4.apply(data) 265 291 328 316 350 575 100 a
## f5.dplyr(data) 1772 1865 2142 2021 2200 5981 100 b
```
Unfortunately `dplyr` is a lot slower than `apply` in this case. I wonder if the dynamics would change with a larger `n`?
```
data <- make.data(10000)
microbenchmark(
f4.apply(data),
f5.dplyr(data)
) %>% print(digits=3)
```
```
## Unit: milliseconds
## expr min lq mean median uq max neval cld
## f4.apply(data) 23.3 26.20 33.2 28.96 33.29 238.9 100 b
## f5.dplyr(data) 2.2 2.43 3.0 2.67 3.02 19.6 100 a
```
```
data <- make.data(100000)
microbenchmark(
f4.apply(data),
f5.dplyr(data)
) %>% print(digits=3)
```
```
## Unit: milliseconds
## expr min lq mean median uq max neval cld
## f4.apply(data) 288.1 374.29 493.32 463.37 579.22 1067 100 b
## f5.dplyr(data) 3.4 4.52 8.54 4.97 6.23 170 100 a
```
What just happened? The package `dplyr` is designed to work well for large data sets; it utilizes a modified structure called a `tibble`, which provides massive benefits for large tables. At the small scale, however, the overhead of converting the `data.frame` to a `tibble` overwhelms any speed up. Because the small sample case is already fast enough to be unnoticeable, we don’t really care about the small `n` case.
15\.3 Parallel Processing
-------------------------
Most modern computers have multiple computing cores and can run multiple processes at the same time. Sometimes this means that you can run multiple programs and switch back and forth easily without lag, but here we are interested in using as many cores as possible to get our statistical calculations completed sooner. This is referred to as running the process “in parallel”, and there are many tasks in modern statistical computing that are “embarrassingly easy” to parallelize. In particular, bootstrapping and cross\-validation techniques are extremely easy to implement in a parallel fashion.
However, running commands in parallel incurs some overhead cost in set\-up computation, as well as in all the message passing from core to core. For example, to have 5 cores all perform an analysis on a set of data, all 5 cores must have access to the data, and not overwrite any of it. So parallelizing code only makes sense if the individual steps that we pass to each core are of sufficient size that the overhead incurred is substantially less than the time to run the job.
We should think of executing code in parallel as having three major steps (a minimal sketch follows this list):
1. Tell R that there are multiple computing cores available and set up a useable cluster to which we can pass jobs.
2. Decide what ‘computational chunk’ should be sent to each core and distribute all necessary data, libraries, etc. to each core.
3. Combine the results of each core back into a unified object.
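A minimal sketch of the three steps using the `doMC` and `foreach` packages loaded above (the computation here is a toy placeholder):
```
doMC::registerDoMC(cores = 2) # Step 1: set up a two-core cluster
# Step 2: each pass of the loop is a chunk of work handed to a core;
# Step 3: .combine=c glues the per-core results back into one vector.
results <- foreach(i = 1:4, .combine = c) %dopar% {
i^2
}
results
```
```
## [1]  1  4  9 16
```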
15\.4 Parallelizing for loops
-----------------------------
There are a number of packages that allow you to tell R how many cores you have access to. One of the easiest ways to parallelize a for loop is using a package called `foreach`. The registration of multiple cores is actually pretty easy.
```
doMC::registerDoMC(cores = 2) # my laptop only has two cores.
```
We will consider an example that is common in modern statistics: bootstrapping, where we create resampled data sets to calculate confidence intervals for regression coefficients.
```
ggplot(trees, aes(x=Girth, y=Volume)) + geom_point() + geom_smooth(method='lm')
```
```
model <- lm( Volume ~ Girth, data=trees)
```
This is how we would do this previously.
```
# f is a formula
# df is the input data frame
# M is the number of bootstrap iterations
boot.for <- function( f, df, M=999){
output <- list()
for( i in 1:100 ){ # NB: the loop ignores the M argument and hard-codes 100 iterations
# Do stuff
model.star <- lm( f, data=df %>% sample_frac(1, replace=TRUE) )
output[[i]] <- model.star$coefficients
}
# use rbind to put the list of results together into a data.frame
output <- sapply(output, rbind) %>% t() %>% data.frame()
return(output)
}
```
We will first ask about how to do the same thing using the function `foreach`
```
# f is a formula
# df is the input data frame
# M is the number of bootstrap iterations
boot.foreach <- function(f, df, M=999){
output <- foreach( i=1:100 ) %dopar% { # NB: again hard-coded to 100, ignoring M
# Do stuff
model.star <- lm( f, data=df %>% sample_frac(1, replace=TRUE) )
model.star$coefficients
}
# use rbind to put the list of results together into a data.frame
output <- sapply(output, rbind) %>% t() %>% data.frame()
return(output)
}
```
Not much has changed in our code. Let’s see which is faster.
```
microbenchmark(
boot.for( Volume~Girth, trees ),
boot.foreach( Volume~Girth, trees )
) %>% print(digits=3)
```
```
## Unit: milliseconds
## expr min lq mean median uq max neval
## boot.for(Volume ~ Girth, trees) 233 253 319 337 360 596 100
## boot.foreach(Volume ~ Girth, trees) 358 413 450 430 453 1230 100
## cld
## a
## b
```
In this case, the overhead associated with splitting the job across two cores, copying the data over, and then combining the results back together was more than we saved by using both cores. If the nugget of computation within each pass of the `for` loop was larger, then it would pay to use both cores.
```
# massiveTrees has 31000 observations
massiveTrees <- NULL
for( i in 1:1000 ){
massiveTrees <- rbind(massiveTrees, trees)
}
microbenchmark(
boot.for( Volume~Girth, massiveTrees ) ,
boot.foreach( Volume~Girth, massiveTrees )
) %>% print(digits=3)
```
```
## Unit: seconds
## expr min lq mean median uq
## boot.for(Volume ~ Girth, massiveTrees) 3.04 3.61 3.82 3.72 3.85
## boot.foreach(Volume ~ Girth, massiveTrees) 1.65 1.98 2.32 2.37 2.54
## max neval cld
## 9.55 100 b
## 3.50 100 a
```
Because we often generate a bunch of results that we want to see as a data.frame, the `foreach` function includes an option to do it for us.
```
output <- foreach( i=1:100, .combine=data.frame ) %dopar% {
# Do stuff
model.star <- lm( Volume ~ Girth, data= trees %>% sample_frac(1, replace=TRUE) )
model.star$coefficients
}
```
It is important to recognize that the data.frame `trees` was utilized inside the `foreach` loop. When we called the `foreach` loop and distributed the workload across the cores, it was smart enough to distribute the data to each core. However, if functions used inside the for loop come from a package, we need to tell each core to load the package.
```
output <- foreach( i=1:1000, .combine=data.frame, .packages='dplyr' ) %dopar% {
# Do stuff
model.star <- lm( Volume ~ Girth, data= trees %>% sample_frac(1, replace=TRUE) )
model.star$coefficients
}
```
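If we would rather have one row per bootstrap replicate, `.combine=rbind` stacks each iteration’s coefficient vector into a matrix, which removes the need for the `sapply()` step used earlier:
```
output <- foreach( i=1:1000, .combine=rbind, .packages='dplyr' ) %dopar% {
  model.star <- lm( Volume ~ Girth, data= trees %>% sample_frac(1, replace=TRUE) )
  model.star$coefficients
}
output <- data.frame(output)  # one row per bootstrap replicate
```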
15\.5 Parallel Aware Functions
------------------------------
There are many packages that address problems that are “embarrassingly parallel” and they will happily work with multiple cores. Methods that rely on resampling certainly fit into this category.
### 15\.5\.1 `boot::boot`
Bootstrapping relies on resampling the dataset and calculating test statistics from each resample. In R, the most common way to do this is using the package `boot` and we just need to tell the `boot` function to use the multiple cores available. (Note, we have to have registered the cores first!)
```
model <- lm( Volume ~ Girth, data=trees)
my.fun <- function(df, index){
model.star <- lm( Volume ~ Girth, data= df[index,] )
model.star$coefficients
}
microbenchmark(
serial = boot::boot( trees, my.fun, R=1000 ),
parallel = boot::boot( trees, my.fun, R=1000,
parallel='multicore', ncpus=2 )
) %>% print(digits=3)
```
```
## Unit: milliseconds
## expr min lq mean median uq max neval cld
## serial 679 737 848 839 907 1658 100 b
## parallel 685 721 787 751 801 1258 100 a
```
In this case, we had a bit of a speed up, but not a factor of 2\. This is due to the overhead of splitting the job across both cores.
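Once the `boot` object is created, confidence intervals can be pulled out with `boot::boot.ci()`. A minimal sketch, where the `index` argument picks which coefficient to summarize:
```
boot.model <- boot::boot( trees, my.fun, R=1000, parallel='multicore', ncpus=2 )
# percentile bootstrap CI for the slope (the second coefficient)
boot::boot.ci( boot.model, type='perc', index=2 )
```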
### 15\.5\.2 `caret::train`
The statistical learning package `caret` also handles all the work to do cross validation in a parallel computing environment. The functions in `caret` have an option `allowParallel`, which defaults to TRUE and controls whether we should use all the cores. Assuming we have already registered the number of cores, then by default `caret` will use them all.
```
library(faraway)
library(caret)
ctrl.serial <- trainControl( method='repeatedcv', number=5, repeats=4,
preProcOptions = c('center','scale'),
allowParallel = FALSE)
ctrl.parallel <- trainControl( method='repeatedcv', number=5, repeats=4,
preProcOptions = c('center','scale'),
allowParallel = TRUE)
grid <- data.frame(
alpha = 1, # 1 => Lasso Regression
lambda = exp(seq(-6, 1, length=50)))
microbenchmark(
model <- train( lpsa ~ ., data=prostate, method='glmnet',
trControl=ctrl.serial, tuneGrid=grid,
lambda = grid$lambda ),
model <- train( lpsa ~ ., data=prostate, method='glmnet',
trControl=ctrl.parallel, tuneGrid=grid,
lambda = grid$lambda )
) %>% print(digits=3)
```
```
## Unit: seconds
## expr
## model <- train(lpsa ~ ., data = prostate, method = "glmnet", trControl = ctrl.serial, tuneGrid = grid, lambda = grid$lambda)
## model <- train(lpsa ~ ., data = prostate, method = "glmnet", trControl = ctrl.parallel, tuneGrid = grid, lambda = grid$lambda)
## min lq mean median uq max neval cld
## 1.17 1.20 1.23 1.21 1.23 1.49 100 a
## 1.33 1.35 1.39 1.36 1.40 2.03 100 b
```
This time the parallel version was actually a touch slower, though it didn’t really cost us much. Because the `caret` package allows parallel processing by default, it doesn’t hurt to just load the `doMC` package and register the number of cores. Even in just the two core case, it is a good habit to get into so that when you port your code to a huge computer with many cores, the only thing to change is how many cores you have access to.
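To make that switch painless, we can ask R how many cores the machine has rather than hard\-coding the number; leaving one core free for the operating system is a common courtesy:
```
n.cores <- parallel::detectCores() - 1  # leave one core for the OS
doMC::registerDoMC( cores = n.cores )
```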
| R Programming |
dereksonderegger.github.io | https://dereksonderegger.github.io/570L/16-rmarkdown-tricks.html |
Chapter 16 Rmarkdown Tricks
===========================
We have been using RMarkdown files to combine the analysis and discussion into one nice document that contains all the analysis steps so that your research is reproducible.
There are many resources on the web about Markdown and the variant that RStudio uses (called RMarkdown), but the easiest reference is to just use the RStudio help tab to access the help. I particularly like `Help -> Cheatsheets -> RMarkdown Reference Guide` because it gives me the standard Markdown information but also a bunch of information about the options I can use to customize the behavior of individual R code chunks.
Two topics that aren’t covered in the RStudio help files are how to insert mathematical text symbols and how to produce decent looking tables without too much fuss.
Most of what is presented here isn’t primarily about how to use R, but rather how to work with tools in RMarkdown so that the final product is neat and tidy. While you could print out your RMarkdown file and then clean it up in MS Word, sometimes there is good reason to want as nice a starting point as possible.
16\.1 Mathematical expressions
------------------------------
The primary way to insert a mathematical expression is to use a markup language called LaTeX. This is a very powerful system and it is what most Mathematicians use to write their documents. The downside is that there is a lot to learn. However, you can get most of what you need pretty easily.
For RMarkdown to recognize you are writing math using LaTeX, you need to enclose the LaTeX with dollar signs ($). Some examples of common LaTeX patterns are given below:
| Goal | LaTeX | Output | LaTeX | Output |
| --- | --- | --- | --- | --- |
| power | `$x^2$` | \\(x^2\\) | `$y^{0.95}$` | \\(y^{0\.95}\\) |
| Subscript | `$x_i$` | \\(x\_i\\) | `$t_{24}$` | \\(t\_{24}\\) |
| Greek | `$\alpha$ $\beta$` | \\(\\alpha\\) \\(\\beta\\) | `$\theta$ $\Theta$` | \\(\\theta\\) \\(\\Theta\\) |
| Bar | `$\bar{x}$` | \\(\\bar{x}\\) | `$\bar{\mu}_i$` | \\(\\bar{\\mu}\_i\\) |
| Hat | `$\hat{\mu}$` | \\(\\hat{\\mu}\\) | `$\hat{y}_i$` | \\(\\hat{y}\_i\\) |
| Star | `$y^*$` | \\(y^\*\\) | `$\hat{\mu}^*_i$` | \\(\\hat{\\mu}^\*\_i\\) |
| Centered Dot | `$\cdot$` | \\(\\cdot\\) | `$\bar{y}_{i\cdot}$` | \\(\\bar{y}\_{i\\cdot}\\) |
| Sum | `$\sum x_i$` | \\(\\sum x\_i\\) | `$\sum_{i=0}^N x_i$` | \\(\\sum\_{i\=0}^N x\_i\\) |
| Square Root | `$\sqrt{a}$` | \\(\\sqrt{a}\\) | `$\sqrt{a^2 + b^2}$` | \\(\\sqrt{a^2 \+ b^2}\\) |
| Fractions | `$\frac{a}{b}$` | \\(\\frac{a}{b}\\) | `$\frac{x_i - \bar{x}}{s/\sqrt{n}}$` | \\(\\frac{x\_i \- \\bar{x}}{s/\\sqrt{n}}\\) |
Within your RMarkdown document, you can include LaTeX code by enclosing it with dollar signs. So you might write `$\alpha=0.05$` in your text, but after it is knitted to a pdf, html, or Word, you’ll see \\(\\alpha\=0\.05\\). If you want your mathematical equation to be on its own line, all by itself, enclose it with double dollar signs. So
`$$z_i = \frac{x_i-\bar{x}}{\sigma / \sqrt{n}}$$`
would be displayed as
\\\[ z\_{i}\=\\frac{x\_{i}\-\\bar{x}}{\\sigma/\\sqrt{n}} \\]
Unfortunately RMarkdown is a little picky about spaces near the $ and $$ signs and you can’t have any spaces between them and the LaTeX command. For more information about all the different symbols you can use, google ‘LaTeX math symbols’.
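For example, to typeset a simple regression model on its own line, we could write
```
$$ y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$
```
and the Greek letters and subscripts will be rendered as a centered, stand\-alone equation.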
16\.2 Tables
------------
For the following descriptions of the simple, grid, and pipe tables, I’ve shamelessly stolen from the Pandoc documentation. \[[http://pandoc.org/README.html\#tables](http://pandoc.org/README.html#tables)]
One way to print a table is to just print it in R and have the table presented in the code chunk. For example, suppose I want to print out the first 4 rows of the trees dataset.
```
data <- trees[1:4, ]
data
```
```
## Girth Height Volume
## 1 8.3 70 10.3
## 2 8.6 65 10.3
## 3 8.8 63 10.2
## 4 10.5 72 16.4
```
Usually this is sufficient, but suppose you want something a bit nicer because you are generating tables regularly and you don’t want to have to clean them up by hand. Tables in RMarkdown follow the standard Markdown table conventions with a few minor exceptions. Markdown provides 4 ways to define a table and RMarkdown supports 3 of those.
### 16\.2\.1 Simple Tables
Simple tables look like this (notice I don’t wrap these in dollar signs or anything; there is just a blank line above and below the table):
```
Right Left Center Default
------- ------ ---------- -------
12 12 hmmm 12
123 123 123 123
1 1 1 1
```
and would be rendered like this:
| Right | Left | Center | Default |
| --- | --- | --- | --- |
| 12 | 12 | hmmm | 12 |
| 123 | 123 | 123 | 123 |
| 1 | 1 | 1 | 1 |
The headers and table rows must each fit on one line. Column alignments are determined by the position of the header text relative to the dashed line below it.
If the dashed line is flush with the header text on the right side but extends beyond it on the left, the column is right\-aligned. If the dashed line is flush with the header text on the left side but extends beyond it on the right, the column is left\-aligned. If the dashed line extends beyond the header text on both sides, the column is centered. If the dashed line is flush with the header text on both sides, the default alignment is used (in most cases, this will be left). The table must end with a blank line, or a line of dashes followed by a blank line.
### 16\.2\.2 Grid Tables
Grid tables are a little more flexible and each cell can take an arbitrary Markdown block elements (such as lists).
```
+---------------+---------------+--------------------+
| Fruit | Price | Advantages |
+===============+===============+====================+
| Bananas | $1.34 | - built-in wrapper |
| | | - bright color |
+---------------+---------------+--------------------+
| Oranges | $2.10 | - cures scurvy |
| | | - tasty |
+---------------+---------------+--------------------+
```
which is rendered as the following:
| Fruit | Price | Advantages |
| --- | --- | --- |
| Bananas | $1\.34 | * built\-in wrapper * bright color |
| Oranges | $2\.10 | * cures scurvy * tasty |
Grid tables don’t support left/center/right alignment. Both simple tables and grid tables require you to format the blocks nicely inside the RMarkdown file, and that can be a bit annoying if something changes and you have to fix the spacing in the rest of the table. Neither simple nor grid tables require column headers.
### 16\.2\.3 Pipe Tables
Pipe tables look quite similar to grid tables but Markdown isn’t as picky about the pipes lining up. However, it does require a header row (though you could leave its elements blank).
```
| Right | Left | Default | Center |
|------:|:-----|---------|:------:|
| 12 | 12 | 12 | 12 |
| 123 | 123 | 123 | 123 |
| 1 | 1 | 1 | 1 |
```
which will render as the following:
| Right | Left | Default | Center |
| --- | --- | --- | --- |
| 12 | 12 | 12 | 12 |
| 123 | 123 | 123 | 123 |
| 1 | 1 | 1 | 1 |
In general I prefer to use pipe tables because they seem a little less picky about getting everything lined up exactly. However it is still pretty annoying to lay the table out correctly.
In all of these tables, you can use the regular RMarkdown formatting tricks for italicizing and bolding. So I could have a table such as the following:
```
| Source | df | Sum of Sq | Mean Sq | F | $Pr(>F_{1,29})$ |
|:------------|-----:|--------------:|--------------:|-------:|--------------------:|
| Girth | *1* | 7581.8 | 7581.8 | 419.26 | **< 2.2e-16** |
| Residual | 29 | 524.3 | 18.1 | | |
```
and have it look like this:
| Source | df | Sum of Sq | Mean Sq | F | \\(Pr(\>F\_{1,29})\\) |
| --- | --- | --- | --- | --- | --- |
| Girth | *1* | 7581\.8 | 7581\.8 | 419\.26 | **\< 2\.2e\-16** |
| Residual | 29 | 524\.3 | 18\.1 | | |
The problem with all of this is that I don’t want to create these by hand. Instead I would like functions that take a data frame or matrix and spit out the RMarkdown code for the table.
16\.3 R functions to produce table code.
----------------------------------------
There are a couple of different packages that convert a data frame to simple/grid/pipe table. We will explore a couple of these, starting with the most basic and moving to the more complicated. The general idea is that we’ll produce the appropriate simple/grid/pipe table syntax in R, and when it gets knitted, then RMarkdown will turn our simple/grid/pipe table into something pretty.
### 16\.3\.1 `knitr::kable`
The `knitr` package includes a function that produces simple tables. It doesn’t have much customizability, but it gets the job done.
```
knitr::kable( data )
```
| Girth | Height | Volume |
| --- | --- | --- |
| 8\.3 | 70 | 10\.3 |
| 8\.6 | 65 | 10\.3 |
| 8\.8 | 63 | 10\.2 |
| 10\.5 | 72 | 16\.4 |
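While `kable()` is simple, it does accept a few arguments for light customization, such as `digits` and `caption`:
```
knitr::kable( data, digits=1, caption='First four rows of the trees dataset' )
```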
### 16\.3\.2 Package `pander`
The package `pander` seems to be a nice compromise between customization and not having to learn too much. It is relatively powerful in that it will take `summary()` and `anova()` output and produce tables for them. By default `pander` will produce simple tables, but you can ask for Grid or Pipe tables.
```
library(pander)
pander( data, style='rmarkdown' ) # style is pipe tables...
```
| Girth | Height | Volume |
| --- | --- | --- |
| 8\.3 | 70 | 10\.3 |
| 8\.6 | 65 | 10\.3 |
| 8\.8 | 63 | 10\.2 |
| 10\.5 | 72 | 16\.4 |
The `pander` package deals with summary and anova tables from a variety of different analyses. So you can simply ask for a nice looking version using the following:
```
model <- lm( Volume ~ Girth, data=trees ) # a simple regression
pander( summary(model) ) # my usual summary table
pander( anova( model ) ) # my usual anova table
```
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| **(Intercept)** | \-36\.94 | 3\.365 | \-10\.98 | 7\.621e\-12 |
| **Girth** | 5\.066 | 0\.2474 | 20\.48 | 8\.644e\-19 |
Fitting linear model: Volume \~ Girth
| Observations | Residual Std. Error | \\(R^2\\) | Adjusted \\(R^2\\) |
| --- | --- | --- | --- |
| 31 | 4\.252 | 0\.9353 | 0\.9331 |
Analysis of Variance Table
| | Df | Sum Sq | Mean Sq | F value | Pr(\>F) |
| --- | --- | --- | --- | --- | --- |
| **Girth** | 1 | 7582 | 7582 | 419\.4 | 8\.644e\-19 |
| **Residuals** | 29 | 524\.3 | 18\.08 | NA | NA |
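If the defaults aren’t quite right, `pander` also exposes global settings via `panderOptions()`. A short sketch, assuming the default option names:
```
library(pander)
panderOptions('round', 2)                # round numeric cells to 2 decimal places
panderOptions('table.split.table', Inf)  # never split wide tables across lines
pander( anova(model) )
```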
| R Programming |
csgillespie.github.io | https://csgillespie.github.io/efficientR/introduction.html |
1 Introduction
==============
This chapter introduces the book, describing the wide range of people it was written for, in terms of R and programming experience, and how you can get the most out of it. Anyone setting out to improve efficiency should have an understanding of precisely what they mean by the term, and this is discussed, with reference to *algorithmic* and *programmer* efficiency in Section [1\.2](introduction.html#what-is-efficiency), and with reference to R in particular in [1\.3](introduction.html#what-is-efficient-r-programming). It may seem obvious, but it’s also worth thinking about *why* anyone would bother with efficient code now that powerful computers are cheap and accessible. This is covered in Section [1\.4](introduction.html#why-efficiency).
This book happily is not completely R\-specific. Non R programming skills that are needed for efficient R programming, which you will develop during the course of following this book, are covered in Section [1\.5](introduction.html#cross-transferable-skills-for-efficiency). Unusually for a book about programming, this section introduces touch typing and consistency: cross\-transferable skills that should improve your efficiency beyond programming. However, this is first and foremost a book about programming and it wouldn’t be so without code examples in every chapter. Despite being more conceptual and discursive, this opening chapter is no exception: its penultimate section ([1\.6](introduction.html#benchmarking-and-profiling)) describes these two essential tools in the efficient R programmer’s toolbox, and how to use them with a couple of illustrative examples. The final thing to say at the outset is how to use this book in conjunction with the book’s associated package and its source code. This is covered in Section [1\.7](introduction.html#book-resources).
### Prerequisites
As emphasised in the next section, it’s useful to run code and experiment as you read. This *Prerequisites* section ensures you have the necessary packages for each chapter. The prerequisites for this chapter are:
* A working installation of R on your computer (see Section [2\.5\.1](set-up.html#install-rstudio)).
* Install and load the **microbenchmark**, **profvis** and **ggplot2** packages (see Section [2\.3\.3](set-up.html#installing-r-packages) for tips on installing packages and keeping them up\-to\-date). You can ensure these packages are installed by loading them as follows:
```
library("microbenchmark")
library("profvis")
library("ggplot2")
```
The prerequisites needed to run the code contained in the entire book are covered in [1\.7](introduction.html#book-resources) at the end of this chapter.
1\.1 Who this book is for and how to use it
-------------------------------------------
This book is for anyone who wants to make their R code faster to type, faster to run and more scalable. These considerations generally come *after* learning the very basics of R for data analysis: we assume you are either accustomed to R or proficient at programming in other languages, although this book could still be of use to beginners. Thus the book should be useful to people with a range of skill levels, who can broadly be divided into three groups:
* For **programmers with little experience with R** this book will help you navigate the quirks of R to make it work efficiently: it is easy to write slow R code if you treat it as if it were another language.
* For **R users with little experience of programming** this book will show you many concepts and ‘tricks of the trade’, some of which are borrowed from Computer Science, that will make your work more time effective.
* For **R beginners with little experience of programming** this book can help steer you towards getting things right (or at least less wrong) at the outset. Bad habits are easy to gain but hard to lose. Reading this book at the outset of your programming career could save the future you many hours searching the web for issues covered in this book.
Identifying which group you best fit into, and how this book is most likely to help you, will help you get the most out of it.
For everyone, we recommend reading *Efficient R Programming* while you have an active R project on the go, whether it’s a collaborative task at work or simply a personal interest project at home. Why? The scope of this book is wider than that of most programming textbooks (Chapter 4 covers project management) and working on a project outside the confines of the book will help put the concepts, recommendations and code into practice. Going directly from words into action in this way will help ensure that the information is consolidated: learn by doing.
If you’re an R novice and fit into the final category, we recommend that this ‘active R project’ is not an important deliverable, but another R resource. While this book is generic, it is likely that your usage of R will be largely domain\-specific. For this reason we recommend reading it alongside teaching material in your chosen area. Furthermore, we advocate that all readers use this book alongside other R resources such as the numerous vignettes, tutorials and online articles that the R community has produced (described in the *tip* below). At a bare minimum you should be familiar with data frames, looping and simple plots.
There are many places to find generic and domain specific R teaching materials. For complete R and programming beginners, there are a number of introductory resources, such as the excellent [Student’s Guide to R](https://github.com/ProjectMOSAIC/LittleBooks/tree/master/StudentGuide) and the more technical [IcebreakeR](https://cran.r-project.org/other-docs.html) tutorial.
R also comes pre\-installed with guidance, revealed by entering `help.start()` into the R console, including the classic official guide *An Introduction to R* which is excellent but daunting to many. Entering `vignette()` will display a list of guides packaged *within your R installation* (and hence free from the need of an internet connection). To see the vignette for a specific topic, just enter the vignette’s name into the same command, e.g. `vignette(package = "dplyr", "dplyr")` to see the introductory vignette for the **dplyr** package.
Another early port of call should be the CRAN website. The [Contributed Documentation](https://cran.r-project.org/other-docs.html) page contains a list of contributed resources, mainly tutorials, on subjects ranging from [map making](https://github.com/Robinlovelace/Creating-maps-in-R) to [Econometrics](https://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf). The new [bookdown website](https://bookdown.org/) contains a list of complete (or near complete) books, which cover domains including [*R for Data Science*](http://r4ds.had.co.nz/) and [Authoring Books with R Markdown](https://bookdown.org/yihui/bookdown/). We recommend keeping your eye on the ‘R\-o\-sphere’, e.g. via the [R\-Bloggers](http://r-bloggers.com/) website, popular Twitter feeds and [CRAN\-affiliated email lists](https://www.r-project.org/mail.html) for up\-to\-date materials that can be used in conjunction with this book.
1\.2 What is efficiency?
------------------------
In everyday life efficiency roughly means ‘working well’. An efficient vehicle goes far without guzzling gas. An efficient worker gets the job done fast without stress. And an efficient light shines bright with a minimum of energy consumption. In this final sense, efficiency (\\(\\eta\\)) has a formal definition as the ratio of work done (\\(W\\), e.g. light output) per unit effort (\\(Q\\), energy consumption in this case):
\\\[
\\eta \= \\frac{W}{Q}
\\]
How does this translate into programming? Efficient code can be defined narrowly or broadly. The first, narrower definition is *algorithmic efficiency*: how quickly the *computer* can undertake a piece of work given a particular piece of code. This concept dates back to the very origins of computing, as illustrated by the following quote from Ada Lovelace in her notes on the work of Charles Babbage, one of the pioneers of early computing (Lovelace [1842](#ref-lovelace1842translator)):
> In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.
The second, broader definition of efficient computing is *programmer productivity*. This is the amount of *useful* work a *person* (not a computer) can do per unit time. It may be possible to rewrite your code base in C to make it \\(100\\) times faster. But if this takes \\(100\\) human hours it may not be worth it. Computers can chug away day and night. People cannot. Human productivity is the subject of Chapter [4](workflow.html#workflow).
By the end of this book you should know how to write code that is efficient from both *algorithmic* and *productivity* perspectives. Efficient code is also concise, elegant and easy to maintain, vital when working on large projects. But this raises the wider question: what is different about efficient R code compared with efficient code in any other language?
1\.3 What is efficient R programming?
-------------------------------------
The issue flagged by Ada of having a ‘great variety’ of ways to solve a problem is key to understanding how efficient R programming differs from efficient programming in other languages. R is notorious for allowing users to solve problems in many ways. This is due to R’s inherent flexibility, in which almost “anything can be modified after it is created” (H. Wickham [2014](#ref-Wickham2014)[a](#ref-Wickham2014)). R’s inventors, Ross Ihaka and Robert Gentleman, designed it to be this way: a cell in a data frame can be selected in multiple ways in *base R* alone (three of which are illustrated later in this Chapter, in Section [1\.6\.2](introduction.html#benchmarking-example)). This is useful, allowing programmers to use the language as best suits their needs, but can be confusing for people looking for the ‘right’ way of doing things and can cause inefficiencies if you don’t understand the language well.
R’s notoriety for being able to solve a problem in multiple different ways has grown with the proliferation of community contributed packages. In this book we focus on the *best* way of solving problems, from an efficiency perspective. Often it is instructive to discover *why* a certain way of doing things is faster than others. However, if your aim is simply to *get stuff done*, you only need to know what is likely to be the most efficient way. In this way R’s flexibility can be inefficient: although it is likely easier to find *a* way of solving any given problem in R than other languages, solving the problem with R may make it harder to find *the best* way to solve that problem, as there are so many. This book tackles this issue head\-on by recommending what we believe are the most efficient approaches. We hope you trust our views, based on years of using and teaching R, but we also hope that you challenge them at times and test them, with benchmarks, if you suspect there’s a better way of doing things (thanks to R’s flexibility and ability to interface with other languages there may well be).
It is well known that R code can lack *algorithmic efficiency* compared with low\-level languages for certain tasks, especially if the code was written by someone who doesn’t fully understand the language. But it is worth highlighting the numerous ways that R *encourages* and *guides* efficiency, especially programmer efficiency:
* R is not compiled but it calls compiled code. This means that you get the best of both worlds: R thankfully removes the laborious stage of compiling your code before being able to run it, but provides impressive speed gains by calling compiled C, Fortran and other languages behind the scenes.
* R is a functional and object orientated language (H. Wickham [2014](#ref-Wickham2014)[a](#ref-Wickham2014)). This means that it is possible to write complex and flexible functions in R that get a huge amount of work done with a single line of code.
* R uses RAM for memory. This may seem obvious but it’s worth saying: RAM is much faster than any hard disk system. Compared with databases, R is therefore very fast at common data manipulation, processing and modelling operations. RAM is now cheaper than ever, meaning the potential downside of this feature is further away than ever.
* R is supported by excellent Integrated Development Environments (IDEs). The environment in which you program can have a huge impact on *programmer efficiency* as it can provide supporting help quickly, allow for interactive plotting, and allow your R projects to be tightly integrated with other aspects of your project such as file management, version management and interactive visualisation systems, as discussed in [2\.5](set-up.html#rstudio).
* R has a strong user community. This boosts efficiency because if you encounter a problem that has not yet been solved, you can simply ask the community. If it is a new, clearly stated and reproducible question asked on a popular forum such as [StackOverflow](http://stackoverflow.com/questions/tagged/r) or an appropriate [R list](https://www.r-project.org/mail.html), you are likely to get a response from an accomplished R programmer within minutes. The obvious benefit of this crowd\-sourced support system is that the efficiency benefits of the answer will from that moment be available to everyone.
Efficient R programming is the implementation of efficient programming practices in R. All languages are different, so efficient R code does not look like efficient code in another language. Many packages have been optimised for performance so, for some operations, achieving maximum computational efficiency may simply be a case of selecting the appropriate package and using it correctly. There are many ways to get the same result in R, and some are very slow. Therefore *not* writing slow code should be prioritised over writing fast code.
Returning to the analogy of the two cars sketched in the preface, efficient R programming for some use cases can simply mean trading in your old, heavy, gas\-guzzling hummer function for a lightweight velomobile. The search for optimal performance often has diminishing returns, so it is important to find bottlenecks in your code to prioritise work for maximum increases in computational efficiency. Linking back to R’s notoriety as a flexible language, efficient R programming can be interpreted as finding a solution that is **fast enough** in terms of *computational efficiency* but **as fast as possible** in terms of *programmer efficiency*. After all, you and your co\-workers probably have better and more valuable pastimes outside work, so it is more important for you to get the job done quickly and take the time off for other interesting pursuits.
1\.4 Why efficiency?
--------------------
Computers are always getting more powerful. Does this not reduce the need for efficient computing? The answer is simple: no. In an age of Big Data and stagnating computer clock speeds (see Chapter [8](hardware.html#hardware)), computational bottlenecks are more likely than ever before to hamper your work. An efficient programmer can “solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research” (Visser et al. [2015](#ref-visser_speeding_2015)).
A concrete example illustrates the importance of efficiency in mission critical situations. Robin was working on a tight contract for the UK’s Department for Transport, to build the Propensity to Cycle Tool, an online application which had to be ready for national deployment in less than 4 months. For this work he developed the `line2route()` function in the **stplanr** package, to generate routes via the [cyclestreets.net](http://www.cyclestreets.net/) API.
Hundreds of thousands of routes were needed but, to his dismay, code slowed to a standstill after only a few thousand routes. This endangered the contract. After eliminating other issues and via code profiling (covered in section [7\.2](performance.html#performance-profvis)), it was found that the slowdown was due to a bug in `line2route()`: it suffered from the ‘vector growing problem’, discussed in Section [3\.2\.1](programming.html#memory-allocation).
The solution was simple. A [single commit](https://github.com/ropensci/stplanr/commit/c834abf7d0020c6fbb33845572d6be4801f31f47) made `line2route()` more than *ten times faster* and substantially shorter. This potentially saved the project from failure. The moral of this story is that efficient programming is not merely a desirable skill: it can be *essential*.
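To see why growing a vector is so costly, here is a minimal sketch of the problem (illustrative only; this is not the actual `line2route()` code):
```
n = 1e4
# Slow: x is copied in its entirety at every iteration as it grows
system.time({
  x = NULL
  for (i in 1:n) x = c(x, sqrt(i))
})
# Fast: the full length of x is allocated once, before the loop starts
system.time({
  x = numeric(n)
  for (i in 1:n) x[i] = sqrt(i)
})
```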
There are many concepts and skills that are language agnostic. Much of the knowledge imparted in this book should be relevant to programming in other languages (and other technical activities beyond programming). There are strong reasons for focussing on efficiency in one language, however: in R, simply using replacement functions from a different package can greatly improve efficiency, as discussed in relation to reading in text files in Chapter [5](input-output.html#input-output). This level of detail, with reproducible examples, would not be possible in a general purpose ‘efficient programming’ book.
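As a brief sketch of this point (assuming a large `test.csv` file exists; `fread()` is provided by the **data.table** package), swapping a single function can be the whole optimisation:
```
# Base R reader vs. an optimised replacement from another package
system.time(read.csv("test.csv"))
system.time(data.table::fread("test.csv"))
```
Skills for efficient working that apply beyond R programming are covered in the next section.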
1\.5 Cross\-transferable skills for efficiency
----------------------------------------------
The meaning of ‘efficient R code’, as opposed to generic ‘efficient code’, should be clear from the preceding two sections. However, that does not mean that the skills and concepts covered in this book are not transferable to other languages and non\-programming tasks. Likewise working on these *cross\-transferable* skills will improve your R programming (as well as other aspects of your working life). Two of these skills are especially important: touch typing and use of a consistent style.
### 1\.5\.1 Touch typing
The other side of the efficiency coin is programmer efficiency. There are many things that will help increase the productivity of yourself and your collaborators, not least following the advice of Janert ([2010](#ref-janert2010data)) to ‘think more work less’. The evidence suggests that good diet, physical activity, plenty of sleep and a healthy work\-life balance can all boost your speed and effectiveness at work (Jensen [2011](#ref-jensen2011can); Pereira et al. [2015](#ref-pereira2015impact); Grant, Wallace, and Spurgeon [2013](#ref-grant2013exploration)).
While we recommend that the reader reflect on this evidence and their own well\-being, this is not a self\-help book. It is about programming. However, there is one non\-programming skill that *can* have a huge impact on productivity: touch typing. This skill can be relatively painless to learn, and can have a huge impact on your ability to write, modify and test R code quickly. Learning to touch type properly will pay off in small increments throughout the rest of your programming life (of course, the benefits are not constrained to R programming).
The key difference between a touch typist and someone who constantly looks down at the keyboard, or who uses only two or three fingers for typing, is hand placement. Touch typing involves positioning your hands on the keyboard with each finger of both hands touching or hovering over a specific letter (Figure [1\.1](introduction.html#fig:1-1)). This takes time and some discipline to learn. Fortunately there are many resources that will help you get in the habit of touch typing early, including open source software projects [Klavaro](https://sourceforge.net/projects/klavaro/) and [TypeFaster](https://sourceforge.net/projects/typefaster/).
Figure 1\.1: The starting position for touch typing, with the fingers over the ‘home keys’. Source: [Wikimedia](https://commons.wikimedia.org/wiki/File:QWERTY-home-keys-position.svg) under the Creative Commons license.
### 1\.5\.2 Consistent style and code conventions
Getting into the habit of clear and consistent style when writing anything, be it code or poetry, will have benefits in many other projects, programming or non\-programming. As outlined in Section [9\.2](collaboration.html#coding-style), style is to some extent a personal preference. However, it is worth noting at the outset the conventions we use, in order to maximise readability. Throughout this book we use a consistent set of conventions to refer to code.
* Package names are in bold, e.g. **dplyr**.
* Functions are in a code font, followed by parentheses, like `plot()`, or `median()`.
* Other R objects, such as data or function arguments, are in a code font, without parentheses, like `x` and `name`.
* Occasionally we’ll highlight the package of the function, using two colons, e.g. `microbenchmark::microbenchmark()`.
Note that this notation can be efficient if you only need to use a package’s function once, as it avoids loading the package with `library()`.
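For example, the following runs a one\-off benchmark without attaching the package first:
```
microbenchmark::microbenchmark(runif(100))
```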
The concepts of benchmarking and profiling are not R specific. However, they are done in a particular way in R, as outlined in the next section.
1\.6 Benchmarking and profiling
-------------------------------
Benchmarking and profiling are key to efficient programming, especially in R. Benchmarking is the process of testing the performance of specific operations repeatedly. Profiling involves running many lines of code to find out where bottlenecks lie. Both are vital for understanding efficiency and we use them throughout the book. Their centrality to efficient programming practice means they must be covered in this introductory chapter, despite being seen by many as an intermediate or advanced R programming topic.
In some ways benchmarks can be seen as the building blocks of profiles. Profiling can be understood as automatically running many benchmarks, for every line in a script, and comparing the results line\-by\-line. Because benchmarks are smaller, easier and more modular, we will cover them first.
### 1\.6\.1 Benchmarking
Modifying elements from one benchmark to the next and recording the results after the alteration enables us to determine the fastest piece of code. Benchmarking is important in the efficient programmer’s tool\-kit: you may *think* that your code is faster than mine, but benchmarking allows you to *prove* it. The easiest way to benchmark a function is to use `system.time()`. However, it is important to remember that we are taking a sample: just as we wouldn’t expect a single person in London to be representative of the entire UK population, a single benchmark provides us with a single observation of our function’s behaviour. Therefore, we’ll need to repeat the timing many times with a loop.
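As a minimal sketch (`runif(1e6)` here merely stands in for whatever expression you want to time):
```
# One (noisy) observation of elapsed time
system.time(runif(1e6))
# Repeat the timing to build up a sample of observations
timings = numeric(10)
for (i in 1:10) {
  timings[i] = system.time(runif(1e6))["elapsed"]
}
summary(timings)
```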
An alternative way of benchmarking is via the flexible **microbenchmark** package. This allows us to easily run each function multiple times (by default \\(100\\)), enabling the user to detect microsecond differences in code performance. We then get a convenient summary of the results: the minimum/maximum, lower/upper quartiles and the mean/median times. We suggest focusing on the median time to get a feel for the typical duration, and on the quartiles to understand the variability.
### 1\.6\.2 Benchmarking example
A good example is testing different methods to look up a single value in a data frame. Note that each argument in the benchmark below is a term to be evaluated (for multi\-line benchmarks, the term to be evaluated can be surrounded by curly brackets, `{}`).
```
library("microbenchmark")
df = data.frame(v = 1:4, name = letters[1:4])
microbenchmark(df[3, 2], df[3, "name"], df$name[3])
# Unit: microseconds
# expr min lq mean median uq max neval cld
# df[3, 2] 17.99 18.96 20.16 19.38 19.77 35.14 100 b
# df[3, "name"] 17.97 19.13 21.45 19.64 20.15 74.00 100 b
# df$name[3] 12.48 13.81 15.81 14.48 15.14 67.24 100 a
```
The results summarise how long each query took: the minimum (`min`), lower and upper quartiles (`lq` and `uq`, respectively), mean, median and maximum, over the given number of evaluations (`neval`, with the default value of 100 used in this case). `cld` reports the relative rank of each row in the form of a ‘compact letter display’: in this case `df$name[3]` performs best, with a rank of `a` and a mean time around 25% lower than the other two methods.
When using `microbenchmark()`, you should pay careful attention to the units. In the above example, each function call takes approximately 20 *microseconds*, implying around 50,000 function calls could be done in a second. When comparing quick functions, the standard units are:
* milliseconds (ms), one thousand function calls take a second;
* microseconds (\\(\\mu\\)s), one million function calls take a second;
* nanoseconds (ns), one billion function calls take a second.
We can set the units we want to use with the `unit` argument, e.g. the results are reported
in seconds if we set `unit = "s"`.
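For example, to report the data frame look\-ups from above in seconds:
```
microbenchmark(df[3, 2], df[3, "name"], df$name[3], unit = "s")
```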
When thinking about computational efficiency, there are (at least) two measures:
* Relative time: `df$name[3]` is 25% faster than `df[3, "name"]`;
* Absolute time: `df$name[3]` is 5 microseconds faster than `df[3, "name"]`.
Both measures are useful, but it’s important not to forget the underlying time scale: it makes little sense to optimise a function that takes *microseconds* to complete if there are operations that take *seconds* to complete elsewhere in your code.
### 1\.6\.3 Profiling
Benchmarking generally tests the execution time of one function against another. Profiling, on the other hand, is about testing large chunks of code.
It is difficult to over\-emphasise the importance of profiling for efficient R programming. Without a profile of what took longest, you will have only a vague idea of why your code is taking so long to run. The example below (which generates Figure [1\.3](introduction.html#fig:1-3), an image of ice\-sheet retreat from 1985 to 2015\) shows how profiling can be used to identify bottlenecks in your R scripts:
```
library("profvis")
profvis(expr = {
  # Stage 1: load packages
  # library("rnoaa") # not necessary as data pre-saved
  library("ggplot2")
  # Stage 2: load and process data
  out = readRDS("extdata/out-ice.Rds")
  # rbind_all() is defunct in recent dplyr; bind_rows() is its replacement
  df = dplyr::bind_rows(out, .id = "Year")
  # Stage 3: visualise output
  ggplot(df, aes(long, lat, group = paste(group, Year))) +
    geom_path(aes(colour = Year))
  ggsave("figures/icesheet-test.png")
}, interval = 0.01, prof_output = "ice-prof")
```
The results of this profiling exercise are displayed in Figure [1\.2](introduction.html#fig:1-2).
Figure 1\.2: Profiling results of loading and plotting NASA data on icesheet retreat.
Figure 1\.3: Visualisation of North Pole icesheet decline, generated using the code profiled using the profvis package.
For more information about profiling and benchmarking, please refer to the [Optimising code](http://adv-r.had.co.nz/Profiling.html) chapter in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)), and Section [7\.2](performance.html#performance-profvis) in this book. We recommend reading these additional resources while performing benchmarks and profiles on your own code, for example, based on the exercises below.
#### 1\.6\.3\.1 Exercises
Consider the following benchmark to evaluate different functions for calculating the cumulative sum of the whole numbers from 1 to 100:
```
x = 1:100 # initiate vector to cumulatively sum
# Method 1: with a for loop (10 lines)
cs_for = function(x) {
  for (i in x) {
    if (i == 1) {
      xc = x[i]
    } else {
      xc = c(xc, sum(x[1:i]))
    }
  }
  xc
}
# Method 2: with apply (3 lines)
cs_apply = function(x) {
  sapply(x, function(x) sum(1:x))
}
# Method 3: cumsum (1 line, not shown)
microbenchmark(cs_for(x), cs_apply(x), cumsum(x))
#> Unit: nanoseconds
#> expr min lq mean median uq max neval
#> cs_for(x) 112700 122616 183260 127327 134686 5416221 100
#> cs_apply(x) 83651 87048 121638 93694 103208 2505961 100
#> cumsum(x) 672 790 1142 929 1036 19960 100
```
1. Which method is fastest and how many times faster is it?
2. Run the same benchmark, but with the results reported in seconds, on a vector of all the whole numbers from 1 to 50,000\. Hint: also use the argument `times = 1` so that each command is only run once, ensuring that the benchmark completes (even with a single evaluation it may take a minute or more, depending on your system). Does the *relative* time difference increase or decrease? By how much?
3. Test how long the different methods for subsetting the data frame `df`, presented in Section [1\.6\.2](introduction.html#benchmarking-example), take on your computer. Is it faster or slower at subsetting than the computer on which this book was compiled?
4. Use `system.time()` and a `for()` loop to test how long it takes to perform the subsetting operation 50,000 times. Before testing this, do you think it will be more or less than 1 second, for each subsetting method? Hint: the test for the first method is shown below:
```
# Test how long it takes to subset the data frame 50,000 times:
system.time(
  for (i in 1:50000) {
    df[3, 2]
  }
)
```
5. Bonus exercise: try profiling a section of code you have written using **profvis**. Where are the bottlenecks? Were they where you expected?
1\.7 Book resources
-------------------
### 1\.7\.1 R package
This book has an associated R package that contains datasets and functions referenced in the book. The package is hosted on [GitHub](https://github.com/csgillespie/efficient) and can be installed using the **devtools** package:
```
devtools::install_github("csgillespie/efficient", build_vignettes = TRUE, dependencies = TRUE)
```
The package also contains solutions (as vignettes) to the exercises found in this book. They can be browsed with the following command:
```
browseVignettes(package = "efficient")
```
The following command will install all packages used to generate this book:
```
devtools::install_github("csgillespie/efficientR")
```
### 1\.7\.2 Online version
We are grateful to O’Reilly Press for allowing us to develop this book [online](https://csgillespie.github.io/efficientR/). The online version constitutes a substantial additional resource to supplement this book, and will continue to evolve between reprints of the physical book. The book’s code also represents a substantial learning opportunity in itself, as it was written using R Markdown and the **bookdown** package, allowing us to run the R code each time we compile the book to ensure that it works, and allowing others to contribute to its ongoing development.
To edit this chapter, for example, simply navigate to [github.com/csgillespie/efficientR/edit/master/01\-introduction.Rmd](https://github.com/csgillespie/efficientR/edit/master/01-introduction.Rmd) while logged into a [GitHub account](https://help.github.com/articles/signing-up-for-a-new-github-account/). The full source of the book is available at <https://github.com/csgillespie/efficientR> where we welcome comments/questions on the [Issue Tracker](https://github.com/csgillespie/efficientR/issues) and Pull Requests.
> In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.
The second, broader definition of efficient computing is *programmer productivity*. This is the amount of *useful* work a *person* (not a computer) can do per unit time. It may be possible to rewrite your code base in C to make it \\(100\\) times faster. But if this takes \\(100\\) human hours it may not be worth it. Computers can chug away day and night. People cannot. Human productivity is the subject of Chapter [4](workflow.html#workflow).
By the end of this book you should know how to write code that is efficient from both *algorithmic* and *productivity* perspectives. Efficient code is also concise, elegant and easy to maintain, vital when working on large projects. But this raises the wider question: what is different about efficient R code compared with efficient code in any other language.
1\.3 What is efficient R programming?
-------------------------------------
The issue flagged by Ada of having a ‘great variety’ of ways to solve a problem is key to understanding how efficient R programming differs from efficient programming in other languages. R is notorious for allowing users to solve problems in many ways. This is due to R’s inherent flexibility, in which almost “anything can be modified after it is created” (H. Wickham [2014](#ref-Wickham2014)[a](#ref-Wickham2014)). R’s inventors, Ross Ihaka and Robert Gentleman, designed it to be this way: a cell in a data frame can be selected in multiple ways in *base R* alone (three of which are illustrated later in this Chapter, in Section [1\.6\.2](introduction.html#benchmarking-example)). This is useful, allowing programmers to use the language as best suits their needs, but can be confusing for people looking for the ‘right’ way of doing things and can cause inefficiencies if you don’t understand the language well.
R’s notoriety for being able to solve a problem in multiple different ways has grown with the proliferation of community contributed packages. In this book we focus on the *best* way of solving problems, from an efficiency perspective. Often it is instructive to discover *why* a certain way of doing things is faster than others. However, if your aim is simply to *get stuff done*, you only need to know what is likely to be the most efficient way. In this way R’s flexibility can be inefficient: although it is likely easier to find *a* way of solving any given problem in R than other languages, solving the problem with R may make it harder to find *the best* way to solve that problem, as there are so many. This book tackles this issue head\-on by recommending what we believe are the most efficient approaches. We hope you trust our views, based on years of R using and teaching, but we also hope that you challenge them at times and test them, with benchmarks, if you suspect there’s a better way of doing things (thanks to R’s flexibility and ability to interface with other languages there may well be).
It is well known that R code can promote *algorithmic efficiency* compared with low level languages for certain tasks, especially if the code was written by someone who doesn’t fully understand the language. But it is worth highlighting the numerous ways that R *encourages* and *guides* efficiency, especially programmer efficiency:
* R is not compiled but it calls compiled code. This means that you get the best of both worlds: R thankfully removes the laborious stage of compiling your code before being able to run it, but provides impressive speed gains by calling compiled C, Fortran and other languages behind the scenes.
* R is a functional and object orientated language (H. Wickham [2014](#ref-Wickham2014)[a](#ref-Wickham2014)). This means that it is possible to write complex and flexible functions in R that get a huge amount of work done with a single line of code.
* R uses RAM for memory. This may seem obvious but it’s worth saying: RAM is much faster than any hard disk system. Compared with databases, R is therefore very fast at common data manipulation, processing and modelling operations. RAM is now cheaper than ever, meaning the potential downside of this feature is further away than ever.
* R is supported by excellent Integrated Development Environments (IDEs). The environment in which you program can have a huge impact on *programmer efficiency* as it can provide supporting help quickly, allow for interactive plotting, and allow your R projects to be tightly integrated with other aspects of your project such as file management, version management and interactive visualisation systems, as discussed in [2\.5](set-up.html#rstudio).
* R has a strong user community. This boosts efficiency because if you encounter a problem that has not yet been solved, you can simply ask the community. If it is a new, clearly stated and reproducible question asked on a popular forum such as [StackOverflow](http://stackoverflow.com/questions/tagged/r) or an appropriate [R list](https://www.r-project.org/mail.html), you are likely to get a response from an accomplished R programmer within minutes. The obvious benefit of this crowd\-sourced support system is that the efficiency benefits of the answer will from that moment be available to everyone.
Efficient R programming is the implementation of efficient programming practices in R. All languages are different, so efficient R code does not look like efficient code in another language. Many packages have been optimised for performance so, for some operations, achieving maximum computational efficiency may simply be a case of selecting the appropriate package and using it correctly. There are many ways to get the same result in R, and some are very slow. Therefore *not* writing slow code should be prioritized over writing fast code.
Returning to the analogy of the two cars sketched in the preface, efficient R programming for some use cases can simply mean trading in your old, heavy, and gas guzzling hummer function for a lightweight velomobile. The search for optimal performance often has diminishing returns so it is important to find bottlenecks in your code to prioritise work for maximum increases in computational efficiency. Linking back to R’s notoriety as a flexible language, efficient R programming can be interpretted as finding a solution that is **fast enough** in terms of *computational efficiency* but **as fast as possible** in terms of *programmer efficiency*. After all, you and your co\-workers probably have better and more valuable pastimes outside work so it is more important for you to get the job done quickly and take the time off for other interesting pursuits.
1\.4 Why efficiency?
--------------------
Computers are always getting more powerful. Does this not reduce the need for efficient computing? The answer is simple: no. In an age of Big Data and stagnating computer clock speeds (see Chapter [8](hardware.html#hardware)), computational bottlenecks are more likely than ever before to hamper your work. An efficient programmer can “solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research” (Visser et al. [2015](#ref-visser_speeding_2015)).
A concrete example illustrates the importance of efficiency in mission critical situations. Robin was working on a tight contract for the UK’s Department for Transport, to build the Propensity to Cycle Tool, an online application which had to be ready for national deployment in less than 4 months. For this work he developed the function, `line2route()` in the **stplanr** package, to generate routes via the [cyclestreets.net](http://www.cyclestreets.net/) API.
Hundreds of thousands of routes were needed but, to his dismay, code slowed to a standstill after only a few thousand routes. This endangered the contract. After eliminating other issues and via code profiling (covered in section [7\.2](performance.html#performance-profvis)), it was found that the slowdown was due to a bug in `line2route()`: it suffered from the ‘vector growing problem’, discussed in Section [3\.2\.1](programming.html#memory-allocation).
The solution was simple. A [single commit](https://github.com/ropensci/stplanr/commit/c834abf7d0020c6fbb33845572d6be4801f31f47) made `line2route()` more than *ten times faster* and substantially shorter. This potentially saved the project from failure. The moral of this story is that efficient programming is not merely a desirable skill: it can be *essential*.
There are many concepts and skills that are language agnostic. Much of the knowledge imparted in this book should be relevant to programming in other languages (and other technical activities beyond programming). There are strong reasons for focussing on efficiency in one language, however in R simply using replacement functions from a different package can greatly improve efficiency, as discussed in relation to reading in text files Chapter [5](input-output.html#input-output). This level of detail, with reproducible examples, would not be possible in a general purpose ‘efficient programming’ book. Skills for efficient working, that apply beyond R programming, are covered in the next section.
1\.5 Cross\-transferable skills for efficiency
----------------------------------------------
The meaning of ‘efficient R code’, as opposed to generic ‘efficient code’, should be clear from the preceding two sections. However, that does not mean that the skills and concepts covered in this book are not transferable to other languages and non\-programming tasks. Likewise working on these *cross\-transferable* skills will improve your R programming (as well as other aspects of your working life). Two of these skills are especially important: touch typing and use of a consistent style.
### 1\.5\.1 Touch typing
The other side of the efficiency coin is programmer efficiency. There are many things that will help increase the productivity of yourself and your collaborators, not least following the advice of Janert ([2010](#ref-janert2010data)) to ‘think more work less’. The evidence suggests that good diet, physical activity, plenty of sleep and a healthy work\-life balance can all boost your speed and effectiveness at work (Jensen [2011](#ref-jensen2011can); Pereira et al. [2015](#ref-pereira2015impact); Grant, Wallace, and Spurgeon [2013](#ref-grant2013exploration)).
While we recommend the reader to reflect on this evidence and their own well\-being, this is not a self help book. It is about programming. However, there is one non\-programming skill that *can* have a huge impact on productivity: touch typing. This skill can be relatively painless to learn, and can have a huge impact on your ability to write, modify and test R code quickly. Learning to touch type properly will pay off in small increments throughout the rest of your programming life (of course, the benefits are not constrained to R programming).
The key difference between a touch typist and someone who constantly looks down at the keyboard, or who uses only two or three fingers for typing, is hand placement. Touch typing involves positioning your hands on the keyboard with each finger of both hands touching or hovering over a specific letter (Figure [1\.1](introduction.html#fig:1-1)). This takes time and some discipline to learn. Fortunately there are many resources that will help you get in the habit of touch typing early, including open source software projects [Klavaro](https://sourceforge.net/projects/klavaro/) and [TypeFaster](https://sourceforge.net/projects/typefaster/).
Figure 1\.1: The starting position for touch typing, with the fingers over the ‘home keys’. Source: [Wikimedia](https://commons.wikimedia.org/wiki/File:QWERTY-home-keys-position.svg) under the Creative Commons license.
### 1\.5\.2 Consistent style and code conventions
Getting into the habit of clear and consistent style when writing anything, be it code or poetry, will have benefits in many other projects, programming or non\-programming. As outlined in Section [9\.2](collaboration.html#coding-style), style is to some extent a personal preference. However, it is worth noting at the outset the conventions we use, in order to maximise readability. Throughout this book we use a consistent set of conventions to refer to code.
* Package names are in bold, e.g. **dplyr**.
* Functions are in a code font, followed by parentheses, like `plot()`, or `median()`.
* Other R objects, such as data or function arguments, are in a code font, without parentheses, like `x` and `name`.
* Occasionally we’ll highlight the package of the function, using two colons, e.g. `microbenchmark::microbenchmark()`.
Note, this notation can be efficient if you only need to use a package’s function once, as it avoids loading the package with `library()`.
The concepts of benchmarking and profiling are not R specific. However, they are done in a particular way in R, as outlined in the next section.
### 1\.5\.1 Touch typing
The other side of the efficiency coin is programmer efficiency. There are many things that will help increase the productivity of yourself and your collaborators, not least following the advice of Janert ([2010](#ref-janert2010data)) to ‘think more work less’. The evidence suggests that good diet, physical activity, plenty of sleep and a healthy work\-life balance can all boost your speed and effectiveness at work (Jensen [2011](#ref-jensen2011can); Pereira et al. [2015](#ref-pereira2015impact); Grant, Wallace, and Spurgeon [2013](#ref-grant2013exploration)).
While we recommend the reader to reflect on this evidence and their own well\-being, this is not a self help book. It is about programming. However, there is one non\-programming skill that *can* have a huge impact on productivity: touch typing. This skill can be relatively painless to learn, and can have a huge impact on your ability to write, modify and test R code quickly. Learning to touch type properly will pay off in small increments throughout the rest of your programming life (of course, the benefits are not constrained to R programming).
The key difference between a touch typist and someone who constantly looks down at the keyboard, or who uses only two or three fingers for typing, is hand placement. Touch typing involves positioning your hands on the keyboard with each finger of both hands touching or hovering over a specific letter (Figure [1\.1](introduction.html#fig:1-1)). This takes time and some discipline to learn. Fortunately there are many resources that will help you get in the habit of touch typing early, including open source software projects [Klavaro](https://sourceforge.net/projects/klavaro/) and [TypeFaster](https://sourceforge.net/projects/typefaster/).
Figure 1\.1: The starting position for touch typing, with the fingers over the ‘home keys’. Source: [Wikimedia](https://commons.wikimedia.org/wiki/File:QWERTY-home-keys-position.svg) under the Creative Commons license.
### 1\.5\.2 Consistent style and code conventions
Getting into the habit of clear and consistent style when writing anything, be it code or poetry, will have benefits in many other projects, programming or non\-programming. As outlined in Section [9\.2](collaboration.html#coding-style), style is to some extent a personal preference. However, it is worth noting at the outset the conventions we use, in order to maximise readability. Throughout this book we use a consistent set of conventions to refer to code.
* Package names are in bold, e.g. **dplyr**.
* Functions are in a code font, followed by parentheses, like `plot()`, or `median()`.
* Other R objects, such as data or function arguments, are in a code font, without parentheses, like `x` and `name`.
* Occasionally we’ll highlight the package of the function, using two colons, e.g. `microbenchmark::microbenchmark()`.
Note, this notation can be efficient if you only need to use a package’s function once, as it avoids loading the package with `library()`.
The concepts of benchmarking and profiling are not R specific. However, they are done in a particular way in R, as outlined in the next section.
1\.6 Benchmarking and profiling
-------------------------------
Benchmarking and profiling are key to efficient programming, especially in R. Benchmarking is the process of testing the performance of specific operations repeatedly. Profiling involves running many lines of code to find out where bottlenecks lie. Both are vital for understanding efficiency and we use them throughout the book. Their centrality to efficient programming practice means they must be covered in this introductory chapter, despite being seen by many as an intermediate or advanced R programming topic.
In some ways benchmarks can be seen as the building blocks of profiles. Profiling can be understood as automatically running many benchmarks, for every line in a script, and comparing the results line\-by\-line. Because benchmarks are smaller, easier and more modular, we will cover them first.
### 1\.6\.1 Benchmarking
Modifying elements from one benchmark to the next and recording the results after the alteration enables us to determine the fastest piece of code. Benchmarking is important in the efficient programmer’s tool\-kit: you may *think* that your code is faster than mine but benchmarking allows you to *prove* it. The easiest way to benchmark a function is to use `system.time()`. However it is important to remember that we are taking a sample. We wouldn’t expect a single person in London to be representative of the entire UK population, similarly, a single benchmark provides us with a single observation on our functions behaviour. Therefore, we’ll need to repeat the timing many times with a loop.
An alternative way of benchmarking, is via the flexible **microbenchmark** package. This allows us to easily run each function multiple times (by default \\(100\\)), enabling the user to detect microsecond differences in code performance. We then get a convenient summary of the results: the minimum/maximum, lower/upper quartiles and the mean/median times. We suggest focusing on the median time to get a feel for the standard time and the quartiles to understand the variability.
### 1\.6\.2 Benchmarking example
A good example is testing different methods to look\-up a single value in a data frame. Note that each argument in the benchmark below is a term to be evaluated (for multi\-line benchmarks, the term to be evaluated can be surrounded by curly brackets, `{}`).
```
library("microbenchmark")
df = data.frame(v = 1:4, name = letters[1:4])
microbenchmark(df[3, 2], df[3, "name"], df$name[3])
# Unit: microseconds
# expr min lq mean median uq max neval cld
# df[3, 2] 17.99 18.96 20.16 19.38 19.77 35.14 100 b
# df[3, "name"] 17.97 19.13 21.45 19.64 20.15 74.00 100 b
# df$name[3] 12.48 13.81 15.81 14.48 15.14 67.24 100 a
```
The results summarise how long each query took: the minimum (`min`), lower and upper quartiles (`lq` and `uq`, respectively) and the mean, median and maximum, for each of the number of evaluations (`neval`, with the default value of 100 used in this case). `cld` reports the relative rank of each row in the form of ‘compact letter display’: in this case `df$name[3]` performs best, with a rank of `a` and a mean time around 25% lower than the other two functions.
When using `microbenchmark()`, you should pay careful attention to the units. In the above example, each function call takes approximately 20 *microseconds*, implying around 50,000 function calls could be done in a second. When comparing quick functions, the standard units are:
* milliseconds (ms), one thousand function calls take a second;
* microseconds (\\(\\mu\\)s), one million function calls take a second;
* nanoseconds (ns), one billion function calls take a second.
We can set the units we want to use with the `unit` argument, e.g. the results are reported
in seconds if we set `unit = "s"`.
When thinking about computational efficiency, there are (at least) two measures:
* Relative time: `df$name[3]` is 25% faster than `df[3, "name"]`;
* Absolute time: `df$name[3]` is 5 microseconds faster than `df[3, "name"]`.
Both measures are useful, but its important not to forget the underlying
time scale: it makes little sense to optimise a function that takes *microseconds* to complete if there are operations that take *seconds* to complete in your code.
### 1\.6\.3 Profiling
Benchmarking generally tests the execution time of one function against another. Profiling, on the other hand, is about testing large chunks of code.
It is difficult to over\-emphasise the importance of profiling for efficient R programming. Without a profile of what took longest, you will have only a vague idea of why your code is taking so long to run. The example below (which generates Figure [1\.3](introduction.html#fig:1-3), an image of ice\-sheet retreat from 1985 to 2015\) shows how profiling can be used to identify bottlenecks in your R scripts:
```
library("profvis")
profvis(expr = {
# Stage 1: load packages
# library("rnoaa") # not necessary as data pre-saved
library("ggplot2")
# Stage 2: load and process data
out = readRDS("extdata/out-ice.Rds")
df = dplyr::rbind_all(out, id = "Year")
# Stage 3: visualise output
ggplot(df, aes(long, lat, group = paste(group, Year))) +
geom_path(aes(colour = Year))
ggsave("figures/icesheet-test.png")
}, interval = 0.01, prof_output = "ice-prof")
```
The result of this profiling exercise are displayed in Figure [1\.2](introduction.html#fig:1-2).
Figure 1\.2: Profiling results of loading and plotting NASA data on icesheet retreat.
Figure 1\.3: Visualisation of North Pole icesheet decline, generated using the code profiled using the profvis package.
For more information about profiling and benchmarking, please refer to the [Optimising code](http://adv-r.had.co.nz/Profiling.html) chapter in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)), and Section [7\.2](performance.html#performance-profvis) in this book. We recommend reading these additional resources while performing benchmarks and profiles on your own code, for example, based on the exercises below.
#### 1\.6\.3\.1 Exercises
Consider the following benchmark to evaluate different functions for calculating the cumulative sum of the whole numbers from 1 to 100:
```
x = 1:100 # initiate vector to cumulatively sum
# Method 1: with a for loop (10 lines)
cs_for = function(x) {
for (i in x) {
if (i == 1) {
xc = x[i]
} else {
xc = c(xc, sum(x[1:i]))
}
}
xc
}
# Method 2: with apply (3 lines)
cs_apply = function(x) {
sapply(x, function(x) sum(1:x))
}
# Method 3: cumsum (1 line, not shown)
microbenchmark(cs_for(x), cs_apply(x), cumsum(x))
#> Unit: nanoseconds
#> expr min lq mean median uq max neval
#> cs_for(x) 112700 122616 183260 127327 134686 5416221 100
#> cs_apply(x) 83651 87048 121638 93694 103208 2505961 100
#> cumsum(x) 672 790 1142 929 1036 19960 100
```
1. Which method is fastest and how many times faster is it?
2. Run the same benchmark, but with the results reported in seconds, on a vector of all the whole numbers from 1 to 50,000\. Hint: also use the argument `times = 1` so that each command is only run once to ensure the results complete (even with a single evaluation the benchmark may take up to or more than a minute to complete, depending on your system). Does the *relative* time difference increase or decrease? By how much?
3. Test how long the different methods for subsetting the data frame `df`, presented in Section [1\.6\.2](introduction.html#benchmarking-example), take on your computer. Is it faster or slower at subsetting than the computer on which this book was compiled?
4. Use `system.time()` and a `for()` loop to test how long it takes to perform the subsetting operation 50,000 times. Before testing this, do you think it will be more or less than 1 second, for each subsetting method? Hint: the test for the first method is shown below:
```
# Test how long it takes to subset the data frame 50,000 times:
system.time(
for (i in 1:50000) {
df[3, 2]
}
)
```
5. Bonus exercise: try profiling a section of code you have written using **profvis**. Where are the bottlenecks? Were they where you expected?
### 1\.6\.1 Benchmarking
Modifying elements from one benchmark to the next and recording the results after the alteration enables us to determine the fastest piece of code. Benchmarking is important in the efficient programmer’s tool\-kit: you may *think* that your code is faster than mine but benchmarking allows you to *prove* it. The easiest way to benchmark a function is to use `system.time()`. However it is important to remember that we are taking a sample. We wouldn’t expect a single person in London to be representative of the entire UK population, similarly, a single benchmark provides us with a single observation on our functions behaviour. Therefore, we’ll need to repeat the timing many times with a loop.
An alternative way of benchmarking, is via the flexible **microbenchmark** package. This allows us to easily run each function multiple times (by default \\(100\\)), enabling the user to detect microsecond differences in code performance. We then get a convenient summary of the results: the minimum/maximum, lower/upper quartiles and the mean/median times. We suggest focusing on the median time to get a feel for the standard time and the quartiles to understand the variability.
### 1\.6\.2 Benchmarking example
A good example is testing different methods to look\-up a single value in a data frame. Note that each argument in the benchmark below is a term to be evaluated (for multi\-line benchmarks, the term to be evaluated can be surrounded by curly brackets, `{}`).
```
library("microbenchmark")
df = data.frame(v = 1:4, name = letters[1:4])
microbenchmark(df[3, 2], df[3, "name"], df$name[3])
# Unit: microseconds
# expr min lq mean median uq max neval cld
# df[3, 2] 17.99 18.96 20.16 19.38 19.77 35.14 100 b
# df[3, "name"] 17.97 19.13 21.45 19.64 20.15 74.00 100 b
# df$name[3] 12.48 13.81 15.81 14.48 15.14 67.24 100 a
```
The results summarise how long each query took: the minimum (`min`), lower and upper quartiles (`lq` and `uq`, respectively) and the mean, median and maximum, for each of the number of evaluations (`neval`, with the default value of 100 used in this case). `cld` reports the relative rank of each row in the form of ‘compact letter display’: in this case `df$name[3]` performs best, with a rank of `a` and a mean time around 25% lower than the other two functions.
When using `microbenchmark()`, you should pay careful attention to the units. In the above example, each function call takes approximately 20 *microseconds*, implying around 50,000 function calls could be done in a second. When comparing quick functions, the standard units are:
* milliseconds (ms), one thousand function calls take a second;
* microseconds (\\(\\mu\\)s), one million function calls take a second;
* nanoseconds (ns), one billion function calls take a second.
We can set the units we want to use with the `unit` argument, e.g. the results are reported
in seconds if we set `unit = "s"`.
When thinking about computational efficiency, there are (at least) two measures:
* Relative time: `df$name[3]` is 25% faster than `df[3, "name"]`;
* Absolute time: `df$name[3]` is 5 microseconds faster than `df[3, "name"]`.
Both measures are useful, but its important not to forget the underlying
time scale: it makes little sense to optimise a function that takes *microseconds* to complete if there are operations that take *seconds* to complete in your code.
### 1\.6\.3 Profiling
Benchmarking generally tests the execution time of one function against another. Profiling, on the other hand, is about testing large chunks of code.
It is difficult to over\-emphasise the importance of profiling for efficient R programming. Without a profile of what took longest, you will have only a vague idea of why your code is taking so long to run. The example below (which generates Figure [1\.3](introduction.html#fig:1-3), an image of ice\-sheet retreat from 1985 to 2015\) shows how profiling can be used to identify bottlenecks in your R scripts:
```
library("profvis")
profvis(expr = {
# Stage 1: load packages
# library("rnoaa") # not necessary as data pre-saved
library("ggplot2")
# Stage 2: load and process data
out = readRDS("extdata/out-ice.Rds")
df = dplyr::rbind_all(out, id = "Year")
# Stage 3: visualise output
ggplot(df, aes(long, lat, group = paste(group, Year))) +
geom_path(aes(colour = Year))
ggsave("figures/icesheet-test.png")
}, interval = 0.01, prof_output = "ice-prof")
```
The result of this profiling exercise are displayed in Figure [1\.2](introduction.html#fig:1-2).
Figure 1\.2: Profiling results of loading and plotting NASA data on icesheet retreat.
Figure 1\.3: Visualisation of North Pole icesheet decline, generated using the code profiled using the profvis package.
For more information about profiling and benchmarking, please refer to the [Optimising code](http://adv-r.had.co.nz/Profiling.html) chapter in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)), and Section [7\.2](performance.html#performance-profvis) in this book. We recommend reading these additional resources while performing benchmarks and profiles on your own code, for example, based on the exercises below.
#### 1\.6\.3\.1 Exercises
Consider the following benchmark to evaluate different functions for calculating the cumulative sum of the whole numbers from 1 to 100:
```
x = 1:100 # initiate vector to cumulatively sum
# Method 1: with a for loop (10 lines)
cs_for = function(x) {
for (i in x) {
if (i == 1) {
xc = x[i]
} else {
xc = c(xc, sum(x[1:i]))
}
}
xc
}
# Method 2: with apply (3 lines)
cs_apply = function(x) {
sapply(x, function(x) sum(1:x))
}
# Method 3: cumsum (1 line, not shown)
microbenchmark(cs_for(x), cs_apply(x), cumsum(x))
#> Unit: nanoseconds
#> expr min lq mean median uq max neval
#> cs_for(x) 112700 122616 183260 127327 134686 5416221 100
#> cs_apply(x) 83651 87048 121638 93694 103208 2505961 100
#> cumsum(x) 672 790 1142 929 1036 19960 100
```
1. Which method is fastest and how many times faster is it?
2. Run the same benchmark, but with the results reported in seconds, on a vector of all the whole numbers from 1 to 50,000\. Hint: also use the argument `times = 1` so that each command is only run once to ensure the results complete (even with a single evaluation the benchmark may take up to or more than a minute to complete, depending on your system). Does the *relative* time difference increase or decrease? By how much?
3. Test how long the different methods for subsetting the data frame `df`, presented in Section [1\.6\.2](introduction.html#benchmarking-example), take on your computer. Is it faster or slower at subsetting than the computer on which this book was compiled?
4. Use `system.time()` and a `for()` loop to test how long it takes to perform the subsetting operation 50,000 times. Before testing this, do you think it will be more or less than 1 second, for each subsetting method? Hint: the test for the first method is shown below:
```
# Test how long it takes to subset the data frame 50,000 times:
system.time(
for (i in 1:50000) {
df[3, 2]
}
)
```
5. Bonus exercise: try profiling a section of code you have written using **profvis**. Where are the bottlenecks? Were they where you expected?
1\.7 Book resources
-------------------
### 1\.7\.1 R package
This book has an associated R package that contains datasets and functions referenced in the book. The package is hosted on [github](https://github.com/csgillespie/efficient) and can be installed using the **devtools** package:
```
devtools::install_github("csgillespie/efficient", build_vignettes = TRUE, dependencies = TRUE)
```
The package also contains solutions (as vignettes) to the exercises found in this book. They can be browsed with the following command:
```
browseVignettes(package = "efficient")
```
The following command will install all packages used to generate this book:
```
devtools::install_github("csgillespie/efficientR")
```
### 1\.7\.2 Online version
We are grateful to O’Reilly Press for allowing us to develop this book [online](https://csgillespie.github.io/efficientR/). The online version constitutes a substantial additional resource to supplement this book, and will continue to evolve in between reprints of the physical book. The book’s code also represents a substantial learning opportunity in itself as it was written using R Markdown and the **bookdown** package, allowing us to run the R code each time we compile the book to ensure that it works, and allowing others to contribute to its future longevity.
To edit this chapter, for example, simply navigate to [github.com/csgillespie/efficientR/edit/master/01\-introduction.Rmd](https://github.com/csgillespie/efficientR/edit/master/01-introduction.Rmd) while logged into a [GitHub account](https://help.github.com/articles/signing-up-for-a-new-github-account/). The full source of the book is available at <https://github.com/csgillespie/efficientR> where we welcome comments/questions on the [Issue Tracker](https://github.com/csgillespie/efficientR/issues) and Pull Requests.
2 Efficient set\-up
===================
An efficient computer set\-up is analogous to a well\-tuned vehicle. Its components work in harmony. It is well\-serviced. It’s fast!
This chapter describes the set\-up that will enable a productive workflow. It explores how the operating system, R version, startup files and IDE can make your R work faster. Understanding and at times changing these set\-up options can have many knock\-on benefits. That’s why we cover them at this early stage (hardware is covered in Chapter [8](hardware.html#hardware)). By the end of this chapter you should understand how to set\-up your computer and R installation for optimal efficiency. It covers the following topics:
* R and the operating systems: system monitoring on Linux, Mac and Windows
* R version: how to keep your base R installation and packages up\-to\-date
* R start\-up: how and why to adjust your `.Rprofile` and `.Renviron` files
* RStudio: an integrated development environment (IDE) to boost your programming productivity
* BLAS and alternative R interpreters: looks at ways to make R faster
Efficient programming is more than a series of tips: there is no substitute for in\-depth understanding. However, to help remember the key messages buried within the detail of this book, each chapter from now on contains a ‘top 5 tips’ section, after the pre\-requisites.
### Prerequisites
Only one package needs to be installed to run the code in this chapter:
```
library("benchmarkme")
```
2\.1 Top 5 tips for an efficient R set\-up
------------------------------------------
1. Use system monitoring to identify bottlenecks in your hardware/code.
2. Keep your R installation and packages up\-to\-date.
3. Make use of RStudio’s powerful autocompletion capabilities and shortcuts.
4. Store API keys in the `.Renviron` file.
5. Use BLAS if your R number crunching is too slow.
2\.2 Operating system
---------------------
R supports all three major operating system (OS) types: Linux, Mac and Windows.[1](#fn1) R is platform\-independent, although there are some OS\-specific quirks, e.g. in relation to file path notation (see Section [2\.4\.3](set-up.html#location)).
Basic OS\-specific information can be queried from within R using `Sys.info()`:
```
Sys.info()
#R> sysname release machine user
#R> "Linux" "4.2.0-35-generic" "x86_64" "robin"
```
Translated into English, the above output means that R is running on a 64 bit (`x86_64`) Linux distribution (`4.2.0-35-generic` is the Linux version) and that the current user is `robin`. Four other pieces of information (not shown) are also produced by the command, the meaning of which is well documented in a help file revealed by entering `?Sys.info` in the R console.
The **assertive.reflection** package can be used to report additional information about your computer’s operating system and R set\-up with functions for asserting operating system and other system characteristics. The `assert_*()` functions work by testing the truth of the statement and erroring if the statement is untrue. On a Linux system `assert_is_linux()` will run silently, whereas `assert_is_windows()` will cause an error. The package can also test for the IDE you are using (e.g. `assert_is_rstudio()`), the capabilities of R (`assert_r_has_libcurl_capability()` etc.), and what OS tools are available (e.g. `assert_r_can_compile_code()`). These functions can be useful for running code that is designed only to run on one type of set\-up.
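The following calls illustrate the pattern (a sketch using the functions named above; the package must be installed first):
```
library("assertive.reflection")
assert_is_linux()   # silent on Linux; raises an error on other OSs
assert_is_rstudio() # raises an error if R is not running in RStudio
```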
### 2\.2\.1 Operating system and resource monitoring
Minor differences aside, R’s computational efficiency is broadly the same across different operating systems.[2](#fn2)
Beyond the \\(32\\) vs \\(64\\) bit issue (covered in the next chapter) and *process forking* (covered in Chapter [7](performance.html#performance)), another OS\-related issue to consider is external dependencies: programs that R packages depend on. Sometimes external package dependencies must be installed manually (i.e. not using `install.packages()`). This is especially common with Unix\-based systems (Linux and Mac). On Debian\-based operating systems such as Ubuntu, many R packages can be installed at the OS level, to ensure external dependencies are also installed (see Section [2\.3\.4](set-up.html#deps)).
Resource monitoring is the process of checking the status of key OS variables. For computationally intensive work, it is sensible to monitor system resources in this way. Resource monitoring can help identify computational bottlenecks. Alongside R profiling functions such as **profvis** (see Section [7\.2](performance.html#performance-profvis)), system monitoring provides a useful tool for understanding how R is performing in relation to variables reporting the OS state, such as how much RAM is in use, which relates to the wider question of whether more is needed (covered in Chapter [3](programming.html#programming)).
CPU resource allocated over time is another common OS variable that is worth monitoring. A basic use case is to check whether your code is running in parallel (see Figure [2\.1](set-up.html#fig:2-1)), and whether there is spare CPU capacity on the OS that could be harnessed by parallel code.
Figure 2\.1: Output from a system monitor (`gnome-system-monitor` running on Ubuntu) showing the resources consumed by running the code presented in the second of the Exercises at the end of this section. The first increases RAM use, the second is single\-threaded and the third is multi\-threaded.
System monitoring is a complex topic that spills over into system administration and server management. Fortunately there are many tools designed to ease monitoring on all major operating systems.
* On Linux, the shell command `top` displays key resource use figures for most distributions. `htop` and Gnome’s **System Monitor** (`gnome-system-monitor`, see Figure [2\.1](set-up.html#fig:2-1)) are more refined alternatives which use command\-line and graphical user interfaces respectively. A number of options such as `nethogs` monitor internet usage.
* On Mac, the **Activity Monitor** provides similar functionality. This can be initiated from the Utilities folder in Launchpad.
* On Windows, the **Task Manager** provides key information on RAM and CPU use by process. It can be started in modern Windows versions by pressing `Ctrl-Alt-Del` or by right\-clicking the task bar and selecting ‘Start Task Manager’.
#### Exercises
1. What is the exact version of your computer’s operating system?
2. Start an activity monitor then execute the following code chunk. In it `lapply()` (or its parallel version `mclapply()`) is used to *apply* a function, `median()`, over every column in the data frame object `X` (see Section [3\.5](programming.html#the-apply-family) for more on the ‘apply family’ of functions). (The reason this works is that a data frame is really a list of vectors, each vector forming a column.)
How do the system output logs (results) on your system compare to those presented in Figure [2\.1](set-up.html#fig:2-1)?
```
# Note: uses 2+ GB RAM and several seconds or more depending on hardware
# 1: Create large dataset
X = as.data.frame(matrix(rnorm(1e8), nrow = 1e7))
# 2: Find the median of each column using a single core
r1 = lapply(X, median)
# 3: Find the median of each column using many cores
r2 = parallel::mclapply(X, median)
```
`mclapply()` only works in parallel on Mac and Linux. In Chapter 7 you’ll learn about an equivalent function, `parLapply()`, that works in parallel on Windows. A sketch of its use, reusing the `X` created above, is shown below.
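```
# Cross-platform sketch: parLapply() requires an explicit cluster
# object (see ?parallel::parLapply)
cl = parallel::makeCluster(2) # create a two-worker cluster
r3 = parallel::parLapply(cl, X, median)
parallel::stopCluster(cl) # release the workers afterwards
```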
3. What do you notice regarding CPU usage, RAM and system time, during and after each of the three operations?
4. Bonus question: how would the results change depending on operating system?
2\.3 R version
--------------
It is important to be aware that R is an evolving software project, whose behaviour changes over time. In general, base R is very conservative about making changes that break backwards compatibility. However, packages occasionally change substantially from one release to the next; typically it depends on the age of the package. For most use cases we recommend always using the most up\-to\-date version of R and packages, so you have the latest code. In some circumstances (e.g. on a production server or working in a team) you may alternatively want to use specific versions which have been tested, to ensure stability. Keeping packages up\-to\-date is desirable because new code tends to be more efficient, intuitive, robust and feature rich. This section explains how.
Previous R versions can be installed from CRAN’s archive or previous R releases. The binary versions for all OSs can be found at [cran.r\-project.org/bin/](https://cran.r-project.org/bin/). To download binary versions for Ubuntu ‘Xenial’, for example, see [cran.r\-project.org/bin/linux/ubuntu/xenial/](https://cran.r-project.org/bin/linux/ubuntu/xenial/). To ‘pin’ specific versions of R packages you can use the **packrat** package. For more on pinning R versions and R packages see articles on RStudio’s website [Using\-Different\-Versions\-of\-R](https://support.rstudio.com/hc/en-us/articles/200486138-Using-Different-Versions-of-R) and [rstudio.github.io/packrat/](https://rstudio.github.io/packrat/).
### 2\.3\.1 Installing R
The method of installing R varies for Windows, Linux and Mac.
On Windows, a single `.exe` file (hosted at [cran.r\-project.org/bin/windows/base/](https://cran.r-project.org/bin/windows/base/)) will install the base R package.
On a Mac, the latest version should be installed by downloading the `.pkg` files hosted at [cran.r\-project.org/bin/macosx/](https://cran.r-project.org/bin/macosx/).
On Linux, the installation method depends on the distribution of Linux installed, although the principles are the same. We’ll cover how to install R on Debian\-based systems, with links at the end for details on other Linux distributions. The first stage is to add the CRAN repository, to ensure that the latest version is installed. If you are running Ubuntu 16\.04, for example, append the following line to the file `/etc/apt/sources.list`:
```
deb http://cran.rstudio.com/bin/linux/ubuntu xenial/
```
`http://cran.rstudio.com` is the mirror (which can be replaced by any listed at [cran.r\-project.org/mirrors.html](https://cran.r-project.org/mirrors.html)) and `xenial` is the release. See the [Debian](https://cran.r-project.org/bin/linux/debian/) and [Ubuntu](https://cran.r-project.org/bin/linux/ubuntu/) installation pages on CRAN for further details.
Once the appropriate repository has been added and the system updated (e.g. with `sudo apt-get update`), `r-base` and other `r-` packages can be installed using the `apt` system. The following two commands, for example, would install the base R package (a ‘bare\-bones’ install) and the package **rcurl**, which has an external dependency:
```
sudo apt-get install r-base # install base R
sudo apt-get install r-cran-rcurl # install the rcurl package
```
`apt-cache search "^r-.*" | sort` will display all R packages that can be installed from `apt` in Debian\-based systems. In Fedora\-based systems, the equivalent command is `yum list R-\*`.
Typical output from the second command is illustrated below:
```
The following extra packages will be installed:
libcurl3-nss
The following NEW packages will be installed
libcurl3-nss r-cran-rcurl
0 to upgrade, 2 to newly install, 0 to remove and 16 not to upgrade.
Need to get 699 kB of archives.
After this operation, 2,132 kB of additional disk space will be used.
Do you want to continue? [Y/n]
```
Further details are provided at [cran.r\-project.org/bin/linux/](https://cran.r-project.org/bin/linux/) for Debian, Redhat and Suse OSs. R also works on FreeBSD and other Unix\-based systems.[3](#fn3)
Once R is installed it should be kept up\-to\-date.
### 2\.3\.2 Updating R
R is a mature and stable language so well\-written code in base R should work on most versions. However, it is important to keep your R version relatively up\-to\-date, because:
* Bug fixes are introduced in each version, making errors less likely;
* Performance enhancements are made from one version to the next, meaning your code may run faster in later versions;
* Many R packages only work on recent versions of R.
Release notes with details on each of these issues are hosted at [cran.r\-project.org/src/base/NEWS](https://cran.r-project.org/src/base/NEWS). R release versions have 3 components corresponding to major.minor.patch changes. Generally 2 or 3 patches are released before the next minor increment \- each ‘patch’ is released roughly every 3 months. R 3\.2, for example, has consisted of 3 versions: 3\.2\.0, 3\.2\.1 and 3\.2\.2\.
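To check which version you are currently running, base R provides the following (both are standard base functions):
```
R.version.string # human-readable version summary
getRversion()    # a version object that supports comparisons,
                 # e.g. getRversion() >= "3.3.0"
```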
* On Ubuntu\-based systems, new versions of R should be automatically detected through the software management system, and can be installed with `apt-get upgrade`.
* On Mac, the latest version should be installed by the user from the `.pkg` files mentioned above.
* On Windows, the **installr** package makes updating easy:
```
# check and install the latest R version
installr::updateR()
```
For information about changes to expect in the next version, you can subscribe to the R’s NEWS RSS feed: [developer.r\-project.org/blosxom.cgi/R\-devel/NEWS/index.rss](http://developer.r-project.org/blosxom.cgi/R-devel/NEWS/index.rss). It’s a good way of keeping up\-to\-date.
### 2\.3\.3 Installing R packages
Large projects may need several packages to be installed. In this case, the required packages can be installed at once. Using the example of packages for handling spatial data, this can be done quickly and concisely with the following code:
```
pkgs = c("raster", "leaflet", "rgeos") # package names
install.packages(pkgs)
```
In the above code all the required packages are installed with two not three lines, reducing typing. Note that we can now re\-use the `pkgs` object to load them all:
```
inst = lapply(pkgs, library, character.only = TRUE) # load them
```
In the above code, `library(pkgs[i], character.only = TRUE)` is executed for every package name stored in the character vector. We use `library()` here instead of `require()` because the former produces an error if the package is not available.
Loading all packages at the beginning of a script is good practice as it ensures all dependencies have been installed *before* time is spent executing code. Storing package names in a character vector object such as `pkgs` is also useful because it allows us to refer back to them again and again.
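That said, `require()` has its place when a missing package should be handled programmatically rather than stopping the script, since it returns a logical. A quick sketch:
```
# require() returns FALSE (with a warning) rather than an error when
# a package is missing, so it suits conditional logic
if (!require("rgeos")) {
  message("rgeos is not installed; skipping the examples that need it")
}
```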
### 2\.3\.4 Installing R packages with dependencies
Some packages have external dependencies (i.e. they call libraries outside R). On Unix\-like systems, these are best installed onto the operating system, bypassing `install.packages`. This will ensure the necessary dependencies are installed and setup correctly alongside the R package. On Debian\-based distributions such as Ubuntu, for example, packages with names starting with `r-cran-` can be searched for and installed as follows (see [cran.r\-project.org/bin/linux/ubuntu/](https://cran.r-project.org/bin/linux/ubuntu/) for a list of these):
```
apt-cache search r-cran- # search for available cran Debian packages
sudo apt-get install r-cran-rgdal # install the rgdal package (with dependencies)
```
On Windows the **installr** package helps manage and update R packages with system\-level dependencies. For example the **Rtools** package for compiling C/C\+\+ code on Windows can be installed with the following command:
```
installr::install.rtools()
```
### 2\.3\.5 Updating R packages
An efficient R set\-up will contain up\-to\-date packages.
This can be done *for all packages* with:
```
update.packages() # update installed CRAN packages
```
The default for this function is for the `ask` argument to be set to `TRUE`, giving control over what is downloaded onto your system. This is generally desirable as updating dozens of large packages can consume a large proportion of available system resources.
To update packages automatically, you can add the line `update.packages(ask = FALSE)` to your `.Rprofile` startup file (see the next section for more on `.Rprofile`). Thanks to Richard Cotton for this tip.
An even more interactive method for updating packages in R is provided by RStudio via Tools \> Check for Package Updates. Many such time saving tricks are enabled by RStudio, as described in [a subsequent section](set-up.html#install-rstudio). Next (after the exercises) we take a look at how to configure R using start\-up files.
#### Exercises
1. What version of R are you using? Is it the most up\-to\-date?
2. Do any of your packages need updating?
2\.4 R startup
--------------
Every time R starts, a couple of startup files are read and run by default, as documented in `?Startup`. This section explains how to customise these files, allowing you to save API keys or load frequently used functions. Before learning how to modify these files, we’ll take a look at how to ignore them, with R’s startup arguments. If you want to turn custom set\-up ‘on’, it’s useful to be able to turn it ‘off’, e.g. for debugging.
Some of R’s startup arguments can be controlled interactively in RStudio. See the online help file [Customizing RStudio](https://support.rstudio.com/hc/en-us/articles/200549016-Customizing-RStudio) for more on this.
### 2\.4\.1 R startup arguments
A number of arguments can be appended to the R startup command (`R` in a shell environment) which relate to startup.
The following are particularly important:
* `--no-environ` and `--no-init-file` tell R not to read the user’s `.Renviron` and `.Rprofile` startup files (described in the next section).
* `--no-restore` tells R not to load a file called `.RData` (the default name for R session files) that may be present in the current working directory.
* `--no-save` tells R not to ask the user if they want to save objects saved in RAM when the session is ended with `q()`.
Adding each of these will make R load slightly faster, and mean that slightly less user input is needed when you quit. R’s default setting of loading data from the last session automatically is potentially problematic in this context. See [An Introduction to R](https://cran.r-project.org/doc/manuals/R-intro.pdf), Appendix B, for more startup arguments.
A concise way to load a ‘vanilla’ version of R, with all of the above options enabled is with an option of the same name:
```
R --vanilla
```
### 2\.4\.2 An overview of R’s startup files
Two files are read each time R starts (unless one of the command line options outlined above is used):
* `.Renviron`, the primary purpose of which is to set *environment variables*. These tell R where to find external programs and can hold user\-specific information that needs to be kept secret, typically *API keys*.
* `.Rprofile` is a plain text file (which is always called `.Rprofile`, hence its name) that simply runs lines of R code every time R starts. If you want R to check for package updates each time it starts (as explained in the previous section), you simply add the relevant line somewhere in this file.
When R starts (unless it was launched with `--no-environ`) it first searches for `.Renviron` and then `.Rprofile`, in that order.
Although `.Renviron` is searched for first, we will look at `.Rprofile` first as it is simpler and for many set\-up tasks more frequently useful. Both files can exist in three directories on your computer.
Modification of R’s startup files should not be taken lightly. This is an advanced topic. If you modify your startup files in the wrong way, it can cause problems: a seemingly innocent call to `setwd()` in `.Rprofile`, for example, will break **devtools** `build` and `check` functions.
Proceed with caution and, if you mess things up, just delete the offending files!
### 2\.4\.3 The location of startup files
Confusingly, multiple versions of these files can exist on the same computer, only one of which will be used per session. Note also that these files should only be changed with caution and if you know what you are doing. This is because they can make your R version behave differently to other R installations, potentially reducing the reproducibility of your code.
Files in three folders are important in this process:
* `R_HOME`, the directory in which R is installed. The `etc` sub\-directory can contain start\-up files read early on in the start\-up process. Find out where your `R_HOME` is with the `R.home()` command.
* `HOME`, the user’s home directory. Typically this is `/home/username` on Unix machines or `C:\Users\username` on Windows (since Windows 7\). Ask R where your home directory is with, `Sys.getenv("HOME")`.
* R’s current working directory. This is reported by `getwd()`.
It is important to know the location of the `.Rprofile` and `.Renviron` set\-up files that are being used out of these three options.
R only uses one `.Rprofile` and one `.Renviron` in any session: if you have a `.Rprofile` file in your current project, R will ignore `.Rprofile` in `R_HOME` and `HOME`.
Likewise, `.Rprofile` in `HOME` overrides `.Rprofile` in `R_HOME`.
The same applies to `.Renviron`: you should remember that adding project specific environment variables with `.Renviron` will de\-activate other `.Renviron` files.
To create a project\-specific start\-up script, simply create a `.Rprofile` file in the project’s root directory and start adding R code, e.g. via `file.edit(".Rprofile")`.
Remember that this will make `.Rprofile` in the home directory be ignored.
The following commands will open your `.Rprofile` from within an R editor:
```
file.edit("~/.Rprofile") # edit .Rprofile in HOME
file.edit(".Rprofile") # edit project specific .Rprofile
```
File paths provided by Windows operating systems will not always work in R. Specifically, if you use a path that contains single backslashes, such as `C:\DATA\data.csv`, as provided by Windows, this will generate the error: `Error: unexpected input in "C:\"`. To overcome this issue R provides two functions, `file.path()` and `normalizePath()`. The former can be used to specify file locations without having to use symbols to represent relative file paths, as follows: `file.path("C:", "DATA", "data.csv")`. The latter takes any input string for a file name and outputs a text string that is standard (canonical) for the operating system. `normalizePath("C:/DATA/data.csv")`, for example, outputs `C:\DATA\data.csv` on a Windows machine but `C:/DATA/data.csv` on Unix\-based platforms. Note that only the latter would work on both platforms, so standard Unix file path notation is safe for all operating systems.
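For example (a quick illustration; `mustWork = FALSE` suppresses the warning `normalizePath()` gives for paths that do not exist on your machine):
```
file.path("C:", "DATA", "data.csv") # platform-independent construction
#> [1] "C:/DATA/data.csv"
normalizePath("C:/DATA/data.csv", mustWork = FALSE)
```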
Editing the `.Renviron` file in the same locations will have the same effect.
The following code will create a user specific `.Renviron` file (where API keys and other cross\-project environment variables can be stored), without overwriting any existing file.
```
user_renviron = path.expand(file.path("~", ".Renviron"))
file.edit(user_renviron) # open with another text editor if this fails
```
The **pathological** package can help find where `.Rprofile` and `.Renviron` files are located on your system, thanks to the `os_path()` function. The output of `example(Startup)` is also instructive.
The location, contents and uses of each are outlined in more detail below.
### 2\.4\.4 The `.Rprofile` file
By default, R looks for and runs `.Rprofile` files in the three locations described above, in a specific order. `.Rprofile` files are simply R scripts that run each time R runs and they can be found within `R_HOME`, `HOME` and the project’s home directory, found with `getwd()`. To check if you have a site\-wide `.Rprofile`, which will run for all users on start\-up, run:
```
site_path = R.home(component = "home")
fname = file.path(site_path, "etc", "Rprofile.site")
file.exists(fname)
```
The above code checks for the presence of `Rprofile.site` in that directory. As outlined above, the `.Rprofile` located in your home directory is user\-specific. Again, we can test whether this file exists using
```
file.exists("~/.Rprofile")
```
We can use R to create and edit `.Rprofile` (warning: do not overwrite your previous `.Rprofile` \- we suggest you try a project\-specific `.Rprofile` first):
```
file.edit("~/.Rprofile")
```
### 2\.4\.5 An example `.Rprofile` file
The example below provides a taster of what goes into `.Rprofile`.
Note that this is simply a usual R script, but with an unusual name.
The best way to understand what is going on is to create this same script, save it as `.Rprofile` in your current working directory and then restart your R session to observe what changes. To restart your R session from within RStudio you can click `Session > Restart R` or use the keyboard shortcut `Ctrl+Shift+F10`.
```
# A fun welcome message
message("Hi Robin, welcome to R")
# Customise the R prompt that prefixes every command
# (use " " for a blank prompt)
options(prompt = "R4geo> ")
```
To quickly explain each line of code: the first simply prints a message in the console each time a new R session is started. The second modifies the console prompt (set to `>` by default). Note that adding more lines to the `.Rprofile` sets more options in the same way. An important aspect of `.Rprofile` (and `.Renviron`) is that *each line is run once and only once for each R session*, so the options set within `.Rprofile` can easily be overridden later in the session. The following command, run mid\-session, for example, will restore the default prompt:
```
options(prompt = "> ")
```
More details on these, and other potentially useful `.Rprofile` options are described subsequently. For more suggestions of useful startup settings, see Examples in `help("Startup")` and online resources such as those at [statmethods.net](http://www.statmethods.net/interface/customizing.html). The help pages for R options (accessible with `?options`) are also worth a read before writing your own `.Rprofile`.
Ever been frustrated by unwanted `+` symbols that prevent copied and pasted multi\-line functions from working? These potentially annoying `+`s can be eradicated by adding `options(continue = " ")` to your `.Rprofile`.
#### 2\.4\.5\.1 Setting options
The function `options()`, used above, contains a number of default settings. Typing `options()` provides a good indication of what can be configured. Since options are often a matter of personal preference (with few implications for reproducibility) and tend to apply across many of your R sessions, `.Rprofile` in your home directory or in your project’s folder is a sensible place to set them. Other illustrative options are shown below:
```
# With a customised prompt
options(prompt = "R> ", digits = 4, show.signif.stars = FALSE, continue = " ")
# With a longer prompt and empty 'continue' indent (default is "+ ")
options(prompt = "R4Geo> ", digits = 3, continue = " ")
```
The first option changes four default options in a single line.
* The R prompt, from the boring `>` to the exciting `R>`.
* The number of digits displayed.
* Removing the stars after significant \\(p\\)\-values.
* Removing the `+` in multi\-line functions.
Try to avoid adding options to the start\-up file that make your code non\-portable. For example, adding `options(stringsAsFactors = FALSE)` to your start\-up script has knock\-on effects for `read.table` and related functions including `read.csv`, making them convert text strings into characters rather than into factors as is default. This may be useful for you, but can make your code less portable, so be warned.
#### 2\.4\.5\.2 Setting the CRAN mirror
To avoid setting the CRAN mirror each time you run `install.packages()` you can permanently set the mirror in your `.Rprofile`.
```
# `local` creates a new, empty environment
# This avoids polluting .GlobalEnv with the object r
local({
r = getOption("repos")
r["CRAN"] = "https://cran.rstudio.com/"
options(repos = r)
})
```
The RStudio mirror is a virtual machine run by Amazon’s EC2 service, and it syncs with the main CRAN mirror in Austria once per day. Since RStudio is using Amazon’s CloudFront, the repository is automatically distributed around the world, so no matter where you are in the world, the data doesn’t need to travel very far, and is therefore fast to download.
#### 2\.4\.5\.3 The **fortunes** package
This section illustrates the power of `.Rprofile` customisation with reference to a package that was developed for fun. The code below could easily be altered to automatically connect to a database, or ensure that the latest packages have been downloaded.
The **fortunes** package contains a number of memorable quotes that the community has collected over many years, called R fortunes. Each fortune has a number. To get fortune number \\(50\\), for example, enter
```
fortunes::fortune(50)
#>
#> To paraphrase provocatively, 'machine learning is statistics minus any checking
#> of models and assumptions'.
#> -- Brian D. Ripley (about the difference between machine learning and
#> statistics)
#> useR! 2004, Vienna (May 2004)
```
It is easy to make R print out one of these nuggets of truth each time you start a session, by adding the following to `.Rprofile`:
```
if (interactive())
try(fortunes::fortune(), silent = TRUE)
```
The `interactive()` function tests whether R is being used interactively in a terminal. The `fortune()` function is called within `try()` so that, if the **fortunes** package is not available, we avoid raising an error and simply move on. Typing `search()` gives the list of attached packages; by using `fortunes::fortune()` rather than attaching the package, we avoid adding **fortunes** to that list.
The function `.Last()`, if it exists in the `.Rprofile`, is always run at the end of the session. We can use it to install the **fortunes** package if needed. To load the package, we use `require()`, since if the package isn’t installed, the `require()` function returns `FALSE` and raises a warning.
```
.Last = function() {
cond = suppressWarnings(!require(fortunes, quietly = TRUE))
if (cond)
try(install.packages("fortunes"), silent = TRUE)
message("Goodbye at ", date(), "\n")
}
```
#### 2\.4\.5\.4 Useful functions
You can use `.Rprofile` to define new ‘helper’ functions or redefine existing ones so they’re faster to type.
For example, we could load the following two functions for examining data frames:
```
# ht == headtail
# Show the first 6 rows & last 6 rows of a data frame
ht = function(d, n=6) rbind(head(d, n), tail(d, n))
# Show the first 5 rows & first 5 columns of a data frame
hh = function(d) d[1:5, 1:5]
```
and a function for setting a nice plotting window:
```
nice_par = function(mar = c(3, 3, 2, 1), mgp = c(2, 0.4, 0), tck = -0.01,
cex.axis = 0.9, las = 1, mfrow = c(1, 1), ...) {
par(mar = mar, mgp = mgp, tck = tck, cex.axis = cex.axis, las = las,
mfrow = mfrow, ...)
}
```
Note that these functions are for personal use and are unlikely to interfere with code from other people.
For this reason even if you use a certain package every day, we don’t recommend loading it in your `.Rprofile`.
Shortening long function names for interactive (but not reproducible) code writing is another option for using `.Rprofile` to increase efficiency.
If you frequently use `View()`, for example, you may be able to save time by referring to it in abbreviated form. This is illustrated below to make it faster to view datasets (although with IDE\-driven autocompletion, outlined in the next section, the time savings is less).
```
v = utils::View
```
Also beware the dangers of loading many functions by default: it may make your code less portable.
Another potentially useful setting to change in `.Rprofile` is R’s current working directory.
If you want R to automatically set the working directory to the R folder of your project, for example, you could add the following line of code to the **project**\-specific `.Rprofile`:
```
setwd("R")
```
#### 2\.4\.5\.5 Creating hidden environments with .Rprofile
Beyond making your code less portable, another downside of putting functions in your `.Rprofile` is that it can clutter up your workspace:
when you run `ls()`, your `.Rprofile` functions will appear, and if you run `rm(list = ls())` they will be deleted. One neat trick to overcome this issue is to use hidden objects and environments. When an object name starts with `.`, by default it doesn’t appear in the output of `ls()`:
```
.obj = 1
".obj" %in% ls()
#> [1] FALSE
```
This concept also works with environments. In the `.Rprofile` file we can create a *hidden* environment
```
.env = new.env()
```
and then add functions to this environment
```
.env$ht = function(d, n = 6) rbind(head(d, n), tail(d, n))
```
At the end of the `.Rprofile` file, we use `attach`, which makes it possible to refer to objects in the environment by their names alone.
```
attach(.env)
```
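After this, the helper can be used directly while staying out of the workspace listing, for example:
```
ht(mtcars) # callable directly thanks to attach(.env)
ls()       # ht does not appear in the workspace listing
```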
### 2\.4\.6 The `.Renviron` file
The `.Renviron` file is used to store environment variables. It follows a similar start\-up routine to the `.Rprofile` file: R first looks for a global `.Renviron` file, then for local versions. A typical use of the `.Renviron` file is to specify the `R_LIBS` path, which determines where new packages are installed:
```
# Linux
R_LIBS=~/R/library
# Windows
R_LIBS=C:/R/library
```
After setting this, `install.packages()` saves packages in the directory specified by `R_LIBS`.
The location of this directory can be referred back to subsequently as follows:
```
Sys.getenv("R_LIBS_USER")
#> [1] "/home/travis/R/Library"
```
All currently stored environment variables can be seen by calling `Sys.getenv()` with no arguments. Note that many environment variables are already pre\-set and do not need to be specified in `.Renviron`. `HOME`, for example, which can be seen with `Sys.getenv("HOME")`, is taken from the operating system’s list of environment variables. A list of the most important environment variables that can affect R’s behaviour is documented in the little known help page `help("environment variables")`.
To set or unset an environment variable for the duration of a session, use the following commands:
```
Sys.setenv("TEST" = "test-string") # set an environment variable for the session
Sys.unsetenv("TEST") # unset it
```
Another common use of `.Renviron` is to store API keys and authentication tokens that will be available from one session to another.[4](#fn4)
A common use case is setting the ‘envvar’ `GITHUB_PAT`, which will be detected by the **devtools** package via the function `github_pat()`. To take another example, the following line in `.Renviron` sets the `ZEIT_KEY` environment variable which is used in the **[diezeit](https://cran.r-project.org/web/packages/diezeit/)** package:
```
ZEIT_KEY=PUT_YOUR_KEY_HERE
```
You will need to start a new R session for the environment variable (accessed by `Sys.getenv()`) to be visible. To test whether the example API key has been successfully added as an environment variable, run the following:
```
Sys.getenv("ZEIT_KEY")
```
Use of the `.Renviron` file for storing settings such as library paths and API keys is efficient because it reduces the need to update your settings for every R session. Furthermore, the same `.Renviron` file will work across different platforms so keep it stored safely.
#### 2\.4\.6\.1 Example `.Renviron` file
My `.Renviron` file has grown over the years. I often switch between my desktop and laptop computers, so to maintain a consistent working environment, I have the same `.Renviron` file on all of my machines. As well as containing an `R_LIBS` entry and some API keys, my `.Renviron` has a few other lines:
* `TMPDIR=/data/R_tmp/`. When R is running, it creates temporary copies. On my work machine the default directory is a network drive, so I redirect temporary files to a faster local disk.
* `R_COMPILE_PKGS=3`. Byte compile all packages (covered in Chapter [3](programming.html#programming)).
* `R_LIBS_SITE=/usr/lib/R/site-library:/usr/lib/R/library`. I explicitly state where to look for packages. My University has a site\-wide directory that contains out\-of\-date packages, which I want to avoid using.
* `R_DEFAULT_PACKAGES=utils,grDevices,graphics,stats,methods`. Explicitly state the packages to load. Note I don’t load the `datasets` package, but I ensure that `methods` is always loaded. Due to historical reasons, the `methods` package isn’t loaded by default in certain applications, e.g. `Rscript`.
#### Exercises
1. What are the three locations where the startup files are stored? Where are these locations on your computer?
2. For each location, does a `.Rprofile` or `.Renviron` file exist?
3. Create a `.Rprofile` file in your current working directory that prints the message `Happy efficient R programming` each time you start R at this location.
4. What happens to the startup files in `R_HOME` if you create them in `HOME` or local project directories?
2\.5 RStudio
------------
RStudio is an Integrated Development Environment (IDE) for R.
It makes life easy for R users and developers with its intuitive and flexible interface. RStudio encourages good programming practice. Through its wide range of features RStudio can help make you a more efficient and productive R programmer. RStudio can, for example, greatly reduce the amount of time spent remembering and typing function names thanks to intelligent autocompletion.
Some of the most important features of RStudio include:
* Flexible window pane layouts to optimise use of screen space and enable fast interactive visual feed\-back.
* Intelligent autocompletion of function names, packages and R objects.
* A wide range of keyboard shortcuts.
* Visual display of objects, including a searchable data display table.
* Real\-time code checking, debugging and error detection.
* Menus to install and update packages.
* Project management and integration with version control.
* Quick display of function source code and help documents.
The above list of features should make it clear that a well set\-up IDE can be as important as a well set\-up R installation for becoming an efficient R programmer.[5](#fn5)
As with R itself, the best way to learn about RStudio is by using it.
It is therefore worth reading through this section in parallel with using RStudio to boost your productivity.
### 2\.5\.1 Installing and updating RStudio
RStudio is a mature, feature\-rich and powerful IDE optimised for R programming that has become popular among R developers. The Open Source Edition is completely open source (as can be seen from the project’s GitHub repo). It can be installed on all major OSs from the RStudio website [rstudio.com](https://www.rstudio.com/products/rstudio/download/).
If you already have RStudio and would like to update it, simply click `Help > Check for Updates` in the menu.
For fast and efficient work, keyboard shortcuts should be used wherever possible, reducing the reliance on the mouse.
RStudio has many keyboard shortcuts that will help with this.
To get into good habits early, try accessing the RStudio Update interface without touching the mouse.
On Linux and Windows, dropdown menus are activated with the `Alt` button, so the menu item can be found with:
```
Alt+H U
```
On Mac, it works differently.
`Cmd+?` should activate a search across menu items, allowing the same operation to be achieved with:
```
Cmd+? update
```
In RStudio the keyboard shortcuts differ between Linux and Windows versions on one hand and Mac on the other. In this section we generally only use the Windows/Linux shortcut keys for brevity. The Mac equivalent is usually found by simply replacing `Ctrl` and `Alt` with the Mac\-specific `Cmd` button.
### 2\.5\.2 Window pane layout
RStudio has four main window ‘panes’ (see Figure [2\.2](set-up.html#fig:2-2)), each of which serves a range of purposes:
* The **Source pane**, for editing, saving, and dispatching R code to the console (top left). Note that this pane does not exist by default when you start RStudio: it appears when you open an R script, e.g. via `File -> New File -> R Script`. A common task in this pane is to send code on the current line to the console, via `Ctrl/Cmd+Enter`.
* The **Console pane**. Any code entered here is processed by R, line by line. This pane is ideal for interactively testing ideas before saving the final results in the Source pane above.
* The **Environment pane** (top right) contains information about the current objects loaded in the workspace including their class, dimension (if they are a data frame) and name. This pane also contains tabbed sub\-panes with a searchable history that was dispatched to the console and (if applicable to the project) Build and Git options.
* The **Files pane** (bottom right) contains a simple file browser, a Plots tab, Packages and Help tabs and a Viewer for visualising interactive R output such as those produced by the leaflet package and HTML ‘widgets’.
Figure 2\.2: RStudio Panels
Using each of the panels effectively and navigating between them quickly is a skill that will develop over time, and will only improve with practice.
#### Exercises
You are developing a project to visualise data.
Test out the multi\-panel RStudio workflow by following the steps below:
1. Create a new folder for the input data using the **Files pane**.
2. Type in `downl` in the **Source pane** and hit `Enter` to make the function `download.file()` autocomplete. Then type `"`, which will autocomplete to `""`, paste the URL of a file to download (e.g. `https://www.census.gov/2010census/csv/pop_change.csv`) and a file name (e.g. `pop_change.csv`).
3. Execute the full command with `Ctrl+Enter`:
```
download.file("https://www.census.gov/2010census/csv/pop_change.csv",
"extdata/pop_change.csv")
```
4. Write and execute a command to read\-in the data, such as
```
pop_change = read.csv("extdata/pop_change.csv", skip = 2)
```
5. Use the **Environment pane** to click on the data object `pop_change`. Note that this runs the command `View(pop_change)`, which launches an interactive data explore pane in the top left panel (see Figure [2\.3](set-up.html#fig:2-3)).
Figure 2\.3: The data viewing tab in RStudio.
6. Use the **Console** to test different plot commands to visualise the data, saving the code you want to keep back into the **Source pane**, as `pop_change.R`.
7. Use the **Plots tab** in the Files pane to scroll through past plots. Save the best using the Export dropdown button.
The above example shows how understanding these panes and using them interactively can improve the speed and productivity of your R programming.
Further, there are a number of RStudio settings that can help ensure that it works for your needs.
### 2\.5\.3 RStudio options
A range of `Project Options` and `Global Options` are available in RStudio from the `Tools` menu (accessible in Linux and Windows from the keyboard via `Alt+T`).
Most of these are self\-explanatory but it is worth mentioning a few that can boost your programming efficiency:
* GIT/SVN project settings allow RStudio to provide a graphical interface to your version control system, described in Chapter [9](collaboration.html#collaboration).
* R version settings allow RStudio to ‘point’ to different R versions/interpreters, which may be faster for some projects.
* `Restore .RData`: Unticking this default prevents loading previously created R objects. This will make starting R quicker and also reduce the chance of getting bugs due to previously created objects. For this reason we recommend you untick this box.
* Code editing options can make RStudio adapt to your coding style, for example, by preventing the autocompletion of braces, which some experienced programmers may find annoying. Enabling `Vim mode` makes RStudio act as a (partial) Vim emulator.
* Diagnostic settings can make RStudio more efficient by adding additional diagnostics or by removing diagnostics if they are slowing down your work. This may be an issue for people using RStudio to analyse large datasets on older low\-spec computers.
* Appearance: if you are struggling to see the source code, changing the default font size may make you a more efficient programmer by reducing the time overheads associated with squinting at the screen. Other options in this area relate more to aesthetics. Settings such as font type and background color are also important because feeling comfortable in your programming environment can boost productivity. Go to `Tools > Global Options` to modify these.
### 2\.5\.4 Autocompletion
R provides some basic autocompletion functionality.
Typing the beginning of a function name, for example `rn` (short for `rnorm()`), and hitting `Tab` twice will result in the full function names associated with this text string being printed.
In this case two options would be displayed: `rnbinom` and `rnorm`, providing a useful reminder to the user about what is available. The same applies to file names enclosed in quote marks: typing `te` in the console in a project which contains a file called `test.R` should result in the full name `"test.R"` being autocompleted.
RStudio builds on this functionality and takes it to a new level.
The default settings for autocompletion in RStudio work well. They are intuitive and are likely to work well for many users, especially beginners. However, RStudio’s autocompletion options can be modified, by navigating to **Tools \> Global Options \> Code \> Completion** in RStudio’s top level menu.
Instead of only autocompleting options when `Tab` is pressed, RStudio autocompletes them at any point.
Building on the previous example, RStudio’s autocompletion triggers when the first three characters are typed: `rno`.
The same functionality works when only the first characters are typed, followed by `Tab`:
automatic autocompletion does not replace `Tab` autocompletion but supplements it.
Note that in RStudio two more options are provided to the user after entering `rn Tab` compared with entering the same text into base R’s console described in the previous paragraph: `RNGkind` and `RNGversion`.
This illustrates that RStudio’s autocompletion functionality is not case sensitive in the same way that R is.
This is a good thing because R has no consistent function name style!
RStudio also has more intelligent autocompletion of objects and file names than R’s built\-in command line.
To test this functionality, try typing `US`, followed by the Tab key.
After pressing the down arrow until `USArrests` is selected, press `Enter` so it autocompletes.
Finally, typing `$` should leave the following text on the screen and the four columns should be shown in a drop\-down box, ready for you to select the variable of interest with the down arrow.
```
USArrests$ # a dropdown menu of columns should appear in RStudio
```
To take a more complex example, variable names stored in the `data` slot of the class `SpatialPolygonsDataFrame` (a class defined by the foundational spatial package **sp**) are referred to in the long form
`spdf@data$varname`.[6](#fn6)
In this case `spdf` is the object name, `data` is the slot and `varname` is the variable name.
RStudio makes such `S4` objects easier to use by enabling autocompletion of the short form `spdf$varname`.
Another example is RStudio’s ability to find files hidden away in sub\-folders.
Typing `"te` will find `test.R` even if it is located in a sub\-folder such as `R/test.R`.
There are a number of other clever autocompletion tricks that can boost R’s productivity when using RStudio which are best found by experimenting and hitting `Tab` frequently during your R programming work.
### 2\.5\.5 Keyboard shortcuts
RStudio has many useful shortcuts that can help make your programming more efficient by reducing the need to reach for the mouse and point and click your way around code and RStudio.
These can be viewed by using a little known but extremely useful keyboard shortcut (this can also be accessed via the **Tools** menu).
```
Alt+Shift+K
```
This will display the default shortcuts in RStudio.
It is worth spending time identifying which of these could be useful in your work and practising interacting with RStudio rapidly with minimal reliance on the mouse.
The power of these autocompletion capabilities can be further enhanced by setting your own keyboard shortcuts.
However, as with setting `.Rprofile` and `.Renviron` settings, this risks reducing the portability of your workflow.
Some more useful shortcuts are listed below:
* `Ctrl+Z`/`Ctrl+Shift+Z`: Undo/Redo.
* `Ctrl+Enter`: Execute the current line or code selection in the Source pane.
* `Ctrl+Alt+R`: Execute all the R code in the currently open file in the Source pane.
* `Ctrl+Left/Right`: Navigate code quickly, word by word.
* `Home/End`: Navigate to the beginning/end of the current line.
* `Alt+Shift+Up/Down`: Duplicate the current line up or down.
* `Ctrl+D`: Delete the current line.
To set your own RStudio keyboard shortcuts, navigate to **Tools \> Modify Keyboard Shortcuts**.
### 2\.5\.6 Object display and output table
It is useful to know what is in your current R environment.
This information can be revealed with `ls()`, but this function only provides object names.
RStudio provides an efficient mechanism to show currently loaded objects, and their details, in real\-time: the Environment tab in the top right corner.
It makes sense to keep an eye on which objects are loaded and to delete objects that are no longer useful.
Doing so will minimise the probability of confusion in your workflow (e.g. by using the wrong version of an object) and reduce the amount of RAM R needs.
The details provided in the Environment tab include the object’s dimension and some additional details depending on the object’s class (e.g. size in MB for large datasets).
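The same information can be queried from the console; for example (a small sketch using `object.size()` from the **utils** package, which is attached by default):
```
x = rnorm(1e6)
print(object.size(x), units = "MB") # roughly 7.6 MB (8 bytes per double)
rm(x) # delete objects that are no longer needed to free RAM
```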
A very useful feature of RStudio is its advanced viewing functionality.
This is triggered either by executing `View(object)` or by clicking on the object name in the Environment tab.
Although you cannot edit data in the Viewer (this should be considered a good thing from a data integrity perspective), recent versions of RStudio provide an efficient search mechanism to rapidly filter and view the records that are of most interest (see Figure [2\.3](set-up.html#fig:2-3)).
### 2\.5\.7 Project management
In the far top\-right of RStudio there is a diminutive drop\-down menu illustrated with R inside a transparent box.
This menu may be small and simple, but it is hugely efficient in terms of organising large, complex and long\-term projects.
The idea of RStudio projects is that the bulk of R programming work is part of a wider task, which will likely consist of input data, R code, graphical and numerical outputs and documents describing the work.
It is possible to scatter each of these elements at random across your hard disk, but this is not recommended.
Instead, the concept of projects encourages reproducible work, such that anyone who opens the particular project folder that you are working from should be able to repeat your analyses and replicate your results.
It is therefore *highly recommended* that you use projects to organise your work. It could save hours in the long\-run.
Organizing data, code and outputs also makes sense from a portability perspective: if you copy the folder (e.g. via GitHub) you can work on it from any computer without worrying about having the right files on your current machine.
These tasks are implemented using RStudio’s simple project system, in which the following things happen each time you open an existing project:
* The working directory automatically switches to the project’s folder. This enables data and script files to be referred to using relative file paths, which are much shorter than absolute file paths. This means that switching directory using `setwd()`, a common source of error for R users, is rarely, if ever, needed.
* The last previously open file is loaded into the Source pane. The history of R commands executed in previous sessions is also loaded into the History tab. This assists with continuity between one session and the next.
* The `File` tab displays the associated files and folders in the project, allowing you to quickly find your previous work.
* Any settings associated with the project, such as Git settings, are loaded. This assists with collaboration and project\-specific set\-up.
Each project is different but most contain input data, R code and outputs.
To keep things tidy, we recommend a sub\-directory structure resembling the following:
```
project/
- README.Rmd # Project description
- set-up.R # Required packages
- R/ # For R code
- input # Data files
- graphics/
- output/ # Results
```
Proper use of projects ensures that all R source files are neatly stashed in one folder with a meaningful structure. This way data and documentation can be found where one would expect them. Under this system, figures and project outputs are ‘first class citizens’ within the project’s design, each with their own folder.
Another approach to project management is to treat projects as R packages.
This is not recommended for most use cases, as it places restrictions on where you can put files. However, if the aim is *code development and sharing*, creating a small R package may be the way forward, even if you never intend to submit it to CRAN. Creating R packages is easier than ever before, as documented in Cotton ([2013](#ref-cotton_learning_2013)) and, more recently, in H. Wickham ([2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)). The **devtools** package helps manage R’s quirks, making the process much less painful.
If you use GitHub, the advantage of this approach is that anyone should be able to reproduce your work using `devtools::install_github("username/projectname")`, although the administrative overhead of creating an entire package for each small project will outweigh the benefits for many.
Note that a `set-up.R` or even a `.Rprofile` file in the project’s root directory enables project\-specific settings to be loaded each time people work on the project.
As described in the previous section, `.Rprofile` can be used to tweak how R works at start\-up.
It is also a portable way to manage R’s configuration on a project\-by\-project basis.
RStudio also provides excellent debugging support. Rather than re\-invent the wheel, we direct interested readers to the [RStudio website](https://support.rstudio.com/hc/en-us/articles/205612627-Debugging-with-RStudio).
#### Exercises
1. Try modifying the look and appearance of your RStudio setup.
2. What is the keyboard shortcut to show the other shortcuts? (Hint: it begins with `Alt+Shift` on Linux and Windows.)
3. Try as many of the shortcuts revealed by the previous step as you like. Write down the ones that you think will save you time, perhaps on a post\-it note to go on your computer.
2\.6 BLAS and alternative R interpreters
----------------------------------------
In this section we cover a few system\-level options available to speed\-up R’s performance.
Note that for many applications stability rather than speed is a priority, so these should only be considered if a) you have exhausted options for writing your R code more efficiently and b) you are confident tweaking system\-level settings.
This should therefore be seen as an advanced section: if you are not interested in speeding\-up base R, feel free to skip to the next section on hardware.
Many statistical algorithms manipulate matrices. R uses the Basic Linear Algebra Subprograms (BLAS) framework for linear algebra operations. Whenever we carry out a matrix operation, such as transpose or finding the inverse, we use the underlying BLAS library. By switching to a different BLAS library, it may be possible to speed\-up your R code. Changing your BLAS library is straightforward if you are using Linux, but can be tricky for Windows users.
The two open source alternative BLAS libraries are [ATLAS](http://math-atlas.sourceforge.net/) and [OpenBLAS](https://github.com/xianyi/OpenBLAS). The [Intel MKL](https://software.intel.com/en-us/intel-mkl) is another implementation, designed by Intel for its processors and used in Revolution R (described in the next section); it requires licensing fees and is bundled with the Revolution Analytics system. Depending on your application, switching BLAS library can make linear algebra operations run several times faster than with the base BLAS routines.
If you use macOS or Linux, you can check whether you have a BLAS library setting with the following function, from **benchmarkme**:
```
library("benchmarkme")
get_linear_algebra()
```
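Alternatively, on recent R versions (3\.4\.0 and above) the output of `sessionInfo()` reports the BLAS and LAPACK libraries in use, so no extra package is needed:

```
sessionInfo() # on R >= 3.4.0 this includes the BLAS/LAPACK library paths
La_version() # the LAPACK version R was built against
```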
### 2\.6\.1 Testing performance gains from BLAS
As an illustrative test of the performance gains offered by BLAS, the following test was run on a new laptop running Ubuntu 15\.10 on a 6th generation Core i7 processor, before and after OpenBLAS was installed.[7](#fn7)
```
res = benchmark_std() # run a suite of tests of R's performance
```
It was found that the installation of OpenBLAS led to a 2\-fold speed\-up (from around 150 to 70 seconds). The majority of the speed gain was from the matrix algebra tests, as can be seen in Figure [2\.4](set-up.html#fig:blas-bench). Note that the results of such tests are highly dependent on the particularities of each computer. However, it clearly shows that ‘programming’ benchmarks (e.g. the calculation of 3,500,000 Fibonacci numbers) are not much faster, whereas matrix calculations and functions receive a substantial speed boost. This demonstrates that the speed\-up you can expect from BLAS depends heavily on the type of computations you are undertaking.
Figure 2\.4: Performance gains obtained changing the underlying BLAS library (tests from `benchmark_std()`).
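If you do not want to run the full `benchmark_std()` suite, a quick hand\-rolled test of the operations that benefit most is easy to write. Run it before and after switching BLAS and compare the timings:

```
# Time the operations that gain most from an optimised BLAS
x = matrix(rnorm(1e6), nrow = 1e3)
system.time(x %*% x) # matrix multiplication
system.time(solve(x)) # matrix inversion
```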
### 2\.6\.2 Other interpreters
The R *language* can be separated from the R *interpreter*. The former refers to the meaning of R commands, the latter refers to how the computer executes the commands. Alternative interpreters have been developed to try to make R faster and, while promising, none of the following options has fully taken off.
* [Microsoft R Open](https://mran.microsoft.com/open), formerly known as Revolution R Open (RRO), is the enhanced distribution of R from Microsoft. The key enhancement is that it uses multi\-threaded mathematics libraries, which can improve performance.
* [Rho](https://github.com/rho-devel/rho) (previously called CXXR, from ‘C\+\+ R’) is a re\-implementation of the R interpreter for speed and efficiency. Of the new interpreters, this is the one with the most recent development activity (as of April 2016\).
* [pqR](http://www.pqr-project.org/) (pretty quick R) is a new version of the R interpreter. One major downside is that it is based on R\-2\.15\.0\. The developer (Radford Neal) has made many improvements, some of which have now been incorporated into base R. **pqR** is an open\-source project licensed under the GPL. One notable improvement is that pqR can perform some numeric computations in parallel with each other, and with other operations of the interpreter, on systems with multiple processors or processor cores.
* [Renjin](http://www.renjin.org/) reimplements the R interpreter in Java, so it can run on the Java Virtual Machine (JVM). Since Renjin is pure Java, it can run anywhere the JVM runs.
* [Tibco](http://spotfire.tibco.com/) created a C\+\+ based interpreter called TERR (TIBCO Enterprise Runtime for R) that is incorporated into their analytics platform, Spotfire.
* Oracle also offers an R interpreter that uses Intel’s mathematics library and therefore achieves higher performance without changing R’s core.
At the time of writing, switching interpreters is something to consider carefully. But in the future, it may become more routine.
### 2\.6\.3 Useful BLAS/benchmarking resources
* The [gcbd](https://cran.r-project.org/web/packages/gcbd/) package benchmarks performance of a few standard linear algebra operations across a number of different BLAS libraries as well as a GPU implementation. It has an excellent vignette summarising the results.
* [Brett Klamer](http://brettklamer.com/diversions/statistical/faster-blas-in-r/) provides a nice comparison of ATLAS, OpenBLAS and Intel MKL BLAS libraries. He also gives a description of how to install the different libraries.
* The official R manual [section](https://cran.r-project.org/doc/manuals/r-release/R-admin.html#BLAS) on BLAS.
### Exercises
1. What BLAS system is your version of R using?
### Prerequisites
Only one package needs to be installed to run the code in this chapter:
```
library("benchmarkme")
```
2\.1 Top 5 tips for an efficient R set\-up
------------------------------------------
1. Use system monitoring to identify bottlenecks in your hardware/code.
2. Keep your R installation and packages up\-to\-date.
3. Make use of RStudio’s powerful autocompletion capabilities and shortcuts.
4. Store API keys in the `.Renviron` file.
5. Use BLAS if your R number crunching is too slow.
2\.2 Operating system
---------------------
R supports all three major operating system (OS) types: Linux, Mac and Windows.[1](#fn1) R is platform\-independent, although there are some OS\-specific quirks, e.g. in relation to file path notation (see Section [2\.4\.3](set-up.html#location)).
Basic OS\-specific information can be queried from within R using `Sys.info()`:
```
Sys.info()
#R> sysname release machine user
#R> "Linux" "4.2.0-35-generic" "x86_64" "robin"
```
Translated into English, the above output means that R is running on a 64 bit (`x86_64`) Linux distribution (`4.2.0-35-generic` is the Linux version) and that the current user is `robin`. Four other pieces of information (not shown) are also produced by the command, the meaning of which is well documented in a help file revealed by entering `?Sys.info` in the R console.
The **assertive.reflection** package can be used to report additional information about your computer’s operating system and R set\-up with functions for asserting operating system and other system characteristics. The `assert_*()` functions work by testing the truth of the statement and erroring if the statement is untrue. On a Linux system `assert_is_linux()` will run silently, whereas `assert_is_windows()` will cause an error. The package can also test for the IDE you are using (e.g. `assert_is_rstudio()`), the capabilities of R (`assert_r_has_libcurl_capability()` etc.), and what OS tools are available (e.g. `assert_r_can_compile_code()`). These functions can be useful for running code that is designed only to run on one type of set\-up.
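For example, on the Linux machine described above, the first call below runs silently while the second raises an error:

```
library("assertive.reflection")
assert_is_linux() # silent on a Linux machine
assert_is_windows() # error: this machine does not run Windows
```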
### 2\.2\.1 Operating system and resource monitoring
Minor differences aside, R’s computational efficiency is broadly the same across different operating systems.[2](#fn2)
Beyond the \\(32\\) vs \\(64\\) bit issue (covered in the next chapter) and *process forking* (covered in Chapter [7](performance.html#performance)), another OS\-related issue to consider is external dependencies: programs that R packages depend on. Sometimes external package dependencies must be installed manually (i.e. not using `install.packages()`). This is especially common with Unix\-based systems (Linux and Mac). On Debian\-based operating systems such as Ubuntu, many R packages can be installed at the OS level, to ensure external dependencies are also installed (see Section [2\.3\.4](set-up.html#deps)).
Resource monitoring is the process of checking the status of key OS variables. For computationally intensive work, it is sensible to monitor system resources in this way. Resource monitoring can help identify computational bottlenecks. Alongside R profiling functions such as **profvis** (see Section [7\.2](performance.html#performance-profvis)), system monitoring provides a useful tool for understanding how R is performing in relation to variables reporting the OS state, such as how much RAM is in use, which relates to the wider question of whether more is needed (covered in Chapter [3](programming.html#programming)).
CPU resource allocated over time is another common OS variable that is worth monitoring. A basic use case is to check whether your code is running in parallel (see Figure [2\.1](set-up.html#fig:2-1)), and whether there is spare CPU capacity on the OS that could be harnessed by parallel code.
Figure 2\.1: Output from a system monitor (`gnome-system-monitor` running on Ubuntu) showing the resources consumed by running the code presented in the second of the Exercises at the end of this section. The first increases RAM use, the second is single\-threaded and the third is multi\-threaded.
System monitoring is a complex topic that spills over into system administration and server management. Fortunately there are many tools designed to ease monitoring on all major operating systems.
* On Linux, the shell command `top` displays key resource use figures for most distributions. `htop` and Gnome’s **System Monitor** (`gnome-system-monitor`, see Figure [2\.1](set-up.html#fig:2-1)) are more refined alternatives which use command\-line and graphical user interfaces respectively. A number of options such as `nethogs` monitor internet usage.
* On Mac, the **Activity Monitor** provides similar functionality. This can be initiated from the Utilities folder in Launchpad.
* On Windows, the **Task Manager** provides key information on RAM and CPU use by process. It can be started in modern Windows versions by typing `Ctrl-Alt-Del` or by right\-clicking the task bar and selecting ‘Start Task Manager’.
#### Exercises
1. What is the exact version of your computer’s operating system?
2. Start an activity monitor then execute the following code chunk. In it `lapply()` (or its parallel version `mclapply()`) is used to *apply* a function, `median()`, over every column in the data frame object `X` (see Section [3\.5](programming.html#the-apply-family) for more on the ‘apply family’ of functions). The reason this works is that a data frame is really a list of vectors, each vector forming a column.
How do the system output logs (results) on your system compare to those presented in Figure [2\.1](set-up.html#fig:2-1)?
```
# Note: uses 2+ GB RAM and several seconds or more depending on hardware
# 1: Create large dataset
X = as.data.frame(matrix(rnorm(1e8), nrow = 1e7))
# 2: Find the median of each column using a single core
r1 = lapply(X, median)
# 3: Find the median of each column using many cores
r2 = parallel::mclapply(X, median)
```
`mclapply()` only works in parallel on Mac and Linux. In Chapter 7 you’ll learn about an equivalent function, `parLapply()`, that works in parallel on Windows.
3. What do you notice regarding CPU usage, RAM and system time, during and after each of the three operations?
4. Bonus question: how would the results change depending on operating system?
2\.3 R version
--------------
It is important to be aware that R is an evolving software project, whose behaviour changes over time. In general, base R is very conservative about making changes that break backwards compatibility. However, packages occasionally change substantially from one release to the next; typically it depends on the age of the package. For most use cases we recommend always using the most up\-to\-date version of R and packages, so you have the latest code. In some circumstances (e.g. on a production server or working in a team) you may alternatively want to use specific versions which have been tested, to ensure stability. Keeping packages up\-to\-date is desirable because new code tends to be more efficient, intuitive, robust and feature rich. This section explains how.
Previous R versions can be installed from CRAN’s archive of previous releases. The binary versions for all OSs can be found at [cran.r\-project.org/bin/](https://cran.r-project.org/bin/). To download binary versions for Ubuntu ‘Xenial’, for example, see [cran.r\-project.org/bin/linux/ubuntu/xenial/](https://cran.r-project.org/bin/linux/ubuntu/xenial/). To ‘pin’ specific versions of R packages you can use the **packrat** package. For more on pinning R versions and R packages see articles on RStudio’s website [Using\-Different\-Versions\-of\-R](https://support.rstudio.com/hc/en-us/articles/200486138-Using-Different-Versions-of-R) and [rstudio.github.io/packrat/](https://rstudio.github.io/packrat/).
### 2\.3\.1 Installing R
The method of installing R varies for Windows, Linux and Mac.
On Windows, a single `.exe` file (hosted at [cran.r\-project.org/bin/windows/base/](https://cran.r-project.org/bin/windows/base/)) will install the base R package.
On a Mac, the latest version should be installed by downloading the `.pkg` files hosted at [cran.r\-project.org/bin/macosx/](https://cran.r-project.org/bin/macosx/).
On Linux, the installation method depends on the distribution of Linux installed, although the principles are the same. We’ll cover how to install R on Debian\-based systems, with links at the end for details on other Linux distributions. The first stage is to add the CRAN repository, to ensure that the latest version is installed. If you are running Ubuntu 16\.04, for example, append the following line to the file `/etc/apt/sources.list`:
```
deb http://cran.rstudio.com/bin/linux/ubuntu xenial/
```
`http://cran.rstudio.com` is the mirror (which can be replaced by any listed at [cran.r\-project.org/mirrors.html](https://cran.r-project.org/mirrors.html)) and `xenial` is the release. See the [Debian](https://cran.r-project.org/bin/linux/debian/) and [Ubuntu](https://cran.r-project.org/bin/linux/ubuntu/) installation pages on CRAN for further details.
Once the appropriate repository has been added and the system updated (e.g. with `sudo apt-get update`), `r-base` and other `r-` packages can be installed using the `apt` system. The following two commands, for example, would install the base R package (a ‘bare\-bones’ install) and the package **rcurl**, which has an external dependency:
```
sudo apt-get install r-base # install base R
sudo apt-get install r-cran-rcurl # install the rcurl package
```
`apt-cache search "^r-.*" | sort` will display all R packages that can be installed from `apt` in Debian\-based systems. In Fedora\-based systems, the equivalent command is `yum list R-\*`.
Typical output from the second command is illustrated below:
```
The following extra packages will be installed:
libcurl3-nss
The following NEW packages will be installed
libcurl3-nss r-cran-rcurl
0 to upgrade, 2 to newly install, 0 to remove and 16 not to upgrade.
Need to get 699 kB of archives.
After this operation, 2,132 kB of additional disk space will be used.
Do you want to continue? [Y/n]
```
Further details are provided at [cran.r\-project.org/bin/linux/](https://cran.r-project.org/bin/linux/) for Debian, Redhat and Suse OSs. R also works on FreeBSD and other Unix\-based systems.[3](#fn3)
Once R is installed it should be kept up\-to\-date.
### 2\.3\.2 Updating R
R is a mature and stable language so well\-written code in base R should work on most versions. However, it is important to keep your R version relatively up\-to\-date, because:
* Bug fixes are introduced in each version, making errors less likely;
* Performance enhancements are made from one version to the next, meaning your code may run faster in later versions;
* Many R packages only work on recent versions of R.
Release notes with details on each of these issues are hosted at [cran.r\-project.org/src/base/NEWS](https://cran.r-project.org/src/base/NEWS). R release versions have 3 components corresponding to major.minor.patch changes. Generally 2 or 3 patches are released before the next minor increment \- each ‘patch’ is released roughly every 3 months. R 3\.2, for example, has consisted of 3 versions: 3\.2\.0, 3\.2\.1 and 3\.2\.2\.
* On Ubuntu\-based systems, new versions of R should be automatically detected through the software management system, and can be installed with `apt-get upgrade`.
* On Mac, the latest version should be installed by the user from the `.pkg` files mentioned above.
* On Windows, the **installr** package makes updating easy:
```
# check and install the latest R version
installr::updateR()
```
For information about changes to expect in the next version, you can subscribe to R’s NEWS RSS feed: [developer.r\-project.org/blosxom.cgi/R\-devel/NEWS/index.rss](http://developer.r-project.org/blosxom.cgi/R-devel/NEWS/index.rss). It’s a good way of keeping up\-to\-date.
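To check which version of R you are currently running (useful before deciding whether to update), base R provides:

```
getRversion() # the version number, e.g. 3.2.2
R.version.string # a human-readable description
```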
### 2\.3\.3 Installing R packages
Large projects may need several packages to be installed. In this case, the required packages can be installed at once. Using the example of packages for handling spatial data, this can be done quickly and concisely with the following code:
```
pkgs = c("raster", "leaflet", "rgeos") # package names
install.packages(pkgs)
```
In the above code all the required packages are installed with two not three lines, reducing typing. Note that we can now re\-use the `pkgs` object to load them all:
```
inst = lapply(pkgs, library, character.only = TRUE) # load them
```
In the above code, `library()` is executed for every package name stored in the character vector `pkgs`. We use `library()` here instead of `require()` because the former produces an error if the package is not available.
Loading all packages at the beginning of a script is good practice as it ensures all dependencies have been installed *before* time is spent executing code. Storing package names in a character vector object such as `pkgs` is also useful because it allows us to refer back to them again and again.
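A common extension of this pattern is to install only the packages that are missing before loading them. The sketch below uses only base functions and is illustrative rather than taken from the book:

```
# Install any packages in pkgs that are not already installed
to_install = pkgs[!pkgs %in% rownames(installed.packages())]
if (length(to_install) > 0) {
  install.packages(to_install)
}
inst = lapply(pkgs, library, character.only = TRUE)
```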
### 2\.3\.4 Installing R packages with dependencies
Some packages have external dependencies (i.e. they call libraries outside R). On Unix\-like systems, these are best installed onto the operating system, bypassing `install.packages()`. This will ensure the necessary dependencies are installed and set up correctly alongside the R package. On Debian\-based distributions such as Ubuntu, for example, packages with names starting with `r-cran-` can be searched for and installed as follows (see [cran.r\-project.org/bin/linux/ubuntu/](https://cran.r-project.org/bin/linux/ubuntu/) for a list of these):
```
apt-cache search r-cran- # search for available cran Debian packages
sudo apt-get install r-cran-rgdal # install the rgdal package (with dependencies)
```
On Windows the **installr** package helps manage and update R packages with system\-level dependencies. For example the **Rtools** package for compiling C/C\+\+ code on Windows can be installed with the following command:
```
installr::install.rtools()
```
### 2\.3\.5 Updating R packages
An efficient R set\-up will contain up\-to\-date packages.
This can be done *for all packages* with:
```
update.packages() # update installed CRAN packages
```
The default for this function is for the `ask` argument to be set to `TRUE`, giving control over what is downloaded onto your system. This is generally desirable as updating dozens of large packages can consume a large proportion of available system resources.
To update packages automatically, you can add the line `update.packages(ask = FALSE)` to your `.Rprofile` startup file (see the next section for more on `.Rprofile`). Thanks to Richard Cotton for this tip.
An even more interactive method for updating packages is provided by RStudio via Tools \> Check for Package Updates. Many such time\-saving tricks are enabled by RStudio, as described in [a subsequent section](set-up.html#install-rstudio). Next (after the exercises) we take a look at how to configure R using start\-up files.
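To see which packages have newer versions on CRAN without installing anything (this also answers the second exercise below), the base **utils** function `old.packages()` can be used:

```
old.packages() # installed packages with newer versions on CRAN
```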
#### Exercises
1. What version of R are you using? Is it the most up\-to\-date?
2. Do any of your packages need updating?
2\.4 R startup
--------------
Every time R starts a couple of file scripts are run by default, as documented in `?Startup`. This section explains how to customise these files, allowing you to save API keys or load frequently used functions. Before learning how to modify these files, we’ll take a look at how to ignore them, with R’s startup arguments. If you want to turn custom set\-up ‘on’ it’s useful to be able to turn it ‘off’, e.g. for debugging.
Some of R’s startup arguments can be controlled interactively in RStudio. See the online help file [Customizing RStudio](https://support.rstudio.com/hc/en-us/articles/200549016-Customizing-RStudio) for more on this.
### 2\.4\.1 R startup arguments
A number of arguments can be appended to the R startup command (`R` in a shell environment) which relate to startup.
The following are particularly important:
* `--no-environ` and `--no-init-file` tell R not to read the `.Renviron` and `.Rprofile` startup files (described in the next section) at startup.
* `--no-restore` tells R not to load a file called `.RData` (the default name for R session files) that may be present in the current working directory.
* `--no-save` tells R not to ask the user if they want to save objects saved in RAM when the session is ended with `q()`.
Adding each of these will make R load slightly faster, and mean that slightly less user input is needed when you quit. R’s default setting of loading data from the last session automatically is potentially problematic in this context. See [An Introduction to R](https://cran.r-project.org/doc/manuals/R-intro.pdf), Appendix B, for more startup arguments.
A concise way to load a ‘vanilla’ version of R, with all of the above options enabled is with an option of the same name:
```
R --vanilla
```
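The same flag works when running scripts non\-interactively with `Rscript`; the script name below is hypothetical:

```
Rscript --vanilla myscript.R
```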
### 2\.4\.2 An overview of R’s startup files
Two files are read each time R starts (unless one of the command line options outlined above is used):
* `.Renviron`, the primary purpose of which is to set *environment variables*. These tell R where to find external programs and can hold user\-specific information that needs to be kept secret, typically *API keys*.
* `.Rprofile` is a plain text file (which is always called `.Rprofile`, hence its name) that simply runs lines of R code every time R starts. If you want R to check for package updates each time it starts (as explained in the previous section), you simply add the relevant line somewhere in this file.
When R starts (unless it was launched with `--no-environ`) it first searches for `.Renviron` and then `.Rprofile`, in that order.
Although `.Renviron` is searched for first, we will look at `.Rprofile` first as it is simpler and for many set\-up tasks more frequently useful. Both files can exist in three directories on your computer.
Modification of R’s startup files should not be taken lightly. This is an advanced topic. If you modify your startup files in the wrong way, it can cause problems: a seemingly innocent call to `setwd()` in `.Rprofile`, for example, will break **devtools** `build` and `check` functions.
Proceed with caution and, if you mess things up, just delete the offending files!
### 2\.4\.3 The location of startup files
Confusingly, multiple versions of these files can exist on the same computer, only one of which will be used per session. Note also that these files should only be changed with caution and if you know what you are doing. This is because they can make your R version behave differently to other R installations, potentially reducing the reproducibility of your code.
Files in three folders are important in this process:
* `R_HOME`, the directory in which R is installed. The `etc` sub\-directory can contain start\-up files read early on in the start\-up process. Find out where your `R_HOME` is with the `R.home()` command.
* `HOME`, the user’s home directory. Typically this is `/home/username` on Unix machines or `C:\Users\username` on Windows (since Windows 7\). Ask R where your home directory is with `Sys.getenv("HOME")`.
* R’s current working directory. This is reported by `getwd()`.
It is important to know the location of the `.Rprofile` and `.Renviron` set\-up files that are being used out of these three options.
R only uses one `.Rprofile` and one `.Renviron` in any session: if you have a `.Rprofile` file in your current project, R will ignore `.Rprofile` in `R_HOME` and `HOME`.
Likewise, `.Rprofile` in `HOME` overrides `.Rprofile` in `R_HOME`.
The same applies to `.Renviron`: you should remember that adding project specific environment variables with `.Renviron` will de\-activate other `.Renviron` files.
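A quick way to see which of these candidate files exist on your system is sketched below, using only base functions, with the locations listed in increasing order of precedence as described above:

```
site_path = R.home(component = "home")
candidates = c(
  file.path(site_path, "etc", "Rprofile.site"), # site-wide, in R_HOME
  path.expand("~/.Rprofile"), # user-level, in HOME
  file.path(getwd(), ".Rprofile") # project-specific
)
candidates[file.exists(candidates)]
```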
To create a project\-specific start\-up script, simply create a `.Rprofile` file in the project’s root directory and start adding R code, e.g. via `file.edit(".Rprofile")`.
Remember that this will make `.Rprofile` in the home directory be ignored.
The following commands will open your `.Rprofile` from within an R editor:
```
file.edit("~/.Rprofile") # edit .Rprofile in HOME
file.edit(".Rprofile") # edit project specific .Rprofile
```
File paths provided by Windows operating systems will not always work in R. Specifically, if you use a path that contains single backslashes, such as `C:\DATA\data.csv`, as provided by Windows, this will generate the error: `Error: unexpected input in "C:\"`. To overcome this issue R provides two functions, `file.path()` and `normalizePath()`. The former can be used to specify file locations without having to use symbols to represent relative file paths, as follows: `file.path("C:", "DATA", "data.csv")`. The latter takes any input string for a file name and outputs a text string that is standard (canonical) for the operating system. `normalizePath("C:/DATA/data.csv")`, for example, outputs `C:\DATA\data.csv` on a Windows machine but `C:/DATA/data.csv` on Unix\-based platforms. Note that only the latter would work on both platforms, so standard Unix file path notation is safe for all operating systems.
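A quick demonstration of the two functions (outputs from a Unix machine; the home directory shown is illustrative):

```
file.path("C:", "DATA", "data.csv")
#> [1] "C:/DATA/data.csv"
normalizePath("~", mustWork = FALSE)
#> [1] "/home/username"
```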
Editing the `.Renviron` file in the same locations will have the same effect.
The following code will create a user specific `.Renviron` file (where API keys and other cross\-project environment variables can be stored), without overwriting any existing file.
```
user_renviron = path.expand(file.path("~", ".Renviron"))
file.edit(user_renviron) # open with another text editor if this fails
```
The **pathological** package can help find where `.Rprofile` and `.Renviron` files are located on your system, thanks to the `os_path()` function. The output of `example(Startup)` is also instructive.
The location, contents and uses of each is outlined in more detail below.
### 2\.4\.4 The `.Rprofile` file
By default, R looks for and runs `.Rprofile` files in the three locations described above, in a specific order. `.Rprofile` files are simply R scripts that run each time R runs and they can be found within `R_HOME`, `HOME` and the project’s home directory, found with `getwd()`. To check if you have a site\-wide `.Rprofile`, which will run for all users on start\-up, run:
```
site_path = R.home(component = "home")
fname = file.path(site_path, "etc", "Rprofile.site")
file.exists(fname)
```
The above code checks for the presence of `Rprofile.site` in that directory. As outlined above, the `.Rprofile` located in your home directory is user\-specific. Again, we can test whether this file exists using:
```
file.exists("~/.Rprofile")
```
We can use R to create and edit `.Rprofile` (warning: do not overwrite your previous `.Rprofile` \- we suggest you try a project\-specific `.Rprofile` first):
```
file.edit("~/.Rprofile")
```
### 2\.4\.5 An example `.Rprofile` file
The example below provides a taster of what goes into `.Rprofile`.
Note that this is simply a usual R script, but with an unusual name.
The best way to understand what is going on is to create this same script, save it as `.Rprofile` in your current working directory and then restart your R session to observe what changes. To restart your R session from within RStudio you can click `Session > Restart R` or use the keyboard shortcut `Ctrl+Shift+F10`.
```
# A fun welcome message
message("Hi Robin, welcome to R")
# Customise the R prompt that prefixes every command
# (use " " for a blank prompt)
options(prompt = "R4geo> ")
```
To quickly explain each line of code: the first simply prints a message in the console each time a new R session is started. The second modifies the console prompt (set to `>` by default). Note that simply adding more lines to the `.Rprofile` will set more features. An important aspect of `.Rprofile` (and `.Renviron`) is that *each line is run once and only once for each R session*, so the options set within `.Rprofile` can easily be changed during the session. The following command run mid\-session, for example, will return the default prompt:
```
options(prompt = "> ")
```
More details on these, and other potentially useful `.Rprofile` options are described subsequently. For more suggestions of useful startup settings, see Examples in `help("Startup")` and online resources such as those at [statmethods.net](http://www.statmethods.net/interface/customizing.html). The help pages for R options (accessible with `?options`) are also worth a read before writing your own `.Rprofile`.
Ever been frustrated by unwanted `+` symbols that prevent copied and pasted multi\-line functions from working? These potentially annoying `+`s can be eradicated by adding `options(continue = " ")` to your `.Rprofile`.
#### 2\.4\.5\.1 Setting options
The function `options()`, used above, contains a number of default settings. Typing `options()` provides a good indication of what can be configured. Since options are often related to personal preferences (with few implications for reproducibility) that you will want for many of your R sessions, `.Rprofile` in your home directory or in your project’s folder is a sensible place to set them. Other illustrative options are shown below:
```
# With a customised prompt
options(prompt = "R> ", digits = 4, show.signif.stars = FALSE, continue = " ")
# With a longer prompt and empty 'continue' indent (default is "+ ")
options(prompt = "R4Geo> ", digits = 3, continue = " ")
```
The first call changes four default options in a single line.
* The R prompt, from the boring `>` to the exciting `R>`.
* The number of digits displayed.
* Removing the stars after significant \\(p\\)\-values.
* Removing the `+` in multi\-line functions.
Try to avoid adding options to the start\-up file that make your code non\-portable. For example, adding `options(stringsAsFactors = FALSE)` to your start\-up script has knock\-on effects for `read.table()` and related functions including `read.csv()`, making them convert text strings into characters rather than into factors, as is the default. This may be useful for you, but can make your code less portable, so be warned.
#### 2\.4\.5\.2 Setting the CRAN mirror
To avoid setting the CRAN mirror each time you run `install.packages()` you can permanently set the mirror in your `.Rprofile`.
```
# `local` creates a new, empty environment
# This avoids polluting .GlobalEnv with the object r
local({
r = getOption("repos")
r["CRAN"] = "https://cran.rstudio.com/"
options(repos = r)
})
```
The RStudio mirror is a virtual machine run by Amazon’s EC2 service, and it syncs with the main CRAN mirror in Austria once per day. Since RStudio is using Amazon’s CloudFront, the repository is automatically distributed around the world, so no matter where you are in the world, the data doesn’t need to travel very far, and is therefore fast to download.
#### 2\.4\.5\.3 The **fortunes** package
This section illustrates the power of `.Rprofile` customisation with reference to a package that was developed for fun. The code below could easily be altered to automatically connect to a database, or ensure that the latest packages have been downloaded.
The **fortunes** package contains a number of memorable quotes that the community has collected over many years, called R fortunes. Each fortune has a number. To get fortune number \\(50\\), for example, enter
```
fortunes::fortune(50)
#>
#> To paraphrase provocatively, 'machine learning is statistics minus any checking
#> of models and assumptions'.
#> -- Brian D. Ripley (about the difference between machine learning and
#> statistics)
#> useR! 2004, Vienna (May 2004)
```
It is easy to make R print out one of these nuggets of truth each time you start a session, by adding the following to `.Rprofile`:
```
if (interactive())
  try(fortunes::fortune(), silent = TRUE)
```
The `interactive()` function tests whether R is being used interactively in a terminal. The `fortune()` function is called within `try()`: if the **fortunes** package is not available, we avoid raising an error and simply move on. By calling the function as `fortunes::fortune()` we avoid adding the **fortunes** package to the list of attached packages (shown by `search()`).
The function `.Last()`, if it exists in the `.Rprofile`, is always run at the end of the session. We can use it to install the **fortunes** package if needed. To load the package, we use `require()`: if the package isn’t installed, `require()` returns `FALSE` and raises a warning, rather than the error that `library()` would throw.
```
.Last = function() {
cond = suppressWarnings(!require(fortunes, quietly = TRUE))
if (cond)
try(install.packages("fortunes"), silent = TRUE)
message("Goodbye at ", date(), "\n")
}
```
#### 2\.4\.5\.4 Useful functions
You can use `.Rprofile` to define new ‘helper’ functions or redefine existing ones so they’re faster to type.
For example, we could load the following two functions for examining data frames:
```
# ht == headtail
# Show the first 6 rows & last 6 rows of a data frame
ht = function(d, n = 6) rbind(head(d, n), tail(d, n))
# Show the first 5 rows & first 5 columns of a data frame
hh = function(d) d[1:5, 1:5]
```
and a function for setting a nice plotting window:
```
nice_par = function(mar = c(3, 3, 2, 1), mgp = c(2, 0.4, 0), tck = -0.01,
cex.axis = 0.9, las = 1, mfrow = c(1, 1), ...) {
par(mar = mar, mgp = mgp, tck = tck, cex.axis = cex.axis, las = las,
mfrow = mfrow, ...)
}
```
Note that these functions are for personal use and are unlikely to interfere with code from other people.
For this reason, even if you use a certain package every day, we don’t recommend loading it in your `.Rprofile`.
Shortening long function names for interactive (but not reproducible) code writing is another option for using `.Rprofile` to increase efficiency.
If you frequently use `View()`, for example, you may be able to save time by referring to it in abbreviated form. This is illustrated below to make it faster to view datasets (although with IDE\-driven autocompletion, outlined in the next section, the time saving is smaller).
```
v = utils::View
```
Also beware the dangers of loading many functions by default: it may make your code less portable.
Another potentially useful setting to change in `.Rprofile` is R’s current working directory.
If you want R to automatically set the working directory to the R folder of your project, for example, add the following line of code to the **project**\-specific `.Rprofile`:
```
setwd("R")
```
#### 2\.4\.5\.5 Creating hidden environments with .Rprofile
Beyond making your code less portable, another downside of putting functions in your `.Rprofile` is that it can clutter up your work space: when you run the `ls()` command, your `.Rprofile` functions will appear, and if you run `rm(list = ls())` they will be deleted. One neat trick to overcome this issue is to use hidden objects and environments. When an object name starts with `.`, by default it doesn’t appear in the output of the `ls()` function:
```
.obj = 1
".obj" %in% ls()
#> [1] FALSE
```
This concept also works with environments. In the `.Rprofile` file we can create a *hidden* environment:
```
.env = new.env()
```
and then add functions to this environment:
```
.env$ht = function(d, n = 6) rbind(head(d, n), tail(d, n))
```
At the end of the `.Rprofile` file, we use `attach`, which makes it possible to refer to objects in the environment by their names alone.
```
attach(.env)
```
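After `attach(.env)`, the function can be called directly while the global work space stays clean:

```
ht(mtcars) # found via the attached environment
"ht" %in% ls() # FALSE: not in the global environment
rm(list = ls()) # does not delete ht
```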
### 2\.4\.6 The `.Renviron` file
The `.Renviron` file is used to store environment variables. It follows a similar start\-up routine to the `.Rprofile` file: R first looks for a global `.Renviron` file, then for local versions. A typical use of the `.Renviron` file is to specify the `R_LIBS` path, which determines where new packages are installed:
```
# Linux
R_LIBS=~/R/library
# Windows
R_LIBS=C:/R/library
```
After setting this, `install.packages()` saves packages in the directory specified by `R_LIBS`.
The location of this directory can be referred back to subsequently as follows:
```
Sys.getenv("R_LIBS_USER")
#> [1] "/home/travis/R/Library"
```
All currently stored environment variables can be seen by calling `Sys.getenv()` with no arguments. Note that many environment variables are already pre\-set and do not need to be specified in `.Renviron`. `HOME`, for example, which can be seen with `Sys.getenv("HOME")`, is taken from the operating system’s list of environment variables. A list of the most important environment variables that can affect R’s behaviour is documented in the little known help page `help("environment variables")`.
To set or unset an environment variable for the duration of a session, use the following commands:
```
Sys.setenv("TEST" = "test-string") # set an environment variable for the session
Sys.unsetenv("TEST") # unset it
```
Another common use of `.Renviron` is to store API keys and authentication tokens that will be available from one session to another.[4](#fn4)
A common use case is setting the ‘envvar’ `GITHUB_PAT`, which will be detected by the **devtools** package via the function `github_pat()`. To take another example, the following line in `.Renviron` sets the `ZEIT_KEY` environment variable which is used in the **[diezeit](https://cran.r-project.org/web/packages/diezeit/)** package:
```
ZEIT_KEY=PUT_YOUR_KEY_HERE
```
You will need to start a new R session for the environment variable (accessed by `Sys.getenv()`) to be visible. To test if the example API key has been successfully added as an environment variable, run the following:
```
Sys.getenv("ZEIT_KEY")
```
Use of the `.Renviron` file for storing settings such as library paths and API keys is efficient because it reduces the need to update your settings for every R session. Furthermore, the same `.Renviron` file will work across different platforms, so keep it stored safely.
#### 2\.4\.6\.1 Example `.Renviron` file
My `.Renviron` file has grown over the years. I often switch between my desktop and laptop computers, so to maintain a consistent working environment, I have the same `.Renviron` file on all of my machines. As well as containing an `R_LIBS` entry and some API keys, my `.Renviron` has a few other lines:
* `TMPDIR=/data/R_tmp/`. When R is running, it creates temporary copies of files. On my work machine the default directory is a network drive, so I redirect temporary files to a local one.
* `R_COMPILE_PKGS=3`. Byte compile all packages (covered in Chapter [3](programming.html#programming)).
* `R_LIBS_SITE=/usr/lib/R/site-library:/usr/lib/R/library` I explicitly state where to look for packages. My university has a site\-wide directory that contains out\-of\-date packages. I want to avoid using this directory.
* `R_DEFAULT_PACKAGES=utils,grDevices,graphics,stats,methods`. Explicitly state the packages to load. Note I don’t load the `datasets` package, but I ensure that `methods` is always loaded. Due to historical reasons, the `methods` package isn’t loaded by default in certain applications, e.g. `Rscript`.
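Putting these together, a hypothetical `.Renviron` combining the entries discussed in this section might read as follows (all values are placeholders to adapt):

```
R_LIBS=~/R/library
TMPDIR=/data/R_tmp/
R_COMPILE_PKGS=3
R_DEFAULT_PACKAGES=utils,grDevices,graphics,stats,methods
GITHUB_PAT=PUT_YOUR_KEY_HERE
```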
#### Exercises
1. What are the three locations where the startup files are stored? Where are these locations on your computer?
2. For each location, does a `.Rprofile` or `.Renviron` file exist?
3. Create a `.Rprofile` file in your current working directory that prints the message `Happy efficient R programming` each time you start R at this location.
4. What happens to the startup files in `R_HOME` if you create them in `HOME` or local project directories?
### 2\.4\.1 R startup arguments
A number of arguments can be appended to the R startup command (`R` in a shell environment) which relate to startup.
The following are particularly important:
* `--no-environ` and `--no-init` arguments tell R to only look for startup files (described in the next section) in the current working directory.
* `--no-restore` tells R not to load a file called `.RData` (the default name for R session files) that may be present in the current working directory.
* `--no-save` tells R not to ask the user if they want to save objects saved in RAM when the session is ended with `q()`.
Adding each of these will make R load slightly faster, and mean that slightly less user input is needed when you quit. R’s default setting of loading data from the last session automatically is potentially problematic in this context. See [An Introduction to R](https://cran.r-project.org/doc/manuals/R-intro.pdf), Appendix B, for more startup arguments.
A concise way to load a ‘vanilla’ version of R, with all of the above options enabled, is with an option of the same name:
```
R --vanilla
```
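According to R’s own help (`R --help`), `--vanilla` combines the individual flags, so the line above is equivalent to:
```
R --no-save --no-restore --no-site-file --no-init-file --no-environ
```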
### 2\.4\.2 An overview of R’s startup files
Two files are read each time R starts (unless one of the command line options outlined above is used):
* `.Renviron`, the primary purpose of which is to set *environment variables*. These tell R where to find external programs and can hold user\-specific information that needs to be kept secret, typically *API keys*.
* `.Rprofile` is a plain text file (which is always called `.Rprofile`, hence its name) that simply runs lines of R code every time R starts. If you want R to check for package updates each time it starts (as explained in the previous section), you simply add the relevant line somewhere in this file.
When R starts (unless it was launched with `--no-environ`) it first searches for `.Renviron` and then `.Rprofile`, in that order.
Although `.Renviron` is searched for first, we will look at `.Rprofile` first as it is simpler and for many set\-up tasks more frequently useful. Both files can exist in three directories on your computer.
Modification of R’s startup files should not be taken lightly. This is an advanced topic. If you modify your startup files in the wrong way, it can cause problems: a seemingly innocent call to `setwd()` in `.Rprofile`, for example, will break **devtools** `build` and `check` functions.
Proceed with caution and, if you mess things up, just delete the offending files!
### 2\.4\.3 The location of startup files
Confusingly, multiple versions of these files can exist on the same computer, only one of which will be used per session. Note also that these files should only be changed with caution and if you know what you are doing. This is because they can make your R version behave differently to other R installations, potentially reducing the reproducibility of your code.
Files in three folders are important in this process:
* `R_HOME`, the directory in which R is installed. The `etc` sub\-directory can contain start\-up files read early on in the start\-up process. Find out where your `R_HOME` is with the `R.home()` command.
* `HOME`, the user’s home directory. Typically this is `/home/username` on Unix machines or `C:\Users\username` on Windows (since Windows 7\). Ask R where your home directory is with `Sys.getenv("HOME")`.
* R’s current working directory. This is reported by `getwd()`.
It is important to know the location of the `.Rprofile` and `.Renviron` set\-up files that are being used out of these three options.
R only uses one `.Rprofile` and one `.Renviron` in any session: if you have a `.Rprofile` file in your current project, R will ignore `.Rprofile` in `R_HOME` and `HOME`.
Likewise, `.Rprofile` in `HOME` overrides `.Rprofile` in `R_HOME`.
The same applies to `.Renviron`: you should remember that adding project specific environment variables with `.Renviron` will de\-activate other `.Renviron` files.
To create a project\-specific start\-up script, simply create a `.Rprofile` file in the project’s root directory and start adding R code, e.g. via `file.edit(".Rprofile")`.
Remember that this will cause the `.Rprofile` in your home directory to be ignored.
The following commands will open your `.Rprofile` from within an R editor:
```
file.edit("~/.Rprofile") # edit .Rprofile in HOME
file.edit(".Rprofile") # edit project specific .Rprofile
```
File paths provided by Windows operating systems will not always work in R. Specifically, if you use a path that contains single backslashes, such as `C:\DATA\data.csv`, as provided by Windows, this will generate the error: `Error: unexpected input in "C:\"`. To overcome this issue R provides two functions, `file.path()` and `normalizePath()`. The former can be used to specify file locations without having to use symbols to represent relative file paths, as follows: `file.path("C:", "DATA", "data.csv")`. The latter takes any input string for a file name and outputs a text string that is standard (canonical) for the operating system. `normalizePath("C:/DATA/data.csv")`, for example, outputs `C:\DATA\data.csv` on a Windows machine but `C:/DATA/data.csv` on Unix\-based platforms. Note that only the forward\-slash form works on both platforms, so standard Unix file path notation is safe for all operating systems.
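As a quick illustration of both functions (the path is purely illustrative; `mustWork = FALSE` stops `normalizePath()` complaining when the file does not exist):
```
file.path("C:", "DATA", "data.csv") # portable path construction
#> [1] "C:/DATA/data.csv"
normalizePath("C:/DATA/data.csv", mustWork = FALSE) # canonical form for this OS
```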
Editing the `.Renviron` file in the same locations will have the same effect.
The following code will create a user specific `.Renviron` file (where API keys and other cross\-project environment variables can be stored), without overwriting any existing file.
```
user_renviron = path.expand(file.path("~", ".Renviron"))
file.edit(user_renviron) # open with another text editor if this fails
```
The **pathological** package can help find where `.Rprofile` and `.Renviron` files are located on your system, thanks to the `os_path()` function. The output of `example(Startup)` is also instructive.
The location, contents and uses of each are outlined in more detail below.
### 2\.4\.4 The `.Rprofile` file
By default, R looks for and runs `.Rprofile` files in the three locations described above, in a specific order. `.Rprofile` files are simply R scripts that run each time R runs and they can be found within `R_HOME`, `HOME` and the project’s home directory, found with `getwd()`. To check if you have a site\-wide `.Rprofile`, which will run for all users on start\-up, run:
```
site_path = R.home(component = "home")
fname = file.path(site_path, "etc", "Rprofile.site")
file.exists(fname)
```
The above code checks for the presence of `Rprofile.site` in that directory. As outlined above, the `.Rprofile` located in your home directory is user\-specific. Again, we can test whether this file exists using
```
file.exists("~/.Rprofile")
```
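To check all three locations in one go, a small sketch along these lines may help (the project entry assumes your current working directory is the project root):
```
rprofile_paths = c(
  site = file.path(R.home("etc"), "Rprofile.site"), # R_HOME
  user = file.path(Sys.getenv("HOME"), ".Rprofile"), # HOME
  project = file.path(getwd(), ".Rprofile") # current project
)
file.exists(rprofile_paths)
```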
We can use R to create and edit `.Rprofile` (warning: do not overwrite your previous `.Rprofile` \- we suggest you try a project\-specific `.Rprofile` first):
```
file.edit("~/.Rprofile")
```
### 2\.4\.5 An example `.Rprofile` file
The example below provides a taster of what goes into `.Rprofile`.
Note that this is simply a usual R script, but with an unusual name.
The best way to understand what is going on is to create this same script, save it as `.Rprofile` in your current working directory and then restart your R session to observe what changes. To restart your R session from within RStudio you can click `Session > Restart R` or use the keyboard shortcut `Ctrl+Shift+F10`.
```
# A fun welcome message
message("Hi Robin, welcome to R")
# Customise the R prompt that prefixes every command
# (use " " for a blank prompt)
options(prompt = "R4geo> ")
```
To quickly explain each line of code: the first simply prints a message in the console each time a new R session is started. The second modifies the prompt displayed in the console (set to `>` by default). Note that simply adding more lines to the `.Rprofile` will set more features. An important aspect of `.Rprofile` (and `.Renviron`) is that *each line is run once and only once for each R session*. That means that the options set within `.Rprofile` can easily be changed during the session. The following command run mid\-session, for example, will return the default prompt:
```
options(prompt = "> ")
```
More details on these, and other potentially useful `.Rprofile` options are described subsequently. For more suggestions of useful startup settings, see Examples in `help("Startup")` and online resources such as those at [statmethods.net](http://www.statmethods.net/interface/customizing.html). The help pages for R options (accessible with `?options`) are also worth a read before writing your own `.Rprofile`.
Ever been frustrated by unwanted `+` symbols that prevent copied and pasted multi\-line functions from working? These potentially annoying `+`s can be eradicated by adding `options(continue = " ")` to your `.Rprofile`.
#### 2\.4\.5\.1 Setting options
The function `options`, used above, contains a number of default settings. Typing `options()` provides a good indication of what can be configured. Since `options()` are often related to personal preferences (with few implications for reproducibility) that you will want for many of your R sessions, `.Rprofile` in your home directory or in your project’s folder is a sensible place to set them. Other illustrative options are shown below:
```
# With a customised prompt
options(prompt = "R> ", digits = 4, show.signif.stars = FALSE, continue = " ")
# With a longer prompt and empty 'continue' indent (default is "+ ")
options(prompt = "R4Geo> ", digits = 3, continue = " ")
```
The first call changes four default options in a single line.
* The R prompt, from the boring `>` to the exciting `R>`.
* The number of digits displayed.
* Removing the stars after significant \\(p\\)\-values.
* Removing the `+` in multi\-line functions.
Try to avoid adding options to the start\-up file that make your code non\-portable. For example, adding `options(stringsAsFactors = FALSE)` to your start\-up script has knock\-on effects for `read.table` and related functions including `read.csv`, making them convert text strings into characters rather than into factors as is default. This may be useful for you, but can make your code less portable, so be warned.
#### 2\.4\.5\.2 Setting the CRAN mirror
To avoid setting the CRAN mirror each time you run `install.packages()` you can permanently set the mirror in your `.Rprofile`.
```
# `local` creates a new, empty environment
# This avoids polluting .GlobalEnv with the object r
local({
r = getOption("repos")
r["CRAN"] = "https://cran.rstudio.com/"
options(repos = r)
})
```
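After restarting R, you can confirm that the mirror has been set by inspecting the option:
```
getOption("repos")["CRAN"]
#> CRAN
#> "https://cran.rstudio.com/"
```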
The RStudio mirror is a virtual machine run by Amazon’s EC2 service, and it syncs with the main CRAN mirror in Austria once per day. Since RStudio is using Amazon’s CloudFront, the repository is automatically distributed around the world, so no matter where you are in the world, the data doesn’t need to travel very far, and is therefore fast to download.
#### 2\.4\.5\.3 The **fortunes** package
This section illustrates the power of `.Rprofile` customisation with reference to a package that was developed for fun. The code below could easily be altered to automatically connect to a database, or ensure that the latest packages have been downloaded.
The **fortunes** package contains a number of memorable quotes that the community has collected over many years, called R fortunes. Each fortune has a number. To get fortune number \\(50\\), for example, enter
```
fortunes::fortune(50)
#>
#> To paraphrase provocatively, 'machine learning is statistics minus any checking
#> of models and assumptions'.
#> -- Brian D. Ripley (about the difference between machine learning and
#> statistics)
#> useR! 2004, Vienna (May 2004)
```
It is easy to make R print out one of these nuggets of truth each time you start a session, by adding the following to `.Rprofile`:
```
if (interactive())
try(fortunes::fortune(), silent = TRUE)
```
The `interactive()` function tests whether R is being used interactively in a terminal. The `fortune()` function is called within `try()`, so if the **fortunes** package is not available we avoid raising an error and simply move on. By using the `fortunes::fortune()` notation we also avoid attaching the package: typing `search()` gives the list of attached packages, and **fortunes** will not appear in it.
The function `.Last()`, if it exists in the `.Rprofile`, is always run at the end of the session. We can use it to install the **fortunes** package if needed. To check for the package we use `require()` since, unlike `library()`, it returns `FALSE` with a warning rather than an error when the package isn’t installed.
```
.Last = function() {
cond = suppressWarnings(!require(fortunes, quietly = TRUE))
if (cond)
try(install.packages("fortunes"), silent = TRUE)
message("Goodbye at ", date(), "\n")
}
```
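A complementary hook, documented in `help("Startup")`, is `.First()`, which runs at the start of each session; a minimal sketch:
```
.First = function() {
  # Print a short status message at the start of every session
  message("Session started in ", getwd(), " at ", date())
}
```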
#### 2\.4\.5\.4 Useful functions
You can use `.Rprofile` to define new ‘helper’ functions or redefine existing ones so they’re faster to type.
For example, we could load the following two functions for examining data frames:
```
# ht == headtail
# Show the first 6 rows & last 6 rows of a data frame
ht = function(d, n = 6) rbind(head(d, n), tail(d, n))
# Show the first 5 rows & first 5 columns of a data frame
hh = function(d) d[1:5, 1:5]
```
and a function for setting a nice plotting window:
```
nice_par = function(mar = c(3, 3, 2, 1), mgp = c(2, 0.4, 0), tck = -0.01,
cex.axis = 0.9, las = 1, mfrow = c(1, 1), ...) {
par(mar = mar, mgp = mgp, tck = tck, cex.axis = cex.axis, las = las,
mfrow = mfrow, ...)
}
```
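Once these definitions are loaded via `.Rprofile`, the helpers are available in every new session; for example, using the built\-in `airquality` dataset:
```
ht(airquality) # first and last 6 rows of the data frame
hh(airquality) # its top-left 5 x 5 corner
nice_par() # tidy plotting defaults before a base plot
plot(airquality$Ozone)
```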
Note that these functions are for personal use and are unlikely to interfere with code from other people.
Attaching a package in `.Rprofile`, by contrast, changes how every script you run behaves and makes your code less portable, so even if you use a certain package every day, we don’t recommend loading it there.
Shortening long function names for interactive (but not reproducible) code writing is another option for using `.Rprofile` to increase efficiency.
If you frequently use `View()`, for example, you may be able to save time by referring to it in abbreviated form. This is illustrated below to make it faster to view datasets (although with IDE\-driven autocompletion, outlined in the next section, the time saving is smaller).
```
v = utils::View
```
Also beware the dangers of loading many functions by default: it may make your code less portable.
Another potentially useful setting to change in `.Rprofile` is R’s current working directory.
If you want R to automatically set the working directory to the `R` folder of your project, for example, you would add the following line of code to the **project**\-specific `.Rprofile` (bearing in mind the earlier caution that a `setwd()` call in `.Rprofile` can break **devtools** `build` and `check` functions):
```
setwd("R")
```
#### 2\.4\.5\.5 Creating hidden environments with .Rprofile
Beyond making your code less portable, another downside of putting functions in your `.Rprofile` is that it can clutter up your workspace:
when you run the `ls()` command, your `.Rprofile` functions will appear, and if you run `rm(list = ls())`, your functions will be deleted. One neat trick to overcome this issue is to use hidden objects and environments. When an object name starts with `.`, by default it doesn’t appear in the output of the `ls()` function:
```
.obj = 1
".obj" %in% ls()
#> [1] FALSE
```
This concept also works with environments. In the `.Rprofile` file we can create a *hidden* environment:
```
.env = new.env()
```
and then add functions to this environment:
```
.env$ht = function(d, n = 6) rbind(head(d, n), tail(d, n))
```
At the end of the `.Rprofile` file, we use `attach`, which makes it possible to refer to objects in the environment by their names alone.
```
attach(.env)
```
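To see the effect, the following sketch shows that functions in the attached hidden environment survive a workspace wipe:
```
ht(mtcars) # works: found via the attached hidden environment
rm(list = ls()) # clears the global environment only
ht(mtcars) # still works: .env is attached, not in .GlobalEnv
```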
### 2\.4\.6 The `.Renviron` file
The `.Renviron` file is used to store environment variables. It follows a similar start\-up routine to the `.Rprofile` file: R first looks for a global `.Renviron` file, then for local versions. A typical use of the `.Renviron` file is to specify the `R_LIBS` path, which determines where new packages are installed:
```
# Linux
R_LIBS=~/R/library
# Windows
R_LIBS=C:/R/library
```
After setting this, `install.packages()` saves packages in the directory specified by `R_LIBS`.
The location of this directory can be queried later as follows:
```
Sys.getenv("R_LIBS_USER")
#> [1] "/home/travis/R/Library"
```
All currently stored environment variables can be seen by calling `Sys.getenv()` with no arguments. Note that many environment variables are already pre\-set and do not need to be specified in `.Renviron`. `HOME`, for example, which can be seen with `Sys.getenv("HOME")`, is taken from the operating system’s list of environment variables. A list of the most important environment variables that can affect R’s behaviour is documented in the little\-known help page `help("environment variables")`.
To set or unset an environment variable for the duration of a session, use the following commands:
```
Sys.setenv("TEST" = "test-string") # set an environment variable for the session
Sys.unsetenv("TEST") # unset it
```
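If you edit `.Renviron` mid\-session, base R’s `readRenviron()` will re\-read it without restarting:
```
readRenviron("~/.Renviron") # load the file into the current session
```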
Another common use of `.Renviron` is to store API keys and authentication tokens that will be available from one session to another.[4](#fn4)
A common use case is setting the ‘envvar’ `GITHUB_PAT`, which will be detected by the **devtools** package via the function `github_pat()`. To take another example, the following line in `.Renviron` sets the `ZEIT_KEY` environment variable which is used in the **[diezeit](https://cran.r-project.org/web/packages/diezeit/)** package:
```
ZEIT_KEY=PUT_YOUR_KEY_HERE
```
You will need to start a new R session (or call `readRenviron()` as shown above) for the environment variable (accessed by `Sys.getenv()`) to be visible. To test if the example API key has been successfully added as an environment variable, run the following:
```
Sys.getenv("ZEIT_KEY")
```
Use of the `.Renviron` file for storing settings such as library paths and API keys is efficient because it reduces the need to update your settings for every R session. Furthermore, the same `.Renviron` file will work across different platforms, so keep it stored safely.
#### 2\.4\.6\.1 Example `.Renviron` file
My `.Renviron` file has grown over the years. I often switch between my desktop and laptop computers, so to maintain a consistent working environment, I have the same `.Renviron` file on all of my machines. As well as containing an `R_LIBS` entry and some API keys, my `.Renviron` has a few other lines, described below (a condensed version of the file is sketched after this list):
* `TMPDIR=/data/R_tmp/`. When R is running, it creates temporary copies of files. On my work machine, the default temporary directory is a network drive, so pointing `TMPDIR` at local storage avoids slow network writes.
* `R_COMPILE_PKGS=3`. Byte compile all packages (covered in Chapter [3](programming.html#programming)).
* `R_LIBS_SITE=/usr/lib/R/site-library:/usr/lib/R/library`. I explicitly state where to look for packages. My University has a site\-wide directory that contains out\-of\-date packages; I want to avoid using this directory.
* `R_DEFAULT_PACKAGES=utils,grDevices,graphics,stats,methods`. Explicitly state the packages to load. Note I don’t load the `datasets` package, but I ensure that `methods` is always loaded. For historical reasons, the `methods` package isn’t loaded by default by certain applications, e.g. `Rscript`.
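Putting these entries together with the `R_LIBS` setting described earlier, a condensed sketch of such a file (paths are illustrative) is:
```
R_LIBS=~/R/library
TMPDIR=/data/R_tmp/
R_COMPILE_PKGS=3
R_LIBS_SITE=/usr/lib/R/site-library:/usr/lib/R/library
R_DEFAULT_PACKAGES=utils,grDevices,graphics,stats,methods
```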
#### Exercises
1. What are the three locations where the startup files are stored? Where are these locations on your computer?
2. For each location, does a `.Rprofile` or `.Renviron` file exist?
3. Create a `.Rprofile` file in your current working directory that prints the message `Happy efficient R programming` each time you start R at this location.
4. What happens to the startup files in `R_HOME` if you create them in `HOME` or local project directories?
2\.5 RStudio
------------
RStudio is an Integrated Development Environment (IDE) for R.
It makes life easy for R users and developers with its intuitive and flexible interface. RStudio encourages good programming practice. Through its wide range of features RStudio can help make you a more efficient and productive R programmer. RStudio can, for example, greatly reduce the amount of time spent remembering and typing function names thanks to intelligent autocompletion.
Some of the most important features of RStudio include:
* Flexible window pane layouts to optimise use of screen space and enable fast interactive visual feed\-back.
* Intelligent autocompletion of function names, packages and R objects.
* A wide range of keyboard shortcuts.
* Visual display of objects, including a searchable data display table.
* Real\-time code checking, debugging and error detection.
* Menus to install and update packages.
* Project management and integration with version control.
* Quick display of function source code and help documents.
The above list of features should make it clear that a well set\-up IDE can be as important as a well set\-up R installation for becoming an efficient R programmer.[5](#fn5)
As with R itself, the best way to learn about RStudio is by using it.
It is therefore worth reading through this section in parallel with using RStudio to boost your productivity.
### 2\.5\.1 Installing and updating RStudio
RStudio is a mature, feature rich and powerful Integrated Development Environment (IDE) optimised for R programming and has become popular among R developers. The Open Source Edition is completely open source (as can be seen from the project’s GitHub repo). It can be installed on all major OSs from the RStudio website [rstudio.com](https://www.rstudio.com/products/rstudio/download/).
If you already have RStudio and would like to update it, simply click `Help > Check for Updates` in the menu.
For fast and efficient work, keyboard shortcuts should be used wherever possible, reducing the reliance on the mouse.
RStudio has many keyboard shortcuts that will help with this.
To get into good habits early, try accessing the RStudio Update interface without touching the mouse.
On Linux and Windows, dropdown menus are activated with the `Alt` key, so the menu item can be found with:
```
Alt+H U
```
On Mac, it works differently.
`Cmd+?` should activate a search across menu items, allowing the same operation to be achieved with:
```
Cmd+? update
```
In RStudio the keyboard shortcuts differ between Linux and Windows versions on one hand and Mac on the other. In this section we generally only use the Windows/Linux shortcut keys for brevity. The Mac equivalent is usually found by simply replacing `Ctrl` and `Alt` with the Mac\-specific `Cmd` key.
### 2\.5\.2 Window pane layout
RStudio has four main window ‘panes’ (see Figure [2\.2](set-up.html#fig:2-2)), each of which serves a range of purposes:
* The **Source pane**, for editing, saving, and dispatching R code to the console (top left). Note that this pane does not exist by default when you start RStudio: it appears when you open an R script, e.g. via `File -> New File -> R Script`. A common task in this pane is to send code on the current line to the console, via `Ctrl/Cmd+Enter`.
* The **Console pane**. Any code entered here is processed by R, line by line. This pane is ideal for interactively testing ideas before saving the final results in the Source pane above.
* The **Environment pane** (top right) contains information about the current objects loaded in the workspace including their class, dimension (if they are a data frame) and name. This pane also contains tabbed sub\-panes with a searchable history that was dispatched to the console and (if applicable to the project) Build and Git options.
* The **Files pane** (bottom right) contains a simple file browser, a Plots tab, Packages and Help tabs and a Viewer for visualising interactive R output such as those produced by the leaflet package and HTML ‘widgets’.
Figure 2\.2: RStudio Panels
Using each of the panels effectively and navigating between them quickly is a skill that will develop over time, and will only improve with practice.
#### Exercises
You are developing a project to visualise data.
Test out the multi\-panel RStudio workflow by following the steps below:
1. Create a new folder for the input data using the **Files pane**.
2. Type in `downl` in the **Source pane** and hit `Enter` to make the function `download.file()` autocomplete. Then type `"`, which will autocomplete to `""`, paste the URL of a file to download (e.g. `https://www.census.gov/2010census/csv/pop_change.csv`) and a file name (e.g. `pop_change.csv`).
3. Execute the full command with `Ctrl+Enter`:
```
download.file("https://www.census.gov/2010census/csv/pop_change.csv",
"extdata/pop_change.csv")
```
4. Write and execute a command to read\-in the data, such as
```
pop_change = read.csv("extdata/pop_change.csv", skip = 2)
```
5. Use the **Environment pane** to click on the data object `pop_change`. Note that this runs the command `View(pop_change)`, which launches an interactive data explore pane in the top left panel (see Figure [2\.3](set-up.html#fig:2-3)).
Figure 2\.3: The data viewing tab in RStudio.
6. Use the **Console** to test different plot commands to visualise the data, saving the code you want to keep back into the **Source pane**, as `pop_change.R`.
7. Use the **Plots tab** in the Files pane to scroll through past plots. Save the best using the Export dropdown button.
The above example shows that understanding these panes and how to use them interactively can help with the speed and productivity of your R programming.
Further, there are a number of RStudio settings that can help ensure that it works for your needs.
### 2\.5\.3 RStudio options
A range of `Project Options` and `Global Options` are available in RStudio from the `Tools` menu (accessible in Linux and Windows from the keyboard via `Alt+T`).
Most of these are self\-explanatory but it is worth mentioning a few that can boost your programming efficiency:
* GIT/SVN project settings allow RStudio to provide a graphical interface to your version control system, described in Chapter [9](collaboration.html#collaboration).
* R version settings allow RStudio to ‘point’ to different R versions/interpreters, which may be faster for some projects.
* `Restore .RData`: Unticking this default prevents loading previously created R objects. This will make starting R quicker and also reduce the chance of getting bugs due to previously created objects. For this reason we recommend you untick this box.
* Code editing options can make RStudio adapt to your coding style, for example, by preventing the autocompletion of braces, which some experienced programmers may find annoying. Enabling `Vim mode` makes RStudio act as a (partial) Vim emulator.
* Diagnostic settings can make RStudio more efficient by adding additional diagnostics or by removing diagnostics if they are slowing down your work. This may be an issue for people using RStudio to analyse large datasets on older low\-spec computers.
* Appearance: if you are struggling to see the source code, changing the default font size may make you a more efficient programmer by reducing the time overheads associated with squinting at the screen. Other options in this area relate more to aesthetics. Settings such as font type and background color are also important because feeling comfortable in your programming environment can boost productivity. Go to `Tools > Global Options` to modify these.
### 2\.5\.4 Autocompletion
R provides some basic autocompletion functionality.
Typing the beginning of a function name, for example `rn` (short for `rnorm()`), and hitting `Tab` twice will result in the full function names associated with this text string being printed.
In this case two options would be displayed: `rnbinom` and `rnorm`, providing a useful reminder to the user about what is available. The same applies to file names enclosed in quote marks: typing `te` in the console in a project which contains a file called `test.R` should result in the full name `"test.R"` being auto completed.
RStudio builds on this functionality and takes it to a new level.
The default settings for autocompletion in RStudio work well. They are intuitive and are likely to work well for many users, especially beginners. However, RStudio’s autocompletion options can be modified, by navigating to **Tools \> Global Options \> Code \> Completion** in RStudio’s top level menu.
Instead of only autocompleting options when `Tab` is pressed, RStudio autocompletes them at any point.
Building on the previous example, RStudio’s autocompletion triggers when the first three characters are typed: `rno`.
The same functionality works when only the first characters are typed, followed by `Tab`:
automatic autocompletion does not replace `Tab` autocompletion but supplements it.
Note that in RStudio two more options are provided to the user after entering `rn Tab` compared with entering the same text into base R’s console described in the previous paragraph: `RNGkind` and `RNGversion`.
This illustrates that RStudio’s autocompletion functionality is not case sensitive in the same way that R is.
This is a good thing because R has no consistent function name style!
RStudio also has more intelligent autocompletion of objects and file names than R’s built\-in command line.
To test this functionality, try typing `US`, followed by the Tab key.
After pressing the down arrow until `USArrests` is selected, press Enter so it autocompletes.
Finally, typing `$` should leave the following text on the screen and the four columns should be shown in a drop\-down box, ready for you to select the variable of interest with the down arrow.
```
USArrests$ # a dropdown menu of columns should appear in RStudio
```
To take a more complex example, variable names stored in the `data` slot of the class `SpatialPolygonsDataFrame` (a class defined by the foundational spatial package **sp**) are referred to in the long form
`spdf@data$varname`.[6](#fn6)
In this case `spdf` is the object name, `data` is the slot and `varname` is the variable name.
RStudio makes such `S4` objects easier to use by enabling autocompletion of the short form `spdf$varname`.
Another example is RStudio’s ability to find files hidden away in sub\-folders.
Typing `"te` will find `test.R` even if it is located in a sub\-folder such as `R/test.R`.
There are a number of other clever autocompletion tricks that can boost R’s productivity when using RStudio which are best found by experimenting and hitting `Tab` frequently during your R programming work.
### 2\.5\.5 Keyboard shortcuts
RStudio has many useful shortcuts that can help make your programming more efficient by reducing the need to reach for the mouse and point and click your way around code and RStudio.
These can be viewed by using a little known but extremely useful keyboard shortcut (this can also be accessed via the **Tools** menu).
```
Alt+Shift+K
```
This will display the default shortcuts in RStudio.
It is worth spending time identifying which of these could be useful in your work and practising interacting with RStudio rapidly with minimal reliance on the mouse.
The power of these shortcuts can be further enhanced by setting your own keyboard shortcuts.
However, as with setting `.Rprofile` and `.Renviron` settings, this risks reducing the portability of your workflow.
Some more useful shortcuts are listed below:
* `Ctrl+Z`/`Ctrl+Shift+Z`: Undo/Redo.
* `Ctrl+Enter`: Execute the current line or code selection in the Source pane.
* `Ctrl+Alt+R`: Execute all the R code in the currently open file in the Source pane.
* `Ctrl+Left/Right`: Navigate code quickly, word by word.
* `Home/End`: Navigate to the beginning/end of the current line.
* `Alt+Shift+Up/Down`: Duplicate the current line up or down.
* `Ctrl+D`: Delete the current line.
To set your own RStudio keyboard shortcuts, navigate to **Tools \> Modify Keyboard Shortcuts**.
### 2\.5\.6 Object display and output table
It is useful to know what is in your current R environment.
This information can be revealed with `ls()`, but this function only provides object names.
RStudio provides an efficient mechanism to show currently loaded objects, and their details, in real\-time: the Environment tab in the top right corner.
It makes sense to keep an eye on which objects are loaded and to delete objects that are no longer useful.
Doing so will minimise the probability of confusion in your workflow (e.g. by using the wrong version of an object) and reduce the amount of RAM R needs.
The details provided in the Environment tab include the object’s dimension and some additional details depending on the object’s class (e.g. size in MB for large datasets).
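Rough command\-line equivalents of this information can be obtained with base and utils functions:
```
ls() # object names only
ls.str() # names plus a compact structure summary
format(object.size(mtcars), units = "Kb") # memory used by one object
```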
A very useful feature of RStudio is its advanced viewing functionality.
This is triggered either by executing `View(object)` or by clicking on the object name in the Environment tab.
Although you cannot edit data in the Viewer (this should be considered a good thing from a data integrity perspective), recent versions of RStudio provide an efficient search mechanism to rapidly filter and view the records that are of most interest (see Figure [2\.3](set-up.html#fig:2-3)).
### 2\.5\.7 Project management
In the far top\-right of RStudio there is a diminutive drop\-down menu illustrated with R inside a transparent box.
This menu may be small and simple, but it is hugely efficient in terms of organising large, complex and long\-term projects.
The idea of RStudio projects is that the bulk of R programming work is part of a wider task, which will likely consist of input data, R code, graphical and numerical outputs and documents describing the work.
It is possible to scatter each of these elements at random across your hard disks but this is not recommended.
Instead, the concept of projects encourages reproducible work, such that anyone who opens the particular project folder that you are working from should be able to repeat your analyses and replicate your results.
It is therefore *highly recommended* that you use projects to organise your work. It could save hours in the long\-run.
Organizing data, code and outputs also makes sense from a portability perspective: if you copy the folder (e.g. via GitHub) you can work on it from any computer without worrying about having the right files on your current machine.
These tasks are implemented using RStudio’s simple project system, in which the following things happen each time you open an existing project:
* The working directory automatically switches to the project’s folder. This enables data and script files to be referred to using relative file paths, which are much shorter than absolute file paths. This means that switching directory using `setwd()`, a common source of error for R users, is rarely, if ever, needed.
* The last previously open file is loaded into the Source pane. The history of R commands executed in previous sessions is also loaded into the History tab. This assists with continuity between one session and the next.
* The `File` tab displays the associated files and folders in the project, allowing you to quickly find your previous work.
* Any settings associated with the project, such as Git settings, are loaded. This assists with collaboration and project\-specific set\-up.
Each project is different but most contain input data, R code and outputs.
To keep things tidy, we recommend a sub\-directory structure resembling the following:
```
project/
- README.Rmd # Project description
- set-up.R # Required packages
- R/ # For R code
- input/ # Data files
- graphics/
- output/ # Results
```
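One way to scaffold such a structure from the R console (using the folder names suggested above) is:
```
dirs = c("R", "input", "graphics", "output")
sapply(dirs, dir.create, showWarnings = FALSE) # silently skips existing folders
```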
Proper use of projects ensures that all R source files are neatly stashed in one folder with a meaningful structure. This way data and documentation can be found where one would expect them. Under this system, figures and project outputs are ‘first class citizens’ within the project’s design, each with their own folder.
Another approach to project management is to treat projects as R packages.
This is not recommended for most use cases, as it places restrictions on where you can put files. However, if the aim is *code development and sharing*, creating a small R package may be the way forward, even if you never intend to submit it on CRAN. Creating R packages is easier than ever before, as documented in (Cotton [2013](#ref-cotton_learning_2013)) and, more recently (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)). The **devtools** package helps manage R’s quirks, making the process much less painful.
If you use GitHub, the advantage of this approach is that anyone should be able to reproduce your work using `devtools::install_github("username/projectname")`, although the administrative overheads of creating an entire package for each small project will outweigh the benefits for many.
Note that a `set-up.R` or even a `.Rprofile` file in the project’s root directory enable project\-specific settings to be loaded each time people work on the project.
As described in the previous section, `.Rprofile` can be used to tweak how R works at start\-up.
It is also a portable way to manage R’s configuration on a project\-by\-project basis.
Another capability that RStudio has is excellent debugging support. Rather than re\-invent the wheel, we would like to direct interested readers to the [RStudio website](https://support.rstudio.com/hc/en-us/articles/205612627-Debugging-with-RStudio).
#### Exercises
1. Try modifying the look and appearance of your RStudio setup.
2. What is the keyboard shortcut to show the other shortcuts? (Hint: it begins with `Alt+Shift` on Linux and Windows.)
3. Try as many of the shortcuts revealed by the previous step as you like. Write down the ones that you think will save you time, perhaps on a post\-it note to go on your computer.
### 2\.5\.1 Installing and updating RStudio
RStudio is a mature, feature rich and powerful Integrated Development Environment (IDE) optimised for R programming and has become popular among R developers. The Open Source Edition is completely open source (as can be seen from the project’s GitHub repo). It can be installed on all major OSs from the RStudio website [rstudio.com](https://www.rstudio.com/products/rstudio/download/).
If you already have RStudio and would like to update it, simply click `Help > Check for Updates` in the menu.
For fast and efficient work, keyboard shortcuts should be used wherever possible, reducing the reliance on the mouse.
RStudio has many keyboard shortcuts that will help with this.
To get into good habits early, try accessing the RStudio Update interface without touching the mouse.
On Linux and Windows, dropdown menus are activated with the `Alt` button, so the menu item can be found with:
```
Alt+H U
```
On Mac, it works differently.
`Cmd+?` should activate a search across menu items, allowing the same operation can be achieved with:
```
Cmd+? update
```
In RStudio the keyboard shortcuts differ between Linux and Windows versions on one hand and Mac on the other. In this section we generally only use the Windows/Linux shortcut keys for brevity. The Mac equivalent is usually found by simply replacing `Ctrl` and `Alt` with the Mac\-specific `Cmd` button.
### 2\.5\.2 Window pane layout
RStudio has four main window ‘panes’ (see Figure [2\.2](set-up.html#fig:2-2)), each of which serves a range of purposes:
* The **Source pane**, for editing, saving, and dispatching R code to the console (top left). Note that this pane does not exist by default when you start RStudio: it appears when you open an R script, e.g. via `File -> New File -> R Script`. A common task in this pane is to send code on the current line to the console, via `Ctrl/Cmd+Enter`.
* The **Console pane**. Any code entered here is processed by R, line by line. This pane is ideal for interactively testing ideas before saving the final results in the Source pane above.
* The **Environment pane** (top right) contains information about the current objects loaded in the workspace including their class, dimension (if they are a data frame) and name. This pane also contains tabbed sub\-panes with a searchable history that was dispatched to the console and (if applicable to the project) Build and Git options.
* The **Files pane** (bottom right) contains a simple file browser, a Plots tab, Packages and Help tabs and a Viewer for visualising interactive R output such as those produced by the leaflet package and HTML ‘widgets’.
Figure 2\.2: RStudio Panels
Using each of the panels effectively and navigating between them quickly is a skill that will develop over time, and will only improve with practice.
#### Exercises
You are developing a project to visualise data.
Test out the multi\-panel RStudio workflow by following the steps below:
1. Create a new folder for the input data using the **Files pane**.
2. Type in `downl` in the **Source pane** and hit `Enter` to make the function `download.file()` autocomplete. Then type `"`, which will autocomplete to `""`, paste the URL of a file to download (e.g. `https://www.census.gov/2010census/csv/pop_change.csv`) and a file name (e.g. `pop_change.csv`).
3. Execute the full command with `Ctrl+Enter`:
```
download.file("https://www.census.gov/2010census/csv/pop_change.csv",
"extdata/pop_change.csv")
```
4. Write and execute a command to read\-in the data, such as
```
pop_change = read.csv("extdata/pop_change.csv", skip = 2)
```
5. Use the **Environment pane** to click on the data object `pop_change`. Note that this runs the command `View(pop_change)`, which launches an interactive data explore pane in the top left panel (see Figure [2\.3](set-up.html#fig:2-3)).
Figure 2\.3: The data viewing tab in RStudio.
6. Use the **Console** to test different plot commands to visualise the data, saving the code you want to keep back into the **Source pane**, as `pop_change.R`.
7. Use the **Plots tab** in the Files pane to scroll through past plots. Save the best using the Export dropdown button.
The above example shows understanding of these panes and how to use them interactively can help with the speed and productivity of your R programming.
Further, there are a number of RStudio settings that can help ensure that it works for your needs.
#### Exercises
You are developing a project to visualise data.
Test out the multi\-panel RStudio workflow by following the steps below:
1. Create a new folder for the input data using the **Files pane**.
2. Type in `downl` in the **Source pane** and hit `Enter` to make the function `download.file()` autocomplete. Then type `"`, which will autocomplete to `""`, paste the URL of a file to download (e.g. `https://www.census.gov/2010census/csv/pop_change.csv`) and a file name (e.g. `pop_change.csv`).
3. Execute the full command with `Ctrl+Enter`:
```
download.file("https://www.census.gov/2010census/csv/pop_change.csv",
"extdata/pop_change.csv")
```
4. Write and execute a command to read\-in the data, such as
```
pop_change = read.csv("extdata/pop_change.csv", skip = 2)
```
5. Use the **Environment pane** to click on the data object `pop_change`. Note that this runs the command `View(pop_change)`, which launches an interactive data explore pane in the top left panel (see Figure [2\.3](set-up.html#fig:2-3)).
Figure 2\.3: The data viewing tab in RStudio.
6. Use the **Console** to test different plot commands to visualise the data, saving the code you want to keep back into the **Source pane**, as `pop_change.R`.
7. Use the **Plots tab** in the Files pane to scroll through past plots. Save the best using the Export dropdown button.
The above example shows understanding of these panes and how to use them interactively can help with the speed and productivity of your R programming.
Further, there are a number of RStudio settings that can help ensure that it works for your needs.
### 2\.5\.3 RStudio options
A range of `Project Options` and `Global Options` are available in RStudio from the `Tools` menu (accessible in Linux and Windows from the keyboard via `Alt+T`).
Most of these are self\-explanatory but it is worth mentioning a few that can boost your programming efficiency:
* GIT/SVN project settings allow RStudio to provide a graphical interface to your version control system, described in Chapter [9](collaboration.html#collaboration).
* R version settings allow RStudio to ‘point’ to different R versions/interpreters, which may be faster for some projects.
* `Restore .RData`: Unticking this default prevents loading previously created R objects. This will make starting R quicker and also reduce the chance of getting bugs due to previously created objects. For this reason we recommend you untick this box.
* Code editing options can make RStudio adapt to your coding style, for example, by preventing the autocompletion of braces, which some experienced programmers may find annoying. Enabling `Vim mode` makes RStudio act as a (partial) Vim emulator.
* Diagnostic settings can make RStudio more efficient by adding additional diagnostics or by removing diagnostics if they are slowing down your work. This may be an issue for people using RStudio to analyse large datasets on older low\-spec computers.
* Appearance: if you are struggling to see the source code, changing the default font size may make you a more efficient programmer by reducing the time overheads associated with squinting at the screen. Other options in this area relate more to aesthetics. Settings such as font type and background color are also important because feeling comfortable in your programming environment can boost productivity. Go to `Tools > Global Options` to modify these.
### 2\.5\.4 Autocompletion
R provides some basic autocompletion functionality.
Typing the beginning of a function name, for example `rn` (short for `rnorm()`), and hitting `Tab` twice will result in the full function names associated with this text string being printed.
In this case two options would be displayed: `rnbinom` and `rnorm`, providing a useful reminder to the user about what is available. The same applies to file names enclosed in quote marks: typing `te` in the console in a project which contains a file called `test.R` should result in the full name `"test.R"` being auto completed.
RStudio builds on this functionality and takes it to a new level.
The default settings for autocompletion in RStudio work well. They are intuitive and are likely to work well for many users, especially beginners. However, RStudio’s autocompletion options can be modified, by navigating to **Tools \> Global Options \> Code \> Completion** in RStudio’s top level menu.
Instead of only auto completing options when `Tab` is pressed, RStudio auto completes them at any point.
Building on the previous example, RStudio’s autocompletion triggers when the first three characters are typed: `rno`.
The same functionality works when only the first characters are typed, followed by `Tab`:
automatic autocompletion does not replace `Tab` autocompletion but supplements it.
Note that in RStudio two more options are provided to the user after entering `rn Tab` compared with entering the same text into base R’s console described in the previous paragraph: `RNGkind` and `RNGversion`.
This illustrates that RStudio’s autocompletion functionality is not case sensitive in the same way that R is.
This is a good thing because R has no consistent function name style!
RStudio also has more intelligent autocompletion of objects and file names than R’s built\-in command line.
To test this functionality, try typing `US`, followed by the Tab key.
After pressing down until `USArrests` is selected, press Enter so it autocompletes.
Finally, typing `$` should leave the following text on the screen and the four columns should be shown in a drop\-down box, ready for you to select the variable of interest with the down arrow.
```
USArrests$ # a dropdown menu of columns should appear in RStudio
```
To take a more complex example, variable names stored in the `data` slot of the class `SpatialPolygonsDataFrame` (a class defined by the foundational spatial package **sp**) are referred to in the long form
`spdf@data$varname`.[6](#fn6)
In this case `spdf` is the object name, `data` is the slot and `varname` is the variable name.
RStudio makes such `S4` objects easier to use by enabling autocompletion of the short form `spdf$varname`.
Another example is RStudio’s ability to find files hidden away in sub\-folders.
Typing `"te` will find `test.R` even if it is located in a sub\-folder such as `R/test.R`.
There are a number of other clever autocompletion tricks that can boost R’s productivity when using RStudio which are best found by experimenting and hitting `Tab` frequently during your R programming work.
### 2\.5\.5 Keyboard shortcuts
RStudio has many useful shortcuts that can help make your programming more efficient by reducing the need to reach for the mouse and point and click your way around code and RStudio.
These can be viewed by using a little known but extremely useful keyboard shortcut (this can also be accessed via the **Tools** menu).
```
Alt+Shift+K
```
This will display the default shortcuts in RStudio.
It is worth spending time identifying which of these could be useful in your work and practising interacting with RStudio rapidly with minimal reliance on the mouse.
The power of these autocompletion capabilities can be further enhanced by setting your own keyboard shortcuts.
However, as with setting `.Rprofile` and `.Renviron` settings, this risks reducing the portability of your workflow.
Some more useful shortcuts are listed below:
* `Ctrl+Z/Shift+Z`: Undo/Redo.
* `Ctrl+Enter`: Execute the current line or code selection in the Source pane.
* `Ctrl+Alt+R`: Execute all the R code in the currently open file in the Source pane.
* `Ctrl+Left/Right`: Navigate code quickly, word by word.
* `Home/End`: Navigate to the beginning/end of the current line.
* `Alt+Shift+Up/Down`: Duplicate the current line up or down.
* `Ctrl+D`: Delete the current line.
To set your own RStudio keyboard shortcuts, navigate to **Tools \> Modify Keyboard Shortcuts**.
### 2\.5\.6 Object display and output table
It is useful to know what is in your current R environment.
This information can be revealed with `ls()`, but this function only provides object names.
RStudio provides an efficient mechanism to show currently loaded objects, and their details, in real\-time: the Environment tab in the top right corner.
It makes sense to keep an eye on which objects are loaded and to delete objects that are no longer useful.
Doing so will minimise the probability of confusion in your workflow (e.g. by using the wrong version of an object) and reduce the amount of RAM R needs.
The details provided in the Environment tab include the object’s dimension and some additional details depending on the object’s class (e.g. size in MB for large datasets).
A very useful feature of RStudio is its advanced viewing functionality.
This is triggered either by executing `View(object)` or by clicking on the object name in the Environment tab.
Although you cannot edit data in the Viewer (this should be considered a good thing from a data integrity perspective), recent versions of RStudio provide an efficient search mechanism to rapidly filter and view the records that are of most interest (see Figure [2\.3](set-up.html#fig:2-3)).
### 2\.5\.7 Project management
In the far top\-right of RStudio there is a diminutive drop\-down menu illustrated with R inside a transparent box.
This menu may be small and simple, but it is hugely efficient in terms of organising large, complex and long\-term projects.
The idea of RStudio projects is that the bulk of R programming work is part of a wider task, which will likely consist of input data, R code, graphical and numerical outputs and documents describing the work.
It is possible to scatter each of these elements at random across your hard\-discs but this is not recommended.
Instead, the concept of projects encourages reproducible work, such that anyone who opens the particular project folder that you are working from should be able to repeat your analyses and replicate your results.
It is therefore *highly recommended* that you use projects to organise your work. It could save hours in the long run.
Organising data, code and outputs also makes sense from a portability perspective: if you copy the folder (e.g. via GitHub) you can work on it from any computer without worrying about having the right files on your current machine.
These tasks are implemented using RStudio’s simple project system, in which the following things happen each time you open an existing project:
* The working directory automatically switches to the project’s folder. This enables data and script files to be referred to using relative file paths, which are much shorter than absolute file paths. This means that switching directory using `setwd()`, a common source of error for R users, is rarely, if ever, needed.
* The last previously open file is loaded into the Source pane. The history of R commands executed in previous sessions is also loaded into the History tab. This assists with continuity between one session and the next.
* The Files tab displays the associated files and folders in the project, allowing you to quickly find your previous work.
* Any settings associated with the project, such as Git settings, are loaded. This assists with collaboration and project\-specific set\-up.
Each project is different but most contain input data, R code and outputs.
To keep things tidy, we recommend a sub\-directory structure resembling the following:
```
project/
- README.Rmd # Project description
- set-up.R # Required packages
- R/ # For R code
- input # Data files
- graphics/
- output/ # Results
```
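With the project open, files in this structure can be referred to with short relative paths. A minimal sketch (the file name is hypothetical):

```
# Paths are relative to the project root -- no setwd() required
df = read.csv("input/my-data.csv")
```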
Proper use of projects ensures that all R source files are neatly stashed in one folder with a meaningful structure. This way data and documentation can be found where one would expect them. Under this system, figures and project outputs are ‘first class citizens’ within the project’s design, each with their own folder.
Another approach to project management is to treat projects as R packages.
This is not recommended for most use cases, as it places restrictions on where you can put files. However, if the aim is *code development and sharing*, creating a small R package may be the way forward, even if you never intend to submit it to CRAN. Creating R packages is easier than ever before, as documented in (Cotton [2013](#ref-cotton_learning_2013)) and, more recently, (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)). The **devtools** package helps manage R’s quirks, making the process much less painful.
If you use GitHub, the advantage of this approach is that anyone should be able to reproduce your work using `devtools::install_github("username/projectname")`, although the administrative overheads of creating an entire package for each small project will outweigh the benefits for many.
Note that a `set-up.R` or even a `.Rprofile` file in the project’s root directory enables project\-specific settings to be loaded each time people work on the project.
As described in the previous section, `.Rprofile` can be used to tweak how R works at start\-up.
It is also a portable way to manage R’s configuration on a project\-by\-project basis.
Another capability that RStudio has is excellent debugging support. Rather than re\-invent the wheel, we would like to direct interested readers to the [RStudio website](https://support.rstudio.com/hc/en-us/articles/205612627-Debugging-with-RStudio).
#### Exercises
1. Try modifying the look and appearance of your RStudio setup.
2. What is the keyboard shortcut to show the other shortcuts? (Hint: it begins with `Alt+Shift` on Linux and Windows.)
3. Try as many of the shortcuts revealed by the previous step as you like. Write down the ones that you think will save you time, perhaps on a post\-it note to go on your computer.
2\.6 BLAS and alternative R interpreters
----------------------------------------
In this section we cover a few system\-level options available to speed up R’s performance.
Note that for many applications stability rather than speed is a priority, so these should only be considered if a) you have exhausted options for writing your R code more efficiently and b) you are confident tweaking system\-level settings.
This should therefore be seen as an advanced section: if you are not interested in speeding\-up base R, feel free to skip to the next section on hardware.
Many statistical algorithms manipulate matrices. R uses the Basic Linear Algebra Subprograms (BLAS) framework for linear algebra operations. Whenever we carry out a matrix operation, such as transposing or finding the inverse, we use the underlying BLAS library. By switching to a different BLAS library, it may be possible to speed up your R code. Changing your BLAS library is straightforward if you are using Linux, but can be tricky for Windows users.
The two open source alternative BLAS libraries are [ATLAS](http://math-atlas.sourceforge.net/) and [OpenBLAS](https://github.com/xianyi/OpenBLAS). The [Intel MKL](https://software.intel.com/en-us/intel-mkl) is another implementation, designed by Intel for its processors and used in Revolution R (described in the next section), but it requires licensing fees; the MKL library is provided with the Revolution Analytics system. Depending on your application, switching your BLAS library can make linear algebra operations run several times faster than with the base BLAS routines.
If you use macOS or Linux, you can check whether you have a BLAS library setting with the following function, from **benchmarkme**:
```
library("benchmarkme")
get_linear_algebra()
```
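On recent versions of R, `sessionInfo()` also reports which BLAS and LAPACK libraries the session is linked against, so it provides a quick cross\-check:

```
# The BLAS/LAPACK entries appear near the top of the output
sessionInfo()
```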
### 2\.6\.1 Testing performance gains from BLAS
As an illustrative test of the performance gains offered by BLAS, the following test was run on a new laptop running Ubuntu 15\.10 on a 6th generation Core i7 processor, before and after OpenBLAS was installed.[7](#fn7)
```
res = benchmark_std() # run a suite of tests of R's performance
```
It was found that the installation of OpenBLAS led to a 2\-fold speed\-up (from around 150 to 70 seconds). The majority of the speed gain was from the matrix algebra tests, as can be seen in Figure [2\.4](set-up.html#fig:blas-bench). Note that the results of such tests are highly dependent on the particularities of each computer. However, it clearly shows that ‘programming’ benchmarks (e.g. the calculation of 3,500,000 Fibonacci numbers) are not much faster, whereas matrix calculations and functions receive a substantial speed boost. This demonstrates that the speed\-up you can expect from BLAS depends heavily on the type of computations you are undertaking.
Figure 2\.4: Performance gains obtained changing the underlying BLAS library (tests from `benchmark_std()`).
### 2\.6\.2 Other interpreters
The R *language* can be separated from the R *interpreter*. The former refers to the meaning of R commands, the latter refers to how the computer executes the commands. Alternative interpreters have been developed to try to make R faster and, while promising, none of the following options has fully taken off.
* [Microsoft R Open](https://mran.microsoft.com/open), formerly known as Revolution R Open (RRO), is the enhanced distribution of R from Microsoft. The key enhancement is that it uses multi\-threaded mathematics libraries, which can improve performance.
* [Rho](https://github.com/rho-devel/rho) (previously called CXXR, short for C\+\+ R), a re\-implementation of the R interpreter for speed and efficiency. Of the new interpreters, this is the one with the most recent development activity (as of April 2016\).
* [pqR](http://www.pqr-project.org/) (pretty quick R) is a new version of the R interpreter. One major downside is that it is based on R\-2\.15\.0\. The developer (Radford Neal) has made many improvements, some of which have now been incorporated into base R. **pqR** is an open\-source project licensed under the GPL. One notable improvement in pqR is that it is able to perform some numeric computations in parallel with each other, and with other operations of the interpreter, on systems with multiple processors or processor cores.
* [Renjin](http://www.renjin.org/) reimplements the R interpreter in Java, so it can run on the Java Virtual Machine (JVM). Since Renjin is pure Java, it can run anywhere the JVM runs.
* [Tibco](http://spotfire.tibco.com/) created a C\+\+ based interpreter called TERR (TIBCO Enterprise Runtime for R) that is incorporated into their analytics platform, Spotfire.
* Oracle also offer an R interpreter that uses Intel’s mathematics library and therefore achieves higher performance without changing R’s core.
At the time of writing, switching interpreters is something to consider carefully. But in the future, it may become more routine.
### 2\.6\.3 Useful BLAS/benchmarking resources
* The [gcbd](https://cran.r-project.org/web/packages/gcbd/) package benchmarks performance of a few standard linear algebra operations across a number of different BLAS libraries as well as a GPU implementation. It has an excellent vignette summarising the results.
* [Brett Klamer](http://brettklamer.com/diversions/statistical/faster-blas-in-r/) provides a nice comparison of ATLAS, OpenBLAS and Intel MKL BLAS libraries. He also gives a description of how to install the different libraries.
* The official R manual [section](https://cran.r-project.org/doc/manuals/r-release/R-admin.html#BLAS) on BLAS.
### Exercises
1. What BLAS system is your version of R using?
3 Efficient programming
=======================
Many people who use R would not describe themselves as “programmers”. Instead they tend to have advanced domain level knowledge, understand standard R data structures, such as vectors and data frames, but have little formal training in computing. Sound familiar? In that case this chapter is for you.
In this chapter we will discuss “big picture” programming techniques. We cover general concepts and R programming techniques about code optimisation, before describing idiomatic programming structures. We conclude the chapter by examining relatively easy ways of speeding up code using the **compiler** package and parallel processing, using multiple CPUs.
### Prerequisites
In this chapter we introduce two new packages, **compiler** and **memoise**. The **compiler** package comes with R, so it will already be installed.
```
library("compiler")
library("memoise")
```
We also use the **pryr** and **microbenchmark** packages in the exercises.
3\.1 Top 5 tips for efficient programming
-----------------------------------------
1. Be careful never to grow vectors.
2. Vectorise code whenever possible.
3. Use factors when appropriate.
4. Avoid unnecessary computation by caching variables.
5. Byte compile packages for an easy performance boost.
3\.2 General advice
-------------------
Low level languages like C and Fortran demand more from the programmer. They force you to declare the type of every variable used, give you the burdensome responsibility of memory management and have to be compiled. The advantage of such languages, compared with R, is that they are faster to run. The disadvantage is that they take longer to learn and cannot be run interactively.
The Wikipedia page on compiler optimisations gives a nice overview of standard optimisation techniques (<https://en.wikipedia.org/wiki/Optimizing_compiler>).
R users don’t tend to worry about data types. This is advantageous in terms of creating concise code, but can result in R programs that are slow. While optimisations such as going parallel can double speed, poor code can easily run hundreds of times slower, so it’s important to understand the causes of slow code. These are covered in Burns ([2011](#ref-Burns2011)), which should be considered essential reading for any aspiring R programmer.
Ultimately calling an R function always ends up calling some underlying C/Fortran code. For example the base R function `runif()` only contains a single line that consists of a call to `C_runif()`.
```
function(n, min = 0, max = 1)
.Call(C_runif, n, min, max)
```
A **golden rule** in R programming is to access the underlying C/Fortran routines as quickly as possible; the fewer function calls required to achieve this, the better. For example, suppose `x` is a standard vector of length `n`. Then
```
x = x + 1
```
involves a single function call to the `+` function. Whereas the `for` loop
```
for (i in seq_len(n))
x[i] = x[i] + 1
```
has
* `n` function calls to `+`;
* `n` function calls to the `[` function;
* `n` function calls to the `[<-` function (used in the assignment operation);
* Two function calls: one to `for` and another to `seq_len()`.
It isn’t that the `for` loop is slow; rather, it requires many more function calls. Each individual function call is quick, but the total combination is slow.
Everything in R is a function call. When we execute `1 + 1`, we are actually executing `'+'(1, 1)`.
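We can confirm this at the console by calling `+` like any other function:

```
'+'(1, 1)
#> [1] 2
```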
#### Exercise
Use the **microbenchmark** package to compare the vectorised construct `x = x + 1`, to the `for` loop version. Try varying the size of the input vector.
### 3\.2\.1 Memory allocation
Another general technique is to be careful with memory allocation. If possible pre\-allocate your vector then fill in the values.
You should also consider pre\-allocating memory for data frames and lists. Never grow an object. A good rule of thumb is to compare your objects before and after a `for` loop; have they increased in length?
Let’s consider three methods of creating a sequence of numbers. **Method 1** creates an empty vector and gradually increases (or grows) the length of the vector:
```
method1 = function(n) {
vec = NULL # Or vec = c()
for (i in seq_len(n))
vec = c(vec, i)
vec
}
```
**Method 2** creates an object of the final length and then changes the values in the object by subscripting:
```
method2 = function(n) {
vec = numeric(n)
for (i in seq_len(n))
vec[i] = i
vec
}
```
**Method 3** directly creates the final object:
```
method3 = function(n) seq_len(n)
```
To compare the three methods we use the `microbenchmark()` function from the previous chapter
```
microbenchmark(times = 100, unit = "s",
method1(n), method2(n), method3(n))
```
The table below shows the timing in seconds on my machine for these three methods for a selection of values of `n`. The relationships for varying `n` are all roughly linear on a log\-log scale, but the timings between methods are drastically different. Notice that the timings are no longer trivial. When \\(n\=10^7\\), Method 1 takes around an hour whilst Method 2 takes \\(2\\) seconds and Method 3 is almost instantaneous. Remember the golden rule; access the underlying C/Fortran code as quickly as possible.
Time in seconds to create sequences. When \\(n\=10^7\\), Method 1 takes around an hour while the other methods take less than \\(3\\) seconds.
| \\(n\\) | Method 1 | Method 2 | Method 3 |
| --- | --- | --- | --- |
| \\(10^5\\) | \\(\\phantom{000}0\.21\\) | \\(0\.02\\) | \\(0\.00\\) |
| \\(10^6\\) | \\(\\phantom{00}25\.50\\) | \\(0\.22\\) | \\(0\.00\\) |
| \\(10^7\\) | \\(3827\.00\\) | \\(2\.21\\) | \\(0\.00\\) |
### 3\.2\.2 Vectorised code
Technically `x = 1` creates a vector of length \\(1\\). In this section, we use *vectorised* to indicate that functions work with vectors of all lengths.
Recall the **golden rule** in R programming: access the underlying C/Fortran routines as quickly as possible; the fewer function calls required to achieve this, the better. With this in mind, many R functions are *vectorised*, that is, the function’s inputs and/or outputs naturally work with vectors, reducing the number of function calls required. For example, the code
```
x = runif(n) + 1
```
performs two vectorised operations. First `runif()` returns `n` random numbers. Second we add `1` to each element of the vector. In general it is a good idea to exploit vectorised functions. Consider this piece of R code that calculates the sum of \\(\\log(x)\\)
```
log_sum = 0
for (i in 1:length(x))
log_sum = log_sum + log(x[i])
```
Using `1:length(x)` can lead to hard\-to\-find bugs when `x` has length zero. Instead use `seq_along(x)` or `seq_len(length(x))`.
This code could easily be vectorised via
```
log_sum = sum(log(x))
```
Writing code this way has a number of benefits.
* It’s faster. When \\(n \= 10^7\\) the *R way* is about forty times faster.
* It’s neater.
* It doesn’t contain a bug when `x` is of length \\(0\\).
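To see the length\-zero bug that the loop version contains:

```
x = numeric(0) # an empty vector
1:length(x)    # 1 0 -- the loop body would run twice!
seq_along(x)   # integer(0) -- the loop body runs zero times
```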
As with the general example in Section [3\.2](programming.html#general), the slowdown isn’t due to the `for` loop. Instead, it’s because there are many more function calls.
#### Exercises
1. Time the two methods for calculating the log sum.
2. What happens when `length(x) = 0`, i.e. when we have an empty vector?
#### Example: Monte\-Carlo integration
It’s also important to make full use of R functions that use vectors. For example, suppose we wish to estimate the integral
\\\[
\\int\_0^1 x^2 dx
\\]
using a Monte\-Carlo method. Essentially, we throw darts at the curve and count the number of darts that fall below the curve (as in Figure [3\.1](programming.html#fig:3-1)).
*Monte Carlo Integration*
1. Initialise: `hits = 0`
2. **for i in 1:N**
3. \\(\~\~\~\\) Generate two random numbers, \\(U\_1, U\_2\\), between 0 and 1
4. \\(\~\~\~\\) If \\(U\_2 \< U\_1^2\\), then `hits = hits + 1`
5. **end for**
6. Area estimate \= `hits/N`
Implementing this Monte\-Carlo algorithm in R would typically lead to something like:
```
monte_carlo = function(N) {
hits = 0
for (i in seq_len(N)) {
u1 = runif(1)
u2 = runif(1)
if (u1 ^ 2 > u2)
hits = hits + 1
}
return(hits / N)
}
```
In R, this takes a few seconds
```
N = 500000
system.time(monte_carlo(N))
#> user system elapsed
#> 2.206 0.004 2.210
```
In contrast, a more R\-centric approach would be
```
monte_carlo_vec = function(N) sum(runif(N)^2 > runif(N)) / N
```
The `monte_carlo_vec()` function contains (at least) four aspects of vectorisation
* The `runif()` function call is now fully vectorised;
* We raise entire vectors to a power via `^`;
* Comparisons using `>` are vectorised;
* Using `sum()` is quicker than an equivalent for loop.
The function `monte_carlo_vec()` is around \\(30\\) times faster than `monte_carlo()`.
Figure 3\.1: Example of Monte\-Carlo integration. To estimate the area under the curve, throw random points at the graph and count the number of points that lie under the curve.
### Exercise
Verify that `monte_carlo_vec()` is faster than `monte_carlo()`. How does this relate to the number of darts, i.e. the size of `N`, that is used?
3\.3 Communicating with the user
--------------------------------
When we create a function we often want it to give efficient feedback on the current state. For example, are there missing arguments or has a numerical calculation failed? There are three main techniques for communicating with the user.
### Fatal errors: `stop()`
Fatal errors are raised by calling `stop()`, i.e. execution is terminated. When `stop()` is called, there is no way for a function to continue. For instance, when we generate random numbers using `rnorm()` the first argument is the sample size, `n`. If the number of observations to return is less than \\(1\\), an error is raised. When we need to raise an error, we should do so as quickly as possible; otherwise it’s a waste of resources. Hence, the first few lines of a function typically perform argument checking.
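A minimal sketch of this argument\-checking pattern, using a hypothetical function:

```
harmonic_mean = function(x) {
  # Check arguments first, failing as early as possible
  if (!is.numeric(x) || length(x) < 1)
    stop("x must be a non-empty numeric vector")
  length(x) / sum(1 / x)
}
```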
Suppose we call a function that raises an error. What then? Efficient, robust code *catches* the error and handles it appropriately. Errors can be caught using `try()` and `tryCatch()`. For example,
```
# Suppress the error message
good = try(1 + 1, silent = TRUE)
bad = try(1 + "1", silent = TRUE)
```
When we inspect the objects, the variable `good` just contains the number `2`
```
good
#> [1] 2
```
However, the `bad` object is a character string with class `try-error` and a `condition` attribute that contains the error message
```
bad
#> [1] "Error in 1 + \"1\" : non-numeric argument to binary operator\n"
#> attr(,"class")
#> [1] "try-error"
#> attr(,"condition")
#> <simpleError in 1 + "1": non-numeric argument to binary operator>
```
We can use this information in a standard conditional statement
```
if (class(bad) == "try-error") {
  # Do something, e.g. fall back to a default value
  bad = NA
}
```
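For finer control, `tryCatch()` lets us supply a handler function that receives the error condition:

```
result = tryCatch(
  1 + "1",
  error = function(e) {
    message("Caught: ", conditionMessage(e))
    NA # return a fallback value instead of failing
  }
)
result
#> [1] NA
```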
Further details on error handling, as well as some excellent advice on general debugging techniques, are given in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)).
### Warnings: `warning()`
Warnings are generated using the `warning()` function. When a warning is raised, it indicates potential problems. For example, `mean(NULL)` returns `NA` and also raises a warning.
When we come across a warning in our code, it is important to solve the problem and not just ignore the issue. While ignoring warnings saves time in the short\-term, warnings can often mask deeper issues that have crept into our code.
Warnings can be hidden using `suppressWarnings()`.
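For example:

```
mean(NULL)                    # returns NA and raises a warning
suppressWarnings(mean(NULL))  # returns NA silently
```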
### Informative output: `message()` and `cat()`
To give informative output, use the `message()` function. For example, in the **poweRlaw** package, the `message()` function is used to give the user an estimate of expected run time. Providing a rough estimate of how long the function takes allows the user to optimise their time. Similar to warnings, messages can be suppressed with `suppressMessages()`.
Another function used for printing messages is `cat()`. In general `cat()` should only be used in `print()`/`show()` methods, e.g. look at the function definition of the S3 print method for `difftime` objects, `getS3method("print", "difftime")`.
### Exercises
The `stop()` function has an argument `call.` that indicates if the function call should be part of the error message. Create a function and experiment with this option.
### 3\.3\.1 Invisible returns
The `invisible()` function allows you to return a temporarily invisible copy of an object. This is particularly useful for functions that return values which can be assigned, but are not printed when they are not assigned. For example suppose we have a function that plots the data and fits a straight line
```
regression_plot = function(x, y, ...) {
# Plot and pass additional arguments to default plot method
plot(x, y, ...)
# Fit regression model
model = lm(y ~ x)
# Add line of best fit to the plot
abline(model)
invisible(model)
}
```
When the function is called, a scatter graph is plotted with the line of best fit, but the output is invisible. However when we assign the function to an object, i.e. `out = regression_plot(x, y)` the variable `out` contains the output of the `lm()` call.
Another example is the histogram function `hist()`. Typically we don’t want anything displayed in the console when we call the function
```
hist(x)
```
However if we assign the output to an object, `out = hist(x)`, the object `out` is actually a list containing, *inter alia*, information on the mid\-points, breaks and counts.
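For example:

```
x = rnorm(100)
out = hist(x) # the histogram is drawn, but nothing is printed
names(out)    # the invisibly returned list
#> [1] "breaks"   "counts"   "density"  "mids"     "xname"    "equidist"
```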
3\.4 Factors
------------
Factors are much maligned objects. While at times they are awkward, they do have their uses. A factor is used to store categorical variables. This data type is unique to R (or at least not common among programming languages). The difference between factors and strings is important because R treats factors and strings differently. Although factors look similar to character vectors, they are actually integers. This leads to initially surprising behaviour
```
x = 4:6
c(x)
#> [1] 4 5 6
c(factor(x))
#> [1] 1 2 3
```
In this case the `c()` function is using the underlying integer representation of the factor. Overlooking this behaviour is a common source of bugs and inefficiency for R users.
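A related trap occurs when converting a factor of numbers back to numeric; convert via character to recover the original values:

```
f = factor(c("10", "20", "30"))
as.numeric(f)                # 1 2 3 -- the underlying integer codes
as.numeric(as.character(f))  # 10 20 30 -- the intended values
```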
Often categorical variables get stored as \\(1\\), \\(2\\), \\(3\\), \\(4\\), and \\(5\\), with associated documentation elsewhere that explains what each number means. This is clearly a pain. Alternatively we store the data as a character vector. While this is fine, the semantics are wrong because it doesn’t convey that this is a categorical variable. It’s not sensible to say that you should **always** or **never** use factors, since factors have both positive and negative features. Instead we need to examine each case individually.
As a general rule, if your variable has an inherent order, e.g. small vs large, or you have a fixed set of categories, then you should consider using a factor.
### 3\.4\.1 Inherent order
Factors can be used for ordering in graphics. For instance, suppose we have a data set where the variable `type` takes one of three values, `Small`, `Medium` and `Large`. Clearly there is an ordering. Using a standard `boxplot()` call,
```
boxplot(y ~ type)
```
would create a boxplot where the \\(x\\)\-axis was alphabetically ordered. By converting `type` into a factor, we can easily specify the correct ordering.
```
boxplot(y ~ factor(type, levels = c("Small", "Medium", "Large")))
```
Most users interact with factors via the `read.csv()` function where character columns are automatically converted to factors. This feature can be irritating if our data is messy and we want to clean and recode variables. Typically when reading in data via `read.csv()`, we use the `stringsAsFactors = FALSE` argument. Although this argument can be added to the global `options()` list and placed in the `.Rprofile`, this leads to non\-portable code, so should be avoided.
### 3\.4\.2 Fixed set of categories
Suppose our data set relates to months of the year
```
m = c("January", "December", "March")
```
If we sort `m` in the usual way, `sort(m)`, we perform standard alpha\-numeric ordering; placing `December` first. This is technically correct, but not that helpful. We can use factors to remedy this problem by specifying the admissible levels
```
# month.name contains the 12 months
fac_m = factor(m, levels = month.name)
sort(fac_m)
#> [1] January March December
#> 12 Levels: January February March April May June July August ... December
```
#### Exercise
Factors are slightly more space efficient than characters. Create a character vector and corresponding factor and use `pryr::object_size()` to calculate the space needed for each object.
3\.5 The apply family
---------------------
The apply functions can be an alternative to writing for loops. The general idea is to apply (or map) a function to each element of an object. For example, you can apply a function to each row or column of a matrix. A list of available functions is given in Table [3\.1](programming.html#tab:apply-family), with a short description. In general, all the apply functions have similar properties:
* Each function takes at least two arguments: an object and another function. The function is passed as an argument.
* Every apply function has the dots, `...`, argument that is used to pass on arguments to the function that is given as an argument.
Using apply functions when possible can lead to more succinct and idiomatic R code. In this section, we will cover the three main functions, `apply()`, `lapply()`, and `sapply()`. Since the apply functions are covered in most R textbooks, we just give a brief introduction to the topic and provide pointers to other resources at the end of this section.
Most people rarely use the other apply functions. For example, I have only used `eapply()` once. Students in my class uploaded R scripts. Using `source()`, I was able to read in the scripts to a separate environment. I then applied a marking scheme to each environment using `eapply()`. Using separate environments avoided object name clashes.
Table 3\.1: The apply family of functions from base R.
| Function | Description |
| --- | --- |
| `apply` | Apply functions over array margins |
| `by` | Apply a function to a data frame split by factors |
| `eapply` | Apply a function over values in an environment |
| `lapply` | Apply a function over a list or vector |
| `mapply` | Apply a function to multiple list or vector arguments |
| `rapply` | Recursively apply a function to a list |
| `tapply` | Apply a function over a ragged array |
The `apply()` function is used to apply a function to each row or column of a matrix. In many data science problems, this is a common task. For example, to calculate the standard deviation of the rows we have
```
data("ex_mat", package = "efficient")
# MARGIN=1: corresponds to rows
row_sd = apply(ex_mat, 1, sd)
```
The first argument of `apply()` is the object of interest. The second argument is the `MARGIN`. This is a vector giving the subscripts which the function (the third argument) will be applied over. When the object is a matrix, a margin of `1` indicates rows and `2` indicates columns. So to calculate the column standard deviations, the second argument is changed to `2`
```
col_sd = apply(ex_mat, 2, sd)
```
Additional arguments can be passed to the function that is to be applied to the data. For example, to pass the `na.rm` argument to the `sd` function, we have
```
row_sd = apply(ex_mat, 1, sd, na.rm = TRUE)
```
The `apply()` function also works on higher dimensional arrays; a one dimensional array is a vector, a two dimensional array is a matrix.
The `lapply()` function is similar to `apply()`, with the key difference being that the input type is a vector or list and the return type is a list. Essentially, we apply a function to each element of a list or vector. The functions `sapply()` and `vapply()` are similar to `lapply()`, but the return type is not necessarily a list.
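A quick illustration of the difference in return types:

```
l = list(a = 1:10, b = runif(5))
lapply(l, mean) # always returns a list
sapply(l, mean) # simplifies to a named numeric vector here
```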
### 3\.5\.1 Example: the movies data set
The [Internet Movie Database](http://imdb.com/) is a website that collects movie data supplied by studios and fans. It is one of the largest movie databases on the web and is maintained by Amazon. The **ggplot2movies** package contains about sixty thousand movies stored as a data frame
```
data(movies, package = "ggplot2movies")
```
Movies are rated between \\(1\\) and \\(10\\) by fans. Columns \\(7\\) to \\(16\\) of the `movies` data set give the percentage of voters for a particular rating.
```
ratings = movies[, 7:16]
```
For example, 4\.5% of voters rated the first movie a \\(1\\)
```
ratings[1, ]
#> # A tibble: 1 x 10
#> r1 r2 r3 r4 r5 r6 r7 r8 r9 r10
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 4.5 4.5 4.5 4.5 14.5 24.5 24.5 14.5 4.5 4.5
```
We can use the `apply()` function to investigate voting patterns. The function `nnet::which.is.max()` finds the maximum position in a vector, but breaks ties at random; `which.max()` just returns the first value. Using `apply()`, we can easily determine the most popular rating for each movie and plot the results
```
popular = apply(ratings, 1, nnet::which.is.max)
plot(table(popular))
```
Figure 3\.2: Movie voting preferences.
Figure [3\.2](programming.html#fig:3-2) highlights that voting patterns are clearly not uniform between \\(1\\) and \\(10\\). The most popular vote is the highest rating, \\(10\\). Clearly if you went to the trouble of voting for a movie, it was either very good, or very bad (there is also a peak at \\(1\\)). Rating a movie \\(7\\) is also a popular choice (search the web for “most popular number” and \\(7\\) dominates the rankings).
### 3\.5\.2 Type consistency
When programming, it is helpful if the return value from a function always takes the same form. Unfortunately, not all base R functions follow this idiom. For example, the functions `sapply()` and `[.data.frame()` aren’t type consistent
```
two_cols = data.frame(x = 1:5, y = letters[1:5])
zero_cols = data.frame()
sapply(two_cols, class) # a character vector
sapply(zero_cols, class) # a list
two_cols[, 1:2] # a data.frame
two_cols[, 1] # an integer vector
```
This can cause unexpected problems. The functions `lapply()` and `vapply()` are type consistent. Likewise for `dplyr::select()` and `dplyr::filter()`. The **purrr** package has some type consistent alternatives to base R functions. For example, `map_dbl()` (and the other `map_*()` functions) to replace `sapply()` and `Map()`, and `flatten_df()` to replace `unlist()`.
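With `vapply()`, the third argument is a template declaring the type and length of each result, which is what guarantees type consistency. For example:

```
# FUN.VALUE declares that each result must be a single numeric value
vapply(mtcars, mean, FUN.VALUE = numeric(1))
```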
#### Other resources
Almost every R book has a section on the apply function. Below, we’ve given the resources we feel are most helpful.
* Each function has a number of examples in the associated help page. You can directly access the examples using the `example()` function, e.g. to run the `apply()` examples, use `example("apply")`.
* There is a very detailed StackOverflow [answer](http://stackoverflow.com/q/3505701/203420) which describes when, where and how to use each of the functions.
* In a similar vein, Neil Saunders has a nice blog [post](https://nsaunders.wordpress.com/2010/08/20/a-brief-introduction-to-apply-in-r/) giving an overview of the functions.
* The apply functions are an example of functional programming. Chapter 16 of *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)) describes the interplay between loops and functional programming in more detail, while H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)) gives a more in\-depth description of the topic.
#### Exercises
1. Rewrite the `sapply()` function calls above using `vapply()` to ensure type consistency.
2. How would you make subsetting data frames with `[` type consistent? Hint: look at
the `drop` argument.
3\.6 Caching variables
----------------------
A straightforward method for speeding up code is to calculate objects once and reuse the value when necessary. This could be as simple as replacing `sd(x)` in multiple function calls with the object `sd_x` that is defined once and reused. For example, suppose we wish to normalise each column of a matrix. However, instead of using the standard deviation of each column, we will use the standard deviation of the entire data set
```
apply(x, 2, function(i) mean(i) / sd(x))
```
This is inefficient since the value of `sd(x)` is constant and thus recalculating the standard deviation for every column is unnecessary. Instead we should evaluate once and store the result
```
sd_x = sd(x)
apply(x, 2, function(i) mean(i) / sd_x)
```
If we compare the two methods on a \\(100\\) row by \\(1000\\) column matrix, the cached version is around \\(100\\) times faster (Figure [3\.3](programming.html#fig:3-4)).
Figure 3\.3: Performance gains obtained from caching the standard deviation in a \\(100\\) by \\(1000\\) matrix.
A more advanced form of caching is to use the **memoise** package. If a function is called multiple times with the same input, it may be possible to speed things up by keeping a cache of known answers that it can retrieve. The **memoise** package allows us to easily store the value of a function call and returns the cached result when the function is called again with the same arguments. This package trades off memory versus speed, since the memoised function stores all previous inputs and outputs. To cache a function, we simply pass it to the `memoise()` function.
The classic memoise example is the factorial function. Another example is limiting repeated calls to a web resource. For example, suppose we are developing a Shiny (interactive graphics) application where the user can fit a regression line to data. The user can remove points and refit the line. An example function would be
```
# Argument indicates row to remove
plot_mpg = function(row_to_remove) {
data(mpg, package = "ggplot2")
mpg = mpg[-row_to_remove, ]
plot(mpg$cty, mpg$hwy)
lines(lowess(mpg$cty, mpg$hwy), col = 2)
}
```
We can use **memoise** to speed up repeated function calls by caching results. A quick benchmark
```
m_plot_mpg = memoise(plot_mpg)
microbenchmark(times = 10, unit = "ms", m_plot_mpg(10), plot_mpg(10))
#> Unit: milliseconds
#> expr min lq mean median uq max neval cld
#> m_plot_mpg(10) 0.04 4e-02 0.07 8e-02 8e-02 0.1 10 a
#> plot_mpg(10) 40.20 1e+02 95.52 1e+02 1e+02 107.1 10 b
```
suggests that we can obtain more than a \\(100\\)\-fold speed\-up.
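The classic factorial example mentioned above can be sketched as follows; note that the function must call its *memoised* self for the intermediate results to be cached:

```
m_fact = memoise(function(n) {
  if (n <= 1) return(1)
  n * m_fact(n - 1) # recursive calls go through the cache
})
m_fact(100) # computed once; repeat calls are instant cache look-ups
```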
#### Exercise
Construct a box plot of timings for the standard plotting function and the memoised version.
### 3\.6\.1 Function closures
The following section is meant to provide an introduction to function closures with example use cases. See H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)) for a detailed introduction.
More advanced caching is available using *function closures*. A closure in R is an object that contains functions bound to the environment the closure was created in. Technically all functions in R have this property, but we use the term function closure to denote functions where the environment is not in `.GlobalEnv`. One of the environments associated with a function is known as the enclosing environment, that is, where the function was created. This allows us to store values between function calls. Suppose we want to create a stop\-watch type function. This is easily achieved with a function closure
```
# <<- assigns values in the enclosing environment
stop_watch = function() {
start_time = stop_time = NULL
start = function() start_time <<- Sys.time()
stop = function() {
stop_time <<- Sys.time()
difftime(stop_time, start_time)
}
list(start = start, stop = stop)
}
watch = stop_watch()
```
The object `watch` is a list that contains two functions: one for starting the timer
```
watch$start()
```
the other for stopping the timer
```
watch$stop()
```
Without using function closures, the stop\-watch function would be longer, more complex and therefore less efficient. When used properly, function closures are very useful programming tools for writing concise code.
#### Exercise
1. Write a stop\-watch function **without** using function closures.
2. Many stop\-watches have the ability to measure not only your overall time but also your individual laps. Add a `lap()` function to the `stop_watch()` function that will record individual times, while still keeping track of the overall time.
A related idea to function closures, is non\-standard evaluation (NSE), or programming on the language. NSE crops up all the time in R. For example, when we execute `plot(height, weight)`, R automatically labels the x\- and y\-axis of the plot with `height` and `weight`. This is a powerful concept that enables us to simplify code. More detail is given about “Non\-standard evaluation” in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)).
3\.7 The byte compiler
----------------------
The **compiler** package, written by R Core member Luke Tierney, has been part of R since version 2\.13\.0\. The **compiler** package allows R functions to be compiled, resulting in a byte code version that may run faster[8](#fn8). The compilation process eliminates a number of costly operations the interpreter has to perform, such as variable lookup.
Since R 2\.14\.0, all of the standard functions and packages in base R are pre\-compiled into byte\-code. This is illustrated by the base function `mean()`:
```
getFunction("mean")
#> function (x, ...)
#> UseMethod("mean")
#> <bytecode: 0x242e2c0>
#> <environment: namespace:base>
```
The third line contains the `bytecode` of the function. This means that the **compiler** package has translated the R function into another language that can be interpreted by a very fast interpreter. Amazingly the **compiler** package is almost entirely pure R, with just a few C support routines.
### 3\.7\.1 Example: the mean function
The **compiler** package comes with R, so we just need to load the package in the usual way
```
library("compiler")
```
Next we create an inefficient function for calculating the mean. This function takes in a vector, calculates the length and then updates the `m` variable.
```
mean_r = function(x) {
m = 0
n = length(x)
for (i in seq_len(n))
m = m + x[i] / n
m
}
```
This is clearly a bad function and we should just use the `mean()` function, but it’s a useful comparison. Compiling the function is straightforward
```
cmp_mean_r = cmpfun(mean_r)
```
Then we use the `microbenchmark()` function to compare the three variants
```
# Generate some data
x = rnorm(1000)
microbenchmark(times = 10, unit = "ms", # milliseconds
mean_r(x), cmp_mean_r(x), mean(x))
#> Unit: milliseconds
#> expr min lq mean median uq max neval cld
#> mean_r(x) 0.358 0.361 0.370 0.363 0.367 0.43 10 c
#> cmp_mean_r(x) 0.050 0.051 0.052 0.051 0.051 0.07 10 b
#> mean(x) 0.005 0.005 0.008 0.007 0.008 0.03 10 a
```
The compiled function is around seven times faster than the uncompiled function. Of course the native `mean()` function is faster, but compiling does make a significant difference (Figure [3\.4](programming.html#fig:3-3)).
Figure 3\.4: Comparison of mean functions.
### 3\.7\.2 Compiling code
There are a number of ways to compile code. The easiest is to compile individual functions using `cmpfun()`, but this obviously doesn’t scale. If you create a package, you can automatically compile the package on installation by adding
```
ByteCompile: true
```
to the `DESCRIPTION` file. Most R packages installed using `install.packages()` are not compiled. We can enable (or force) packages to be compiled by starting R with the environment variable `R_COMPILE_PKGS` set to a positive integer value and specifying that we install the package from `source`, i.e.
```
## Windows users will need Rtools
install.packages("ggplot2", type = "source")
```
Or if we want to avoid altering the `.Renviron` file, we can specify an additional argument
```
install.packages("ggplot2", type = "source", INSTALL_opts = "--byte-compile")
```
A final option is to use just\-in\-time (JIT) compilation. The `enableJIT()` function disables JIT compilation if the argument is `0`. Arguments `1`, `2`, or `3` implement different levels of optimisation. JIT can also be enabled by setting the environment variable `R_ENABLE_JIT`, to one of these values.
We recommend setting the compile level to the maximum value of 3\.
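For example, to set the maximum JIT level for the current session:

```
library("compiler")
enableJIT(3) # returns the previous JIT level
```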
The impact of compiling on install will vary from package to package: for packages that already have lots of pre\-compiled code, speed gains will be small (R Core Team [2016](#ref-team2016installation)).
Not all packages work if compiled on installation.
### Prerequisites
In this chapter we introduce two new packages, **compiler** and **memoise**. The **compiler** package comes with R, so it will already be installed.
```
library("compiler")
library("memoise")
```
We also use the **pryr** and **microbenchmark** packages in the exercises.
3\.1 Top 5 tips for efficient programming
-----------------------------------------
1. Be careful never to grow vectors.
2. Vectorise code whenever possible.
3. Use factors when appropriate.
4. Avoid unnecessary computation by caching variables.
5. Byte compile packages for an easy performance boost.
3\.2 General advice
-------------------
Low level languages like C and Fortran demand more from the programmer. They force you to declare the type of every variable used, give you the burdensome responsibility of memory management and have to be compiled. The advantage of such languages, compared with R, is that they are faster to run. The disadvantage is that they take longer to learn and can not be run interactively.
The Wikipedia page on compiler optimisations gives a nice overview of standard optimisation techniques (<https://en.wikipedia.org/wiki/Optimizing_compiler>).
R users don’t tend to worry about data types. This is advantageous in terms of creating concise code, but can result in R programs that are slow. While optimisations such as going parallel can double speed, poor code can easily run hundreds of times slower, so it’s important to understand the causes of slow code. These are covered in Burns ([2011](#ref-Burns2011)), which should be considered essential reading for any aspiring R programmers.
Ultimately calling an R function always ends up calling some underlying C/Fortran code. For example the base R function `runif()` only contains a single line that consists of a call to `C_runif()`.
```
function(n, min = 0, max = 1)
.Call(C_runif, n, min, max)
```
A **golden rule** in R programming is to access the underlying C/Fortran routines as quickly as possible; the fewer functions calls required to achieve this, the better. For example, suppose `x` is a standard vector of length `n`. Then
```
x = x + 1
```
involves a single function call to the `+` function. Whereas the `for` loop
```
for (i in seq_len(n))
x[i] = x[i] + 1
```
has
* `n` function calls to `+`;
* `n` function calls to the `[` function;
* `n` function calls to the `[<-` function (used in the assignment operation);
* Two function calls: one to `for` and another to `seq_len()`.
It isn’t that the `for` loop is slow, rather it is because we have many more function calls. Each individual function call is quick, but the total combination is slow.
Everything in R is a function call. When we execute `1 + 1`, we are actually executing `‘+’(1, 1)`.
#### Exercise
Use the **microbenchmark** package to compare the vectorised construct `x = x + 1`, to the `for` loop version. Try varying the size of the input vector.
### 3\.2\.1 Memory allocation
Another general technique is to be careful with memory allocation. If possible pre\-allocate your vector then fill in the values.
You should also consider pre\-allocating memory for data frames and lists. Never grow an object. A good rule of thumb is to compare your objects before and after a `for` loop; have they increased in length?
Let’s consider three methods of creating a sequence of numbers. **Method 1** creates an empty vector and gradually increases (or grows) the length of the vector:
```
method1 = function(n) {
vec = NULL # Or vec = c()
for (i in seq_len(n))
vec = c(vec, i)
vec
}
```
**Method 2** creates an object of the final length and then changes the values in the object by subscripting:
```
method2 = function(n) {
vec = numeric(n)
for (i in seq_len(n))
vec[i] = i
vec
}
```
**Method 3** directly creates the final object:
```
method3 = function(n) seq_len(n)
```
To compare the three methods we use the `microbenchmark()` function from the previous chapter
```
microbenchmark(times = 100, unit = "s",
method1(n), method2(n), method3(n))
```
The table below shows the timing in seconds on my machine for these three methods for a selection of values of `n`. The relationships for varying `n` are all roughly linear on a log\-log scale, but the timings between methods are drastically different. Notice that the timings are no longer trivial. When \\(n\=10^7\\), Method 1 takes around an hour whilst Method 2 takes \\(2\\) seconds and Method 3 is almost instantaneous. Remember the golden rule; access the underlying C/Fortran code as quickly as possible.
Time in seconds to create sequences. When \\(n\=10^7\\), Method 1 takes around an hour while the other methods take less than \\(3\\) seconds.
| \\(n\\) | Method 1 | Method 2 | Method 3 |
| --- | --- | --- | --- |
| \\(10^5\\) | \\(\\phantom{000}0\.21\\) | \\(0\.02\\) | \\(0\.00\\) |
| \\(10^6\\) | \\(\\phantom{00}25\.50\\) | \\(0\.22\\) | \\(0\.00\\) |
| \\(10^7\\) | \\(3827\.00\\) | \\(2\.21\\) | \\(0\.00\\) |
### 3\.2\.2 Vectorised code
Technically `x = 1` creates a vector of length \\(1\\). In this section, we use *vectorised* to indicate that functions work with vectors of all lengths.
Recall the **golden rule** in R programming, access the underlying C/Fortran routines as quickly as possible; the fewer functions calls required to achieve this, the better. With this mind, many R functions are *vectorised*, that is the function’s inputs and/or outputs naturally work with vectors, reducing the number of function calls required. For example, the code
```
x = runif(n) + 1
```
performs two vectorised operations. First `runif()` returns `n` random numbers. Second we add `1` to each element of the vector. In general it is a good idea to exploit vectorised functions. Consider this piece of R code that calculates the sum of \\(\\log(x)\\)
```
log_sum = 0
for (i in 1:length(x))
log_sum = log_sum + log(x[i])
```
Using `1:length(x)` can lead to hard\-to\-find bugs when `x` has length zero. Instead use `seq_along(x)` or `seq_len(length(x))`.
This code could easily be vectorised via
```
log_sum = sum(log(x))
```
Writing code this way has a number of benefits.
* It’s faster. When \\(n \= 10^7\\) the *R way* is about forty times faster.
* It’s neater.
* It doesn’t contain a bug when `x` is of length \\(0\\).
As with the general example in Section [3\.2](programming.html#general), the slowdown isn’t due to the `for` loop. Instead, it’s because there are many more function calls.
#### Exercises
1. Time the two methods for calculating the log sum.
2. What happens when the `length(x) = 0`, i.e. we have an empty vector?
#### Example: Monte\-Carlo integration
It’s also important to make full use of R functions that use vectors. For example, suppose we wish to estimate the integral
\\\[
\\int\_0^1 x^2 dx
\\]
using a Monte\-Carlo method. Essentially, we throw darts at the curve and count the number of darts that fall below the curve (as in [3\.1](programming.html#fig:3-1)).
*Monte Carlo Integration*
1. Initialise: `hits = 0`
2. **for i in 1:N**
3. \\(\~\~\~\\) Generate two random numbers, \\(U\_1, U\_2\\), between 0 and 1
4. \\(\~\~\~\\) If \\(U\_2 \< U\_1^2\\), then `hits = hits + 1`
5. **end for**
6. Area estimate \= `hits/N`
Implementing this Monte\-Carlo algorithm in R would typically lead to something like:
```
monte_carlo = function(N) {
hits = 0
for (i in seq_len(N)) {
u1 = runif(1)
u2 = runif(1)
if (u1 ^ 2 > u2)
hits = hits + 1
}
return(hits / N)
}
```
In R, this takes a few seconds
```
N = 500000
system.time(monte_carlo(N))
#> user system elapsed
#> 2.206 0.004 2.210
```
In contrast, a more R\-centric approach would be
```
monte_carlo_vec = function(N) sum(runif(N)^2 > runif(N)) / N
```
The `monte_carlo_vec()` function contains (at least) four aspects of vectorisation
* The `runif()` function call is now fully vectorised;
* We raise entire vectors to a power via `^`;
* Comparisons using `>` are vectorised;
* Using `sum()` is quicker than an equivalent for loop.
The function `monte_carlo_vec()` is around \\(30\\) times faster than `monte_carlo()`.
Figure 3\.1: Example of Monte\-Carlo integration. To estimate the area under the curve, throw random points at the graph and count the number of points that lie under the curve.
### Exercise
Verify that `monte_carlo_vec()` is faster than `monte_carlo()`. How does this relate to the number of darts, i.e. the size of `N`, that is used?
#### Exercise
Use the **microbenchmark** package to compare the vectorised construct `x = x + 1`, to the `for` loop version. Try varying the size of the input vector.
3\.3 Communicating with the user
--------------------------------
When we create a function we often want it to give efficient feedback on the current state. For example, are there missing arguments, or has a numerical calculation failed? There are three main techniques for communicating with the user.
### Fatal errors: `stop()`
Fatal errors are raised by calling `stop()`, i.e. execution is terminated. When `stop()` is called, there is no way for a function to continue. For instance, when we generate random numbers using `rnorm()`, the first argument is the sample size, `n`. If the number of observations to return is less than \\(1\\), an error is raised. When we need to raise an error, we should do so as quickly as possible; otherwise it’s a waste of resources. Hence, the first few lines of a function typically perform argument checking.
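A minimal sketch of this pattern (the function `sample_mean()` and its checks are illustrative, not from a package):
```
sample_mean = function(x) {
  # Check the arguments first, so we fail as quickly as possible
  if (!is.numeric(x))
    stop("x must be a numeric vector")
  if (length(x) < 1)
    stop("x must contain at least one value")
  sum(x) / length(x)
}
```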
Suppose we call a function that raises an error. What then? Efficient, robust code *catches* the error and handles it appropriately. Errors can be caught using `try()` and `tryCatch()`. For example,
```
# Suppress the error message
good = try(1 + 1, silent = TRUE)
bad = try(1 + "1", silent = TRUE)
```
When we inspect the objects, the variable `good` just contains the number `2`
```
good
#> [1] 2
```
However, the `bad` object is a character string with class `try-error` and a `condition` attribute that contains the error message
```
bad
#> [1] "Error in 1 + \"1\" : non-numeric argument to binary operator\n"
#> attr(,"class")
#> [1] "try-error"
#> attr(,"condition")
#> <simpleError in 1 + "1": non-numeric argument to binary operator>
```
We can use this information in a standard conditional statement; `inherits()` is a robust way to test for the `try-error` class
```
if (inherits(bad, "try-error")) {
  # Handle the failure, e.g. fall back to a default value
  result = NA
}
```
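For finer control than `try()`, the `tryCatch()` function lets us register a handler for specific condition classes (a minimal sketch):
```
result = tryCatch(
  1 + "1",
  error = function(e) {
    message("Caught an error: ", conditionMessage(e))
    NA # fallback value
  }
)
result
#> [1] NA
```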
Further details on error handling, as well as some excellent advice on general debugging techniques, are given in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)).
### Warnings: `warning()`
Warnings are generated using the `warning()` function. When a warning is raised, it indicates potential problems. For example, `mean(NULL)` returns `NA` and also raises a warning.
When we come across a warning in our code, it is important to solve the problem and not just ignore the issue. While ignoring warnings saves time in the short\-term, warnings can often mask deeper issues that have crept into our code.
Warnings can be hidden using `suppressWarnings()`.
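As a minimal sketch, a (hypothetical) function can flag a suspicious input without halting execution, and the caller can choose to silence it:
```
safe_log = function(x) {
  if (any(x <= 0))
    warning("non-positive values will produce NaN or -Inf")
  log(x)
}
safe_log(c(1, -1))                    # returns c(0, NaN), with warnings
suppressWarnings(safe_log(c(1, -1)))  # same result, warnings hidden
```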
### Informative output: `message()` and `cat()`
To give informative output, use the `message()` function. For example, in the **poweRlaw** package, the `message()` function is used to give the user an estimate of expected run time. Providing a rough estimate of how long the function takes allows the user to optimise their time. Similar to warnings, messages can be suppressed with `suppressMessages()`.
Another function used for printing messages is `cat()`. In general `cat()` should only be used in `print()`/`show()` methods, e.g. look at the function definition of the S3 print method for `difftime` objects, `getS3method("print", "difftime")`.
### Exercises
The `stop()` function has an argument `call.` that indicates if the function call should be part of the error message. Create a function and experiment with this option.
### 3\.3\.1 Invisible returns
The `invisible()` function allows you to return a temporarily invisible copy of an object. This is particularly useful for functions that return values which can be assigned, but are not printed when they are not assigned. For example suppose we have a function that plots the data and fits a straight line
```
regression_plot = function(x, y, ...) {
# Plot and pass additional arguments to default plot method
plot(x, y, ...)
# Fit regression model
model = lm(y ~ x)
# Add line of best fit to the plot
abline(model)
invisible(model)
}
```
When the function is called, a scatter graph is plotted with the line of best fit, but the output is invisible. However when we assign the function to an object, i.e. `out = regression_plot(x, y)` the variable `out` contains the output of the `lm()` call.
Another example is the histogram function `hist()`. Typically we don’t want anything displayed in the console when we call the function
```
hist(x)
```
However if we assign the output to an object, `out = hist(x)`, the object `out` is actually a list containing, *inter alia*, information on the mid\-points, breaks and counts.
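A quick way to inspect this invisible return value is to suppress the plot entirely (a minimal sketch):
```
x = rnorm(100)
out = hist(x, plot = FALSE)
names(out)
#> [1] "breaks"   "counts"   "density"  "mids"     "xname"    "equidist"
```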
3\.4 Factors
------------
Factors are much maligned objects. While at times they are awkward, they do have their uses. A factor is used to store categorical variables. This data type is unique to R (or at least not common among programming languages). The difference between factors and strings is important because R treats factors and strings differently. Although factors look similar to character vectors, they are actually integers. This leads to initially surprising behaviour
```
x = 4:6
c(x)
#> [1] 4 5 6
c(factor(x))
#> [1] 1 2 3
```
In this case the `c()` function is using the underlying integer representation of the factor. Being caught out by this behaviour is a common source of inefficiency for R users.
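If the original values are needed, convert the factor to a character vector before converting to numeric, a standard idiom:
```
f = factor(4:6)
as.numeric(f)                # 1 2 3: the underlying integer codes
as.numeric(as.character(f))  # 4 5 6: the original values
```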
Often categorical variables get stored as \\(1\\), \\(2\\), \\(3\\), \\(4\\), and \\(5\\), with associated documentation elsewhere that explains what each number means. This is clearly a pain. Alternatively we store the data as a character vector. While this is fine, the semantics are wrong because it doesn’t convey that this is a categorical variable. It’s not sensible to say that you should **always** or **never** use factors, since factors have both positive and negative features. Instead we need to examine each case individually.
As a general rule, if your variable has an inherent order, e.g. small vs large, or you have a fixed set of categories, then you should consider using a factor.
### 3\.4\.1 Inherent order
Factors can be used for ordering in graphics. For instance, suppose we have a data set where the variable `type`, takes one of three values, `small`, `medium` and `large`. Clearly there is an ordering. Using a standard `boxplot()` call,
```
boxplot(y ~ type)
```
would create a boxplot where the \\(x\\)\-axis was alphabetically ordered. By converting `type` into a factor, we can easily specify the correct ordering.
```
boxplot(y ~ factor(type, levels = c("small", "medium", "large")))
```
Most users interact with factors via the `read.csv()` function where character columns are automatically converted to factors. This feature can be irritating if our data is messy and we want to clean and recode variables. Typically when reading in data via `read.csv()`, we use the `stringsAsFactors = FALSE` argument. Although this argument can be added to the global `options()` list and placed in the `.Rprofile`, this leads to non\-portable code, so should be avoided.
### 3\.4\.2 Fixed set of categories
Suppose our data set relates to months of the year
```
m = c("January", "December", "March")
```
If we sort `m` in the usual way, `sort(m)`, we perform standard alpha\-numeric ordering; placing `December` first. This is technically correct, but not that helpful. We can use factors to remedy this problem by specifying the admissible levels
```
# month.name contains the 12 months
fac_m = factor(m, levels = month.name)
sort(fac_m)
#> [1] January March December
#> 12 Levels: January February March April May June July August ... December
```
#### Exercise
Factors are slightly more space efficient than characters. Create a character vector and corresponding factor and use `pryr::object_size()` to calculate the space needed for each object.
3\.5 The apply family
---------------------
The apply functions can be an alternative to writing for loops. The general idea is to apply (or map) a function to each element of an object. For example, you can apply a function to each row or column of a matrix. A list of available functions is given in Table [3\.1](programming.html#tab:apply-family), with a short description. In general, all the apply functions have similar properties:
* Each function takes at least two arguments: an object and another function. The function is passed as an argument.
* Every apply function has the dots, `...`, argument that is used to pass on arguments to the function that is given as an argument.
Using apply functions when possible can lead to more succinct and idiomatic R code. In this section, we will cover the three main functions, `apply()`, `lapply()`, and `sapply()`. Since the apply functions are covered in most R textbooks, we just give a brief introduction to the topic and provide pointers to other resources at the end of this section.
Most people rarely use the other apply functions. For example, I have only used `eapply()` once. Students in my class uploaded R scripts. Using `source()`, I was able to read in the scripts to a separate environment. I then applied a marking scheme to each environment using `eapply()`. Using separate environments avoided object name clashes.
Table 3\.1: The apply family of functions from base R.
| Function | Description |
| --- | --- |
| `apply` | Apply functions over array margins |
| `by` | Apply a function to a data frame split by factors |
| `eapply` | Apply a function over values in an environment |
| `lapply` | Apply a function over a list or vector |
| `mapply` | Apply a function to multiple list or vector arguments |
| `rapply` | Recursively apply a function to a list |
| `sapply` | A user\-friendly wrapper of `lapply` that simplifies the result |
| `tapply` | Apply a function over a ragged array |
| `vapply` | Like `sapply`, but with a pre\-specified return type |
The `apply()` function is used to apply a function to each row or column of a matrix. In many data science
problems, this is a common task. For example, to calculate the standard deviation of the rows we have
```
data("ex_mat", package = "efficient")
# MARGIN=1: corresponds to rows
row_sd = apply(ex_mat, 1, sd)
```
The first argument of `apply()` is the object of interest. The second argument is the `MARGIN`. This is a vector giving the subscripts which the function (the third argument) will be applied over. When the object is a matrix, a margin of `1` indicates rows and `2` indicates columns. So to calculate the column standard deviations, the second argument is changed to `2`
```
col_sd = apply(ex_mat, 2, sd)
```
Additional arguments can be passed to the function that is to be applied to the data. For example, to pass the `na.rm` argument to the `sd` function, we have
```
row_sd = apply(ex_mat, 1, sd, na.rm = TRUE)
```
The `apply()` function also works on higher dimensional arrays; a one dimensional array is a vector, a two dimensional array is a matrix.
The `lapply()` function is similar to `apply()`, with the key difference being that the input type is a vector or list and the return type is a list. Essentially, we apply a function to each element of a list or vector. The functions `sapply()` and `vapply()` are similar to `lapply()`, but the return type is not necessarily a list.
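A minimal sketch contrasting the three return types:
```
l = list(a = 1:3, b = 4:6)
lapply(l, mean)               # always returns a list
sapply(l, mean)               # simplifies to a named numeric vector here
vapply(l, mean, numeric(1))   # like sapply(), but the return type is declared
```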
### 3\.5\.1 Example: the movies data set
The [Internet Movie Database](http://imdb.com/) is a website that collects movie data supplied by studios and fans. It is one of the largest movie databases on the web and is maintained by Amazon. The **ggplot2movies** package contains about sixty thousand movies stored as a data frame
```
data(movies, package = "ggplot2movies")
```
Movies are rated between \\(1\\) and \\(10\\) by fans. Columns \\(7\\) to \\(16\\) of the `movies` data set give the percentage of voters for a particular rating.
```
ratings = movies[, 7:16]
```
For example, 4\.5% of voters gave the first movie a rating of \\(1\\)
```
ratings[1, ]
#> # A tibble: 1 x 10
#> r1 r2 r3 r4 r5 r6 r7 r8 r9 r10
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 4.5 4.5 4.5 4.5 14.5 24.5 24.5 14.5 4.5 4.5
```
We can use the `apply()` function to investigate voting patterns. The function `nnet::which.is.max()` finds the maximum position in a vector, but breaks ties at random; `which.max()` just returns the first value. Using `apply()`, we can easily determine the most popular rating for each movie and plot the results
```
popular = apply(ratings, 1, nnet::which.is.max)
plot(table(popular))
```
Figure 3\.2: Movie voting preferences.
Figure [3\.2](programming.html#fig:3-2) highlights that voting patterns are clearly not uniform between \\(1\\) and \\(10\\). The most popular vote is the highest rating, \\(10\\). Clearly if you went to the trouble of voting for a movie, it was either very good, or very bad (there is also a peak at \\(1\\)). Rating a movie \\(7\\) is also a popular choice (search the web for “most popular number” and \\(7\\) dominates the rankings).
### 3\.5\.2 Type consistency
When programming, it is helpful if the return value from a function always takes the same form. Unfortunately, not all base R functions follow this idiom. For example, the functions `sapply()` and `[.data.frame()` aren’t type consistent
```
two_cols = data.frame(x = 1:5, y = letters[1:5])
zero_cols = data.frame()
sapply(two_cols, class) # a character vector
sapply(zero_cols, class) # a list
two_cols[, 1:2] # a data.frame
two_cols[, 1] # an integer vector
```
This can cause unexpected problems. The functions `lapply()` and `vapply()` are type consistent. Likewise for `dplyr::select()` and `dplyr::filter()`. The **purrr** package has some type consistent alternatives to base R functions. For example, `map_dbl()` (and other `map_*` functions) to replace `Map()` and `flatten_df()` to replace `unlist()`.
#### Other resources
Almost every R book has a section on the apply function. Below, we’ve given the resources we feel are most helpful.
* Each function has a number of examples in the associated help page. You can directly access the examples using the `example()` function, e.g. to run the `apply()` examples, use `example("apply")`.
* There is a very detailed StackOverflow [answer](http://stackoverflow.com/q/3505701/203420) which describes when, where and how to use each of the functions.
* In a similar vein, Neil Saunders has a nice blog [post](https://nsaunders.wordpress.com/2010/08/20/a-brief-introduction-to-apply-in-r/) giving an overview of the functions.
* The apply functions are an example of functional programming. Chapter 16 of *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)) describes the interplay between loops and functional programming in more detail, while H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)) gives a more in\-depth description of the topic.
#### Exercises
1. Rewrite the `sapply()` function calls above using `vapply()` to ensure type consistency.
2. How would you make subsetting data frames with `[` type consistent? Hint: look at
the `drop` argument.
3\.6 Caching variables
----------------------
A straightforward method for speeding up code is to calculate objects once and reuse the value when necessary. This could be as simple as replacing `sd(x)` in multiple function calls with the object `sd_x` that is defined once and reused. For example, suppose we wish to normalise each column of a matrix. However, instead of using the standard deviation of each column, we will use the standard deviation of the entire data set
```
# x is a numeric matrix, e.g. x = matrix(rnorm(100 * 1000), nrow = 100)
apply(x, 2, function(i) mean(i) / sd(x))
```
This is inefficient since the value of `sd(x)` is constant and thus recalculating the standard deviation for every column is unnecessary. Instead we should evaluate once and store the result
```
sd_x = sd(x)
apply(x, 2, function(i) mean(i) / sd_x)
```
If we compare the two methods on a \\(100\\) row by \\(1000\\) column matrix, the cached version is around \\(100\\) times faster (Figure [3\.3](programming.html#fig:3-4)).
Figure 3\.3: Performance gains obtained from caching the standard deviation in a \\(100\\) by \\(1000\\) matrix.
A more advanced form of caching is to use the **memoise** package. If a function is called multiple times with the same input, it may be possible to speed things up by keeping a cache of known answers that it can retrieve. The **memoise** package allows us to easily store the value of a function call and returns the cached result when the function is called again with the same arguments. This package trades off memory versus speed, since the memoised function stores all previous inputs and outputs. To cache a function, we simply pass it to the `memoise()` function.
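A minimal sketch of the idea, using a deliberately slow (hypothetical) function:
```
library("memoise")
slow_square = function(x) {
  Sys.sleep(1) # simulate an expensive computation
  x^2
}
fast_square = memoise(slow_square)
fast_square(4)  # first call: takes about one second
fast_square(4)  # second call: returns instantly from the cache
```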
The classic memoise example is the factorial function. Another example is caching results to limit calls to a web resource. For example, suppose we are developing a Shiny (an interactive graphic) application where the user can fit a regression line to data. The user can remove points and refit the line. An example function would be
```
# Argument indicates row to remove
plot_mpg = function(row_to_remove) {
data(mpg, package = "ggplot2")
mpg = mpg[-row_to_remove, ]
plot(mpg$cty, mpg$hwy)
lines(lowess(mpg$cty, mpg$hwy), col = 2)
}
```
We can use **memoise** to speed up repeated function calls by caching results. A quick benchmark
```
m_plot_mpg = memoise(plot_mpg)
microbenchmark(times = 10, unit = "ms", m_plot_mpg(10), plot_mpg(10))
#> Unit: milliseconds
#> expr min lq mean median uq max neval cld
#> m_plot_mpg(10) 0.04 4e-02 0.07 8e-02 8e-02 0.1 10 a
#> plot_mpg(10) 40.20 1e+02 95.52 1e+02 1e+02 107.1 10 b
```
suggests that we can obtain a \\(100\\)\-fold speed\-up.
#### Exercise
Construct a box plot of timings for the standard plotting function and the memoised version.
### 3\.6\.1 Function closures
The following section is meant to provide an introduction to function closures with example use cases. See H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)) for a detailed introduction.
More advanced caching is available using *function closures*. A closure in R is an object that contains functions bound to the environment the closure was created in. Technically all functions in R have this property, but we use the term function closure to denote functions where the environment is not in `.GlobalEnv`. One of the environments associated with a function is known as the enclosing environment, that is, where the function was created. This allows us to store values between function calls. Suppose we want to create a stop\-watch type function. This is easily achieved with a function closure
```
# <<- assigns values to the enclosing environment
stop_watch = function() {
start_time = stop_time = NULL
start = function() start_time <<- Sys.time()
stop = function() {
stop_time <<- Sys.time()
difftime(stop_time, start_time)
}
list(start = start, stop = stop)
}
watch = stop_watch()
```
The object `watch` is a list that contains two functions, one for starting the timer
```
watch$start()
```
the other for stopping the timer
```
watch$stop()
```
Without using function closures, the stop\-watch function would be longer, more complex and therefore more inefficient. When used properly, function closures are very useful programming tools for writing concise code.
#### Exercise
1. Write a stop\-watch function **without** using function closures.
2. Many stop\-watches have the ability to measure not only your overall time but also your individual laps. Add a `lap()` function to the `stop_watch()` function that will record individual times, while still keeping track of the overall time.
A related idea to function closures is non\-standard evaluation (NSE), or programming on the language. NSE crops up all the time in R. For example, when we execute `plot(height, weight)`, R automatically labels the x\- and y\-axes of the plot with `height` and `weight`. This is a powerful concept that enables us to simplify code. More detail is given about “Non\-standard evaluation” in H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)).
3\.7 The byte compiler
----------------------
The **compiler** package, written by R Core member Luke Tierney, has been part of R since version 2\.13\.0\. The **compiler** package allows R functions to be compiled, resulting in a byte code version that may run faster[8](#fn8). The compilation process eliminates a number of costly operations the interpreter has to perform, such as variable lookup.
Since R 2\.14\.0, all of the standard functions and packages in base R are pre\-compiled into byte\-code. This is illustrated by the base function `mean()`:
```
getFunction("mean")
#> function (x, ...)
#> UseMethod("mean")
#> <bytecode: 0x242e2c0>
#> <environment: namespace:base>
```
The third line contains the `bytecode` of the function. This means that the **compiler** package has translated the R function into another language that can be interpreted by a very fast interpreter. Amazingly the **compiler** package is almost entirely pure R, with just a few C support routines.
### 3\.7\.1 Example: the mean function
The **compiler** package comes with R, so we just need to load the package in the usual way
```
library("compiler")
```
Next we create an inefficient function for calculating the mean. This function takes in a vector, calculates the length and then updates the `m` variable.
```
mean_r = function(x) {
m = 0
n = length(x)
for (i in seq_len(n))
m = m + x[i] / n
m
}
```
This is clearly a bad function and we should just use the `mean()` function, but it’s a useful comparison. Compiling the function is straightforward
```
cmp_mean_r = cmpfun(mean_r)
```
Then we use the `microbenchmark()` function to compare the three variants
```
# Generate some data
x = rnorm(1000)
microbenchmark(times = 10, unit = "ms", # milliseconds
mean_r(x), cmp_mean_r(x), mean(x))
#> Unit: milliseconds
#> expr min lq mean median uq max neval cld
#> mean_r(x) 0.358 0.361 0.370 0.363 0.367 0.43 10 c
#> cmp_mean_r(x) 0.050 0.051 0.052 0.051 0.051 0.07 10 b
#> mean(x) 0.005 0.005 0.008 0.007 0.008 0.03 10 a
```
The compiled function is around seven times faster than the uncompiled function. Of course the native `mean()` function is faster, but compiling does make a significant difference (Figure [3\.4](programming.html#fig:3-3)).
Figure 3\.4: Comparison of mean functions.
### 3\.7\.2 Compiling code
There are a number of ways to compile code. The easiest is to compile individual functions using `cmpfun()`, but this obviously doesn’t scale. If you create a package, you can automatically compile the package on installation by adding
```
ByteCompile: true
```
to the `DESCRIPTION` file. Most R packages installed using `install.packages()` are not compiled. We can enable (or force) packages to be compiled by starting R with the environment variable `R_COMPILE_PKGS` set to a positive integer value and specifying that we install the package from `source`, i.e.
```
## Windows users will need Rtools
install.packages("ggplot2", type = "source")
```
Or if we want to avoid altering the `.Renviron` file, we can specify an additional argument
```
install.packages("ggplot2", type = "source", INSTALL_opts = "--byte-compile")
```
A final option is to use just\-in\-time (JIT) compilation. The `enableJIT()` function disables JIT compilation if the argument is `0`. Arguments `1`, `2`, or `3` implement different levels of optimisation. JIT can also be enabled by setting the environment variable `R_ENABLE_JIT`, to one of these values.
We recommend setting the compile level to the maximum value of 3\.
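For example (a minimal sketch; note that recent R releases enable JIT compilation by default):
```
library("compiler")
enableJIT(3)  # maximum optimisation level; returns the previous JIT level
```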
The impact of compiling on install will vary from package to package: for packages that already have lots of pre\-compiled code, speed gains will be small (R Core Team [2016](#ref-team2016installation)).
Not all packages work if compiled on installation.
| R Programming |
csgillespie.github.io | https://csgillespie.github.io/efficientR/workflow.html |
4 Efficient workflow
====================
Efficient programming is an important skill for generating the correct result, on time. Yet coding is only one part of a wider skillset needed for successful outcomes for projects involving R programming. Unless your project is to write generic R code (i.e., unless you are on the R Core Team), the project will probably transcend the confines of the R world; it must engage with a whole range of other factors. In this context, we define ‘workflow’ as the sum of practices, habits and systems that enable productivity.[9](#fn9) To some extent workflow is about personal preferences. Everyone’s mind works differently, so the most appropriate workflow varies from person to person and from one project to the next. Project management practices will also vary depending on the scale and type of the project; it’s a big topic but can usefully be condensed into 5 top tips.
### Prerequisites
This chapter focuses on workflow. For project planning and management, we’ll use the **DiagrammeR** package. For project reporting, we’ll focus on R Markdown and **knitr** which are bundled with RStudio (but can be installed independently if needed). We’ll suggest other packages that are worth investigating, but are not required for this particular chapter.
```
library("DiagrammeR")
```
4\.1 Top 5 tips for efficient workflow
--------------------------------------
1. Start without writing code but with a clear mind and perhaps a pen and paper. This will ensure you keep your objectives at the forefront of your mind, without getting lost in the technology.
2. Make a plan. The size and nature will depend on the project but timelines, resources and ‘chunking’ the work will make you more effective when you start.
3. Select the packages you will use for implementing the plan early. Minutes spent researching and selecting from the available options could save hours in the future.
4. Document your work at every stage; work can only be effective if it’s communicated clearly and code can only be efficiently understood if it’s commented.
5. Make your entire workflow as reproducible as possible. **knitr** can help with this in the phase of documentation.
4\.2 A project planning typology
--------------------------------
Appropriate project management structures and workflow depend on the *type* of project you are undertaking. The typology below demonstrates the links between project type and project management requirements.[10](#fn10)
* *Data analysis*. Here you are trying to explore datasets to discover something interesting/answer some questions. The emphasis is on the speed of manipulating your data to generate interesting results. Formality is less important in this type of project. Sometimes this analysis project may only be part of a larger project (the data may have to be created in a lab, for example). How the data analysts interact with the rest of the team may be as important for the project’s success as how they interact with each other.
* *Package creation*. Here you want to create code that can be reused across projects, possibly by people whose use case you don’t know (if you make it publicly available). The emphasis in this case will be on clarity of user interface and documentation, meaning style and code review are important. Robustness and testing are important in this type of project too.
* *Reporting and publishing*. Here you are writing a report, journal paper or book. The level of formality varies depending upon the audience, but you have additional worries like how much code it takes to arrive at the conclusions, and how much output the code creates.
* *Software applications*. This could range from a simple Shiny app to R being embedded in the server of a much larger piece of software. Either way, since there is limited opportunity for human interaction, the emphasis is on robust code and gracefully dealing with failure.
Based on these observations, we recommend thinking about which type of workflow, file structure and project management system suits your projects best. Sometimes it’s best not to be prescriptive so we recommend trying different working practices to discover which works best, time permitting.[11](#fn11)
There are, however, concrete steps that can be taken to improve workflow in most projects that involve R programming.
Learning them will, in the long\-run, improve productivity and reproducibility.
With these motivations in mind, the purpose of this chapter is simple: to highlight some key ingredients of an efficient R workflow.
It builds on the concept of an R/RStudio *project*, introduced in Chapter [2](set-up.html#set-up), and is ordered chronologically throughout the stages involved in a typical project’s lifespan, from its inception to publication:
* Project planning. This should happen before any code has been written, to avoid time wasted using a mistaken analysis strategy. Project management is the art of making project plans happen.
* Package selection. After planning your project, you should identify which packages are most suitable to get the work done quickly and effectively. With rapid increases in the number and performance of packages, it is more important than ever to consider the range of options at the outset. For example, `*_join()` from **dplyr** is often more appropriate than `merge()`, as we’ll see in Chapter [6](data-carpentry.html#data-carpentry).
* Publication. This final stage is relevant if you want your R code to be useful for others in the long term. To this end, Section [4\.5](workflow.html#publication) touches on documentation using **knitr** and the much stricter approach to coding for publication and package development.
4\.3 Project planning and management
------------------------------------
Good programmers working on a complex project will rarely just start typing code. Instead, they will plan the steps needed to complete the task as efficiently as possible: “smart preparation minimizes work” (Berkun [2005](#ref-berkun2005art)).
Although search engines are useful for identifying the appropriate strategy, trial\-and\-error approaches (for example, typing code at random and Googling the inevitable error messages) are usually highly *inefficient*.
Strategic thinking is especially important during a project’s inception; if you make a bad decision early on, it will have cascading negative impacts throughout the project’s entire lifespan. So detrimental and ubiquitous is this phenomenon in software development that a term has been coined to describe it: *technical debt*. This has been defined as “not quite right code which we postpone making it right” (Kruchten, Nord, and Ozkaya [2012](#ref-kruchten2012technical)). Dozens of academic papers have been written on the subject but, from the perspective of *beginning* a project (i.e., in the planning stage, where we are now), all you need to know is that it is absolutely vital to make sensible decisions at the outset. If you do not, your project may be doomed to failure or to incessant rounds of refactoring.
To minimise technical debt at the outset, the best place to start may be with a pen and paper and an open mind. Sketching out your ideas and deciding precisely what you want to do, free from the constraints of a particular piece of technology, can be a rewarding exercise before you begin.
Project planning and ‘visioning’ can be a creative process not always well\-suited to the linear logic of computing, despite recent advances in project management software, some of which are outlined in the bullet points below.
Scale can loosely be defined as the number of people working on a project. It should be considered at the outset because the importance of project management increases exponentially with the number of people involved.
Project management may be trivial for a small project but if you expect it to grow, implementing a structured workflow early could avoid problems later.
On small projects consisting of a ‘one off’ script, project management may be a distracting waste of time.
Large projects involving dozens of people, on the other hand, require much effort dedicated to project management: regular meetings, division of labour and a scalable project management system to track progress, issues and
priorities that will inevitably consume a large proportion of the project’s time. Fortunately, a multitude of dedicated project management systems have been developed to cater for projects across a range of scales. These include, in rough ascending order of scale and complexity:
* the interactive code sharing site [GitHub](https://github.com/), which is described in more detail in Chapter [9](collaboration.html#collaboration)
* [ZenHub](https://www.zenhub.com/), a browser plugin that is “the first and only project management suite that works natively within GitHub”
* web\-based and easy\-to\-use kanban tools such as [Trello](https://trello.com/) and [Jira](https://www.atlassian.com/software/jira)
* dedicated desktop project management software such as [ProjectLibre](http://sourceforge.net/projects/projectlibre/) and [GanttProject](https://sourceforge.net/projects/ganttproject)
* fully featured, enterprise scale open source project management systems such as [OpenProject](https://www.openproject.org/) and [Redmine](https://www.redmine.org/).
Regardless of the software (or lack thereof) used for project management, it involves considering the project’s aims in the context of available resources (e.g., computational and programmer resources), project scope, time\-scales and suitable software. And these things should be considered together. To take one example, is it worth the investment of time needed to learn a particular R package which is not essential to completing the project but which will make the code run faster? Does it make more sense to hire another programmer or invest in more computational resources to complete an urgent deadline?
Minutes spent thinking through such issues before writing a single line can save hours in the future. This is emphasised in books such as Berkun ([2005](#ref-berkun2005art)) and PMBoK ([2000](#ref-PMBoK2000)) and in useful online resources such as [teamgantt.com](https://teamgantt.com/guide-to-project-management/) and [lasa.org.uk](https://www.lasa.org.uk/uploads/publications/ictpublications/computanews_guides/lcgpm.pdf), which focus exclusively on project planning. This section condenses some of the most important lessons from this literature in the context of typical R projects (i.e., those involving data analysis, modelling and visualisation).
### 4\.3\.1 ‘Chunking’ your work
Once a project overview has been devised and stored, either in mind (for small projects, if you trust your memory as a storage medium!) or in writing, a plan with a timeline can be drawn up.
The up\-to\-date visualisation of this plan can be a powerful reminder to yourself and collaborators of progress on the project so far. More importantly, the timeline provides an overview of what needs to be done next.
Setting start dates and deadlines for each task will help prioritise the work and ensure you are on track.
Breaking a large project into smaller chunks is highly recommended, making huge, complex tasks more achievable and modular (PMBoK [2000](#ref-PMBoK2000)).
‘Chunking’ the work will also make collaboration easier, as we shall see in Chapter [5](input-output.html#input-output).
Figure 4\.1: Schematic illustrations of key project phases and levels of activity over time, based on PMBoK ([2000](#ref-PMBoK2000)).
The tasks that a project should be split into will depend on the nature of the work. The phases illustrated in Figure [4\.1](workflow.html#fig:4-1) represent a rough starting point, not a template; the ‘programming’ phase will usually need to be split into at least ‘data tidying’, ‘processing’, and ‘visualisation’.
### 4\.3\.2 Making your workflow SMART
A more rigorous (but potentially onerous) way to project plan is to divide the work into a series of objectives and track their progress throughout the project’s duration.
One way to check if an objective is appropriate for action and review is by using the SMART criteria:
* Specific: is the objective clearly defined and self\-contained?
* Measurable: is there a clear indication of its completion?
* Attainable: can the target be achieved? This can also refer to Assigned (to a person).
* Realistic: have sufficient resources been allocated to the task?
* Time\-bound: is there an associated completion date or milestone?
If the answer to each of these questions is ‘yes’, the task is likely to be suitable to include in the project’s plan.
Note that this does not mean all project plans need to be uniform.
A project plan can take many forms, including a short document, a Gantt chart (see Figure [4\.2](workflow.html#fig:4-2)) or simply a clear vision of the project’s steps in mind.
Figure 4\.2: A Gantt chart created using **DiagrammeR** illustrating the steps needed to complete this book at an early stage of its development.
### 4\.3\.3 Visualising plans with R
Various R packages can help visualise the project plan.
While these are useful, they cannot compete with the dedicated project management software outlined at the outset of this section. However, if you are working on a relatively simple project, it is useful to know that R can help represent and keep track of your work. Packages for plotting project progress include:[12](#fn12)
* [**plan**](https://cran.r-project.org/web/packages/plan/), a package that provides basic tools to create burndown charts (which concisely show whether a project is on\-time or not) and Gantt charts.
* [**plotrix**](https://cran.r-project.org/web/packages/plotrix/index.html), a general purpose plotting package that provides basic Gantt chart plotting functionality. Enter `example(gantt.chart)` for details.
* [**DiagrammeR**](http://rich-iannone.github.io/DiagrammeR/), a new package for creating network graphs and other schematic diagrams in R. This package provides an R interface to simple flow\-chart file formats such as [mermaid](https://github.com/mermaid-js/mermaid) and [GraphViz](https://gitlab.com/graphviz/graphviz/).
The small example below (which provides the basis for creating charts like Figure [4\.2](workflow.html#fig:4-2)) illustrates how **DiagrammeR** can take simple text inputs to create informative up\-to\-date Gantt charts.
Such charts can greatly help with the planning and task management of long and complex R projects, as long as they do not take away valuable programming time from core project objectives.
```
library("DiagrammeR")
# Define the Gantt chart and plot the result (not shown)
mermaid("gantt
Section Initiation
Planning :a1, 2016-01-01, 10d
Data processing :after a1 , 30d")
```
In the above code, `gantt` defines the subsequent data layout.
`Section` refers to the project’s section (useful for large projects, with milestones) and each new line refers to a discrete task.
`Planning`, for example, has the task ID `a1`, which begins on the first day of 2016 and lasts for 10 days, and is referenced by the following task, `Data processing`. See [mermaid\-js.github.io/mermaid/\#/gantt](http://mermaid-js.github.io/mermaid/#/gantt) for more detailed documentation.
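To extend the example, multiple sections and task dependencies can be combined into a slightly larger plan. The sections, tasks and dates below are invented purely for illustration:
```
library("DiagrammeR")
# A hypothetical two-section plan building on the syntax described above
mermaid("gantt
        Section Initiation
        Planning :a1, 2016-01-01, 10d
        Data processing :a2, after a1, 30d
        Section Delivery
        Visualisation :a3, after a2, 14d
        Report writing :a4, after a3, 7d")
```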
#### Exercises
1. What are the three most important work ‘chunks’ of your current R project?
2. What is the meaning of ‘SMART’ objectives (see [Making your workflow SMART](workflow.html#smart))?
3. Run the [code chunk](#DiagrammeR) at the end of this section to see the output.
4. Bonus exercise: modify this code to create a basic Gantt chart for an R project you are working on.
4\.4 Package selection
----------------------
A good example of the importance of prior planning to minimise effort and reduce technical debt is package selection. An inefficient, poorly supported or simply outdated package can waste hours. When a more appropriate alternative is available, this waste can be prevented by prior planning. There are many poor packages on CRAN and much duplication so it’s easy to go wrong. Just because a certain package *can* solve a particular problem, doesn’t mean that it *should*.
Used well, however, packages can greatly improve productivity: not reinventing the wheel is part of the ethos of open source software. If someone has already solved a particular technical problem, you don’t have to re\-write their code, allowing you to focus on solving the applied problem. Furthermore, because R packages are generally (but not always) written by competent programmers and subject to user feedback, they may work faster and more effectively than the hastily prepared code you may have written. All R code is open source and potentially subject to peer review. A prerequisite of publishing an R package is that developer contact details must be provided, and many packages provide a site for issue tracking. Furthermore, R packages can increase programmer productivity by dramatically reducing the amount of code they need to write because all the code is *packaged* in functions behind the scenes.
Let’s take an example. Imagine that, for a project, you would like to find the distance between sets of points (origins, `o`, and destinations, `d`) on the Earth’s surface. Background reading shows that a good approximation of ‘great circle’ distance, which accounts for the curvature of the Earth, can be made using the Haversine formula, which you duly implement, involving much trial and error:
```
# Function to convert degrees to radians
deg2rad = function(deg) deg * pi / 180
# Create origins and destinations
o = c(lon = -1.55, lat = 53.80)
d = c(lon = -1.61, lat = 54.98)
# Convert to radians
o_rad = deg2rad(o)
d_rad = deg2rad(d)
# Find differences in longitude and latitude (in radians)
delta_lon = o_rad[1] - d_rad[1]
delta_lat = o_rad[2] - d_rad[2]
# Calculate distance with Haversine formula
a = sin(delta_lat / 2)^2 + cos(o_rad[2]) * cos(d_rad[2]) * sin(delta_lon / 2)^2
c = 2 * asin(min(1, sqrt(a)))
(d_hav1 = 6371 * c) # multiply by Earth's mean radius in km
#> [1] 131
```
This method works, but it takes time to write, test and debug. It would be much better to package it up into a function, for example as sketched below.
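A minimal sketch of such a wrapper follows; the function name is hypothetical and it simply bundles the steps above, with the mean Earth radius as a default argument:

```
# Hypothetical wrapper: great circle distance in km between two
# c(lon, lat) points given in degrees
dist_haversine_km = function(o, d, r = 6371) {
  o_rad = o * pi / 180
  d_rad = d * pi / 180
  delta = o_rad - d_rad
  a = sin(delta[2] / 2)^2 + cos(o_rad[2]) * cos(d_rad[2]) * sin(delta[1] / 2)^2
  r * 2 * asin(min(1, sqrt(a)))
}
dist_haversine_km(o, d) # reuses o and d defined above
#> [1] 131
```

Or even better, use a function that someone else has written and put in a package: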
```
# Find great circle distance with geosphere (result in metres)
(d_hav2 = geosphere::distHaversine(o, d))
#> [1] 131415
```
The difference between the hard\-coded method and the package method is striking. One is seven lines of tricky R code involving many subsetting stages and small, similar functions (e.g., `sin` and `asin`) which are easy to confuse. The other is one line of simple code. Note also the units: the hard\-coded result is in kilometres, while **geosphere** returns metres. The package method using **geosphere** took perhaps a hundredth of the time *and* gave a more accurate result (because it uses a more accurate estimate of the radius of the Earth). This means that a couple of minutes searching for a package to estimate great circle distances would have been time well spent at the outset of this project. But how do you search for packages?
### 4\.4\.1 Searching for R packages
Building on the example above, how can one find out if there is a package to solve your particular problem? The first stage is to guess: if it is a common problem, someone has probably tried to solve it. The second stage is to search. A simple Google query, [`haversine formula R`](https://www.google.com/search?q=haversine+formula+R), returned links to various packages and a [blog post on a base R implementation](http://www.r-bloggers.com/great-circle-distance-calculations-in-r/).
Searching on sites dedicated to R can yield more specific and useful results.
The [r\-pkg.org](https://r-pkg.org/) website provides a simple yet effective online search system.
Entering `haversine` into its search bar yields the URL [r\-pkg.org/search.html?q\=haversine](https://r-pkg.org/search.html?q=haversine), which contains links to relevant packages.
Furthermore, undertaking searches for particular functions and packages from within R can save time and avoid the distractions of online searches via a web browser.
You can search *currently installed* packages with the command `??haversine`, although this will not help you find packages you’ve yet to install.
A simple solution is `RSiteSearch()` from the base R **utils** package, which opens a URL in your browser linking to a number of functions (49 at the time of writing) that mention the text string:
```
# Search CRAN for mentions of haversine
RSiteSearch("haversine")
```
To get more functionality, various packages dedicated to searching for R packages have been developed.
**pkgsearch** is a popular package that provides many options for searching for packages, and a basic example is shown below.
The results show that 4 relevant packages were identified and ranked, simplifying the search process.
```
haversine_pkgs = pkgsearch::pkg_search(query = "haversine")
haversine_pkgs
```
```
- "haversine" ---------------------------------------------------------------------------- 4 packages in 0.007 seconds -
# package version by @ title
1 100 hans 0.1 Alex Hallam 8M Haversines are not Slow
2 40 geodist 0.0.4 Mark Padgham 6d Fast, Dependency-Free Geodesic Distance Calculations
3 12 geosed 0.1.1 Shant Sukljian 9M Smallest Enclosing Disc for Latitude and Longitude Points
4 11 leaderCluster 1.2 Taylor B. Arnold 5y Leader Clustering Algorithm
```
Another website offering search functionality is [rdocumentation.org](http://www.rdocumentation.org/), which provides a search engine to pinpoint the function or package you need. A search for `haversine` in the Description field yielded 50\+ results from more than a dozen packages (as of summer 2020\): the community has contributed many implementations of the Haversine formula! This shows the importance of careful package selection, as there are often many packages that do the same job, as we see in the next section.
### 4\.4\.2 How to select a package
Due to the conservative nature of base R development, which rightly prioritises stability over innovation, much of the innovation and performance gains in the ‘R ecosystem’ have occurred in recent years in its packages.
The increased ease of package development (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)) and interfacing with other languages (e.g. Eddelbuettel et al. [2011](#ref-Eddelbuettel_2011)) has accelerated their number, quality and efficiency.
An additional factor has been the growth in collaboration and peer review in package development, driven by code\-sharing websites such as GitHub and online communities such as [ROpenSci](https://ropensci.org/) for peer reviewing code.
Performance, stability and ease of use should be high on the priority list when choosing which package to use.
Another more subtle factor is that some packages work better together than others.
The ‘R package ecosystem’ is composed of interrelated packages.
Knowing something of these inter\-dependencies can help select a ‘package suite’ when the project demands a number of diverse yet interrelated programming tasks.
The [**tidyverse**](https://www.tidyverse.org/), for example, is a ‘metapackage’ with interrelated packages that work well together, such as **readr**, **tidyr**, and **dplyr**.
These can be used together to read\-in, tidy and then process data, as outlined in the subsequent sections.
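As a brief illustrative sketch of such a pipeline (the file name and column names are hypothetical):
```
library("tidyverse")
# Read in, tidy, then process a hypothetical CSV of point coordinates
points = read_csv("points.csv") %>% # read in with readr
  drop_na(lon, lat) %>% # tidy: remove incomplete rows with tidyr
  mutate(hemisphere = if_else(lat >= 0, "N", "S")) # process with dplyr
```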
There is no ‘hard and fast’ rule about which package you should use and new packages are emerging all the time.
The ultimate test will be empirical evidence:
does it get the job done on your data?
However, the following criteria should provide a good indication of whether a package is worth an investment of your precious time, or even installing on your computer:
* **Is it mature?** The longer a package has been available, the more time there will have been for obvious bugs to be ironed out. The age of a package on CRAN can be seen from its Archive page. We can see from [cran.r\-project.org/src/contrib/Archive/ggplot2](https://cran.r-project.org/src/contrib/Archive/ggplot2/), for example, that **ggplot2** was first released on the 10th June 2007 and that it has had 37 releases.
The most recent of these at the time of writing was **ggplot2** 3\.3\.0:
a first digit of 1 or higher in a package’s version is usually an indication from the package author that the package has reached a high level of stability. The sketch after this list shows how to check a package’s version and publication date from within R.
* **Is it actively developed?** It is a good sign if packages are frequently updated. A frequently updated package will have its latest version ‘published’ recently on CRAN. The CRAN package page for **ggplot2**, for example, said `Published: 2020-03-05`, less than three months old at the time of writing.
* **Is it well documented?** This is not only an indication of how much thought, care and attention has gone into the package. It also has a direct impact on its ease of use. Using a poorly documented package can be inefficient due to the hours spent trying to work out how to use it! To check if the package is well documented, look at the help pages associated with its key functions (e.g., `?ggplot`), try the examples (e.g., `example(ggplot)`) and search for package vignettes (e.g., `vignette(package = "ggplot2")`).
* **Is it well used?** This can be seen by searching for the package name online. Most packages that have a strong user base will produce thousands of results when typed into a generic search engine such as Google’s. More specific (and potentially useful) indications of use will narrow down the search to particular users. A package widely used by the programming community will likely be visible on GitHub. At the time of writing, a search for [**ggplot2**](https://github.com/search?utf8=%E2%9C%93&q=ggplot2) on GitHub yielded over 3,000 repositories and over 1,000,000 matches in committed code!
Likewise, a package that has been adopted for use in academia will tend to be mentioned in Google Scholar (again, **ggplot2** scores extremely well in this measure, with almost 27,000 hits).
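The first two of these checks can be made from within R for any installed package; a minimal sketch using base R’s **utils** functions:
```
# Check an installed package's version and its CRAN publication date
packageVersion("ggplot2")
packageDescription("ggplot2", fields = "Date/Publication")
```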
An article in [simplystats](https://simplystatistics.org/2015/11/06/how-i-decide-when-to-trust-an-r-package/) discusses this issue with reference to the proliferation of GitHub packages (those that are not available on CRAN). In this context, well\-regarded and experienced package creators and ‘indirect data’, such as amount of GitHub activity, are also highlighted as reasons to trust a package.
The websites [MRAN](https://mran.revolutionanalytics.com/packages) and [METACRAN](https://www.r-pkg.org) can help the package selection process by providing further information on each package uploaded to CRAN. [METACRAN](https://www.r-pkg.org), for example, provides metadata about R packages via a simple API and the provision of ‘badges’ to show how many downloads a particular package has per month. Returning to the Haversine example above, we could find out how many times two packages that implement the formula are downloaded each month via the following badge URLs:
* `https://cranlogs.r-pkg.org/badges/last-month/geosphere` for downloads of **geosphere**
* `https://cranlogs.r-pkg.org/badges/last-month/geoPlot` for downloads of **geoPlot**
These download counts make it clear that **geosphere** is by far the more popular package, so it is a sensible and mature choice for dealing with distances on the Earth’s surface.
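The same comparison can be made from R itself via the **cranlogs** package, which powers these badges. A minimal sketch, assuming the package is installed and an internet connection is available:
```
# Total CRAN downloads of the two candidate packages over the last month
dl = cranlogs::cran_downloads(packages = c("geosphere", "geoPlot"),
                              when = "last-month")
aggregate(count ~ package, data = dl, FUN = sum)
```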
4\.5 Publication
----------------
The final stage in a typical project workflow is publication. Although it’s the final stage to be worked on, that does not mean you should only document *after* the other stages are complete: making documentation integral to your overall workflow will make this stage much easier and more efficient.
Whether the final output is a report containing graphics produced by R, an online platform for exploring results or well\-documented code that colleagues can use to improve their workflow, starting it early is a good plan. In every case, the programming principles of reproducibility, modularity and DRY (don’t repeat yourself) will make your publications faster to write, easier to maintain and more useful to others.
Instead of attempting a comprehensive treatment of the topic, we will touch briefly on a couple of ways of documenting your work in R: dynamic reports and R packages. A wealth of online resources exists on each of these; to avoid duplication of effort, the focus is on documentation from a workflow efficiency perspective.
### 4\.5\.1 Dynamic documents with R Markdown
When writing a report using R outputs, a typical workflow has historically been to 1\) do the analysis, 2\) save the resulting graphics and record the main results outside the R project, and 3\) open a program unrelated to R, such as LibreOffice, to import and communicate the results in prose. This is inefficient: it makes updating and maintaining the outputs difficult (when the data changes, steps 1 to 3 will have to be done again) and there is an overhead involved in jumping between incompatible computing environments.
To overcome this inefficiency in the documentation of R outputs, the R Markdown framework was developed. Used in conjunction with the **knitr** package, it provides:
* the ability to process code chunks (via **knitr**)
* a notebook interface for R (via RStudio)
* the ability to render output to multiple formats (via pandoc).
R Markdown documents are plain text and have file extension `.Rmd`. This framework allows for documents to be generated automatically. Furthermore, *nothing* is efficient unless you can quickly redo it. Documenting your code inside dynamic documents in this way ensures that analysis can be quickly re\-run.
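For example, an entire report can be regenerated with a single command; the file name here is hypothetical:
```
# Re-run the analysis and rebuild the report in one step
rmarkdown::render("report.Rmd")
```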
This note briefly explains R Markdown for the uninitiated. R Markdown is a form of Markdown, a plain\-text document format that has become a standard for software documentation. It is the default format for displaying text on GitHub. R Markdown allows the user to embed R code in a Markdown document, a powerful addition that allows custom images, tables and even interactive visualisations to be included in your R documents. R Markdown is an efficient file format to write in because it is light\-weight, human and computer readable, and much less verbose than HTML and LaTeX. This book was written in R Markdown.
In an R Markdown document, results are generated *on the fly* by including ‘code chunks’. Code chunks are R code preceded by ```` ```{r, options} ```` on the line before the R code, and closed by ```` ``` ```` at the end of the chunk. For example, suppose we have the following code chunk:
```
```{r eval = TRUE, echo = TRUE}
(1:5)^2
```
```
in an R Markdown document. The `eval = TRUE` in the code indicates that the code should be evaluated while `echo = TRUE` controls whether the R code is displayed. When we compile the document, we get
```
(1:5)^2
#> [1] 1 4 9 16 25
```
R Markdown via **knitr** provides a wide range of options to customise what is displayed and evaluated. Once you adapt to this workflow, it is highly efficient, especially as RStudio provides a number of shortcuts that make it easy to create and modify code chunks. To create a chunk while editing a `.Rmd` file, for example, simply enter `Ctrl+Alt+I` on Windows or Linux (`Cmd+Option+I` on a Mac), or select the option from the Code drop\-down menu in RStudio.
Once your document has compiled, it should appear on your screen in the file format requested. If an html file has been generated (as is the default), RStudio provides a feature that allows you to put it up online rapidly.
This is done using the [rpubs](https://rpubs.com) website, a store of a huge number of dynamic documents (which could be a good source of inspiration for your publications).
Assuming you have an RStudio account, clicking the ‘Publish’ button at the top of the html output window will instantly publish your work online, with a minimum amount of effort, enabling fast and efficient communication with many collaborators and the public.
An important advantage of dynamically documenting work this way is that when the data or analysis code changes, the results will be updated in the document automatically. This can save hours of fiddly copying and pasting of R output between different programs. Also, if your client wants pages and pages of documented output, **knitr** can provide them with a minimum amount of typing; e.g., creating slightly different versions of the same plot over and over again. From a delivery of content perspective, that is certainly an efficiency gain compared with hours of copying and pasting figures!
If your R Markdown documents include time\-consuming processing stages, a speed boost can be attained after the first build by setting `knitr::opts_chunk$set(cache = TRUE)` in the first chunk of the document. This setting was used to reduce the build times of this book, as can be seen on [GitHub](https://github.com/csgillespie/efficientR/blob/master/code/before_script.R).
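A minimal sketch of such a ‘setup’ chunk (the chunk label `setup` is conventional; `include = FALSE` hides the chunk itself from the rendered output):
```
```{r setup, include = FALSE}
# Cache all subsequent chunk results to speed up later builds
knitr::opts_chunk$set(cache = TRUE)
```
```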
Furthermore, dynamic documents written in R Markdown can compile into a range of output formats including html, pdf and Microsoft Word’s docx. There is a wealth of information on the details of dynamic report writing that is not worth replicating here. Key references are RStudio’s excellent website on R Markdown hosted at [rmarkdown.rstudio.com](https://rmarkdown.rstudio.com/), and for a more detailed account of dynamic documents with R, see Xie ([2015](#ref-xie2015dynamic)).
### 4\.5\.2 R packages
A strict approach to project management and workflow is to treat your projects as R packages. This approach has advantages and limitations. The major risk is that a package is quite a strict way of organising work: packages are suited to code\-intensive projects where code documentation is important. An intermediate approach is to use a ‘dummy package’ that includes a `DESCRIPTION` file in the root directory telling users of the project which packages must be installed for the code to work. This book is based on a dummy package so that we can easily keep the dependencies up\-to\-date (see the book’s [DESCRIPTION](https://github.com/csgillespie/efficientR/blob/master/DESCRIPTION) file online for an insight into how this works).
Creating packages is good practice in terms of learning to correctly document your code, store example data, and even (via vignettes) ensure reproducibility. But it can take a lot of extra time, so it should not be taken lightly. This approach to R workflow is appropriate for managing complex projects which repeatedly use the same routines that can be converted into functions. Creating project packages can provide a foundation for generalising your code for use by others; e.g., via publication on GitHub and/or CRAN. Additionally, R package development has been made much easier in recent years by the development of the **devtools** package, which is highly recommended for anyone attempting to write an R package.
A number of essential elements of R packages differentiate them from other R projects. Three of these are outlined below from an efficiency perspective.
* The [`DESCRIPTION`](http://r-pkgs.had.co.nz/description.html) file contains key information about the package, including which packages are required for the code contained in your package to work, e.g. using `Imports:`. This is efficient because it means that anyone who installs your package will automatically install the other packages that it depends on.
* The `R/` folder contains all the R code that defines your package’s functions. Placing your code in a single place encourages you to make your code modular, which greatly reduces duplication of code on large projects. Furthermore, the documentation of R packages through [Roxygen tags](http://r-pkgs.had.co.nz/man.html#man-workflow), such as `#' This function does this...`, makes it easy for others to use your work; a minimal sketch of a documented function appears after this list. This form of efficient documentation is facilitated by the **roxygen2** package.
* The `data/` folder contains example data for demonstrating to others how the functions work and for transporting datasets that will be frequently used in your workflow. Data can be added automatically to your package project using the **devtools** package, with `devtools::use_data()`. This can increase efficiency by providing a way of distributing small to medium sized datasets and making them available when the package is loaded with the function `data("data_set_name")`.
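As promised above, here is a minimal sketch of a roxygen2\-documented function as it might appear in a file under `R/`; the function and its name are hypothetical, reusing the Haversine example from Section 4\.4:
```
#' Great circle distance in kilometres
#'
#' @param o,d Numeric vectors of length two giving c(lon, lat) in degrees.
#' @return The Haversine distance between o and d in kilometres.
#' @export
dist_km = function(o, d) {
  geosphere::distHaversine(o, d) / 1000
}
```
Running `devtools::document()` would then generate the corresponding help page.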
The package **testthat** makes it easier than ever to test your R code as you go, ensuring that nothing breaks. This, combined with ‘continuous integration’ services, such as that provided by [Travis](https://travis-ci.org/), make updating your code base as efficient and robust as possible. This, and more, is described in Cotton ([2016](#ref-cotton_testing_2016)[b](#ref-cotton_testing_2016)).
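As a hedged illustration of what such a test might look like (the file location follows **testthat** conventions; the expected range comes from the **geosphere** result above):
```
# tests/testthat/test-distance.R (hypothetical file)
library("testthat")
test_that("distHaversine() returns metres, not kilometres", {
  d = geosphere::distHaversine(c(-1.55, 53.80), c(-1.61, 54.98))
  expect_gt(d, 131000) # ~131 km, expressed in metres
  expect_lt(d, 132000)
})
```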
As with dynamic documents, package development is a large topic. For small ‘one\-off’ projects, the time taken in learning how to set\-up a package may not be worth the savings. However, packages provide a rigorous way of storing code, data and documentation that can greatly boost productivity in the long\-run. For more on R packages, see H. Wickham ([2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)); the online version provides all you need to know about writing R packages for free (see [r\-pkgs.had.co.nz/](http://r-pkgs.had.co.nz/)).
4\.3 Project planning and management
------------------------------------
Good programmers working on a complex project will rarely just start typing code. Instead, they will plan the steps needed to complete the task as efficiently as possible: “smart preparation minimizes work” (Berkun [2005](#ref-berkun2005art)).
Although search engines are useful for identifying the appropriate strategy, trial\-and\-error approaches (for example, typing code at random and Googling the inevitable error messages) are usually highly *inefficient*.
Strategic thinking is especially important during a project’s inception; if you make a bad decision early on, it will have cascading negative impacts throughout the project’s entire lifespan. So detrimental and ubiquitous is this phenomenon in software development that a term has been coined to describe it: *technical debt*. This has been defined as “not quite right code which we postpone making it right” (Kruchten, Nord, and Ozkaya [2012](#ref-kruchten2012technical)). Dozens of academic papers have been written on the subject but, from the perspective of *beginning* a project (i.e., in the planning stage, where we are now), all you need to know is that it is absolutely vital to make sensible decisions at the outset. If you do not, your project may be doomed to failure of incessant rounds of refactoring.
To minimise technical debt at the outset, the best place to start may be with a pen and paper and an open mind. Sketching out your ideas and deciding precisely what you want to do, free from the constraints of a particular piece of technology, can be a rewarding exercise before you begin.
Project planning and ‘visioning’ can be a creative process not always well\-suited to the linear logic of computing, despite recent advances in project management software, some of which are outlined in the bullet points below.
Scale can loosely be defined as the number of people working on a project. It should be considered at the outset because the importance of project management increases exponentially with the number of people involved.
Project management may be trivial for a small project but if you expect it to grow, implementing a structured workflow early could avoid problems later.
On small projects consisting of a ‘one off’ script, project management may be a distracting waste of time.
Large projects involving dozens of people, on the other hand, require much effort dedicated to project management: regular meetings, division of labour and a scalable project management system to track progress, issues and
priorities that will inevitably consume a large proportion of the project’s time. Fortunately, a multitude of dedicated project management systems have been developed to cater for projects across a range of scales. These include, in rough ascending order of scale and complexity:
* the interactive code sharing site [GitHub](https://github.com/), which is described in more detail in Chapter [9](collaboration.html#collaboration)
* [ZenHub](https://www.zenhub.com/), a browser plugin that is “the first and only project management suite that works natively within GitHub”
* web\-based and easy\-to\-use kanban tools such as [Trello](https://trello.com/) and [Jira](https://www.atlassian.com/software/jira)
* dedicated desktop project management software such as [ProjectLibre](http://sourceforge.net/projects/projectlibre/) and [GanttProject](https://sourceforge.net/projects/ganttproject)
* fully featured, enterprise scale open source project management systems such as [OpenProject](https://www.openproject.org/) and [Redmine](https://www.redmine.org/).
Regardless of the software (or lack thereof) used for project management, it involves considering the project’s aims in the context of available resources (e.g., computational and programmer resources), project scope, time\-scales and suitable software. And these things should be considered together. To take one example, is it worth the investment of time needed to learn a particular R package which is not essential to completing the project but which will make the code run faster? Does it make more sense to hire another programmer or invest in more computational resources to complete an urgent deadline?
Minutes spent thinking through such issues before writing a single line can save hours in the future. This is emphasised in books such as Berkun ([2005](#ref-berkun2005art)) and PMBoK ([2000](#ref-PMBoK2000)) and useful online resources, such as [teamgantt.com](https://teamgantt.com/guide-to-project-management/) and
[lasa.org.uk](https://www.lasa.org.uk/uploads/publications/ictpublications/computanews_guides/lcgpm.pdf),
which focus exclusively on project planning. This section condenses some of the most important lessons from this literature in the context of typical R projects (i.e., which involve data analysis, modelling and visualisation).
### 4\.3\.1 ‘Chunking’ your work
Once a project overview has been devised and stored, in mind (for small projects, if you trust that as storage medium!) or written, a plan with a timeline can be drawn\-up.
The up\-to\-date visualisation of this plan can be a powerful reminder to yourself and collaborators of progress on the project so far. More importantly, the timeline provides an overview of what needs to be done next.
Setting start dates and deadlines for each task will help prioritise the work and ensure you are on track.
Breaking a large project into smaller chunks is highly recommended, making huge, complex tasks more achievable and modular (PMBoK [2000](#ref-PMBoK2000)).
‘Chunking’ the work will also make collaboration easier, as we shall see in Chapter [5](input-output.html#input-output).
Figure 4\.1: Schematic illustrations of key project phases and levels of activity over time, based on PMBoK ([2000](#ref-PMBoK2000)).
The tasks that a project should be split into will depend on the nature of the work and the phases illustrated in Figure [4\.1](workflow.html#fig:4-1) represent a rough starting point, not a template, and the ‘programming’ phase will usually need to be split into at least ‘data tidying’, ‘processing’, and ‘visualisation’.
### 4\.3\.2 Making your workflow SMART
A more rigorous (but potentially onerous) way to project plan is to divide the work into a series of objectives and track their progress throughout the project’s duration.
One way to check if an objective is appropriate for action and review is by using the SMART criteria:
* Specific: is the objective clearly defined and self\-contained?
* Measurable: is there a clear indication of its completion?
* Attainable: can the target be achieved? This can also refer to Assigned (to a person).
* Realistic: have sufficient resources been allocated to the task?
* Time\-bound: is there an associated completion date or milestone?
If the answer to each of these questions is ‘yes’, the task is likely to be suitable to include in the project’s plan.
Note that this does not mean all project plans need to be uniform.
A project plan can take many forms, including a short document, a Gantt chart (see Figure [4\.2](workflow.html#fig:4-2)) or simply a clear vision of the project’s steps in mind.
Figure 4\.2: A Gantt chart created using **DiagrammeR** illustrating the steps needed to complete this book at an early stage of its development.
### 4\.3\.3 Visualising plans with R
Various R packages can help visualise the project plan.
While these are useful, they cannot compete with the dedicated project management software outlined at the outset of this section. However, if you are working on a relatively simple project, it is useful to know that R can help represent and keep track of your work. Packages for plotting project progress include:[12](#fn12)
* [**plan**](https://cran.r-project.org/web/packages/plan/), a package that provides basic tools to create burndown charts (which concisely show whether a project is on\-time or not) and Gantt charts.
* [**plotrix**](https://cran.r-project.org/web/packages/plotrix/index.html), a general purpose plotting package that provides basic Gantt chart plotting functionality. Enter `example(gantt.chart)` for details.
* [**DiagrammeR**](http://rich-iannone.github.io/DiagrammeR/), a new package for creating network graphs and other schematic diagrams in R. This package provides an R interface to simple flow\-chart file formats such as [mermaid](https://github.com/mermaid-js/mermaid) and [GraphViz](https://gitlab.com/graphviz/graphviz/).
The small example below (which provides the basis for creating charts like Figure [4\.2](workflow.html#fig:4-2)) illustrates how **DiagrammeR** can take simple text inputs to create informative up\-to\-date Gantt charts.
Such charts can greatly help with the planning and task management of long and complex R projects, as long as they do not take away valuable programming time from core project objectives.
```
library("DiagrammeR")
# Define the Gantt chart and plot the result (not shown)
mermaid("gantt
Section Initiation
Planning :a1, 2016-01-01, 10d
Data processing :after a1 , 30d")
```
In the above code, `gantt` defines the subsequent data layout.
`Section` refers to the project’s section (useful for large projects, with milestones) and each new line refers to a discrete task.
`Planning`, for example, has the task ID `a1`, which begins on the first day of 2016 and lasts for 10 days, and is referenced by the following task, `Data processing`. See [mermaid\-js.github.io/mermaid/\#/gantt](http://mermaid-js.github.io/mermaid/#/gantt) for more detailed documentation.
#### Exercises
1. What are the three most important work ‘chunks’ of your current R project?
2. What is the meaning of ‘SMART’ objectives (see [Making your workflow SMART](workflow.html#smart))?
3. Run the [code chunk](#DiagrammeR) at the end of this section to see the output.
4. Bonus exercise: modify this code to create a basic Gantt chart for an R project you are working on.
### 4\.3\.1 ‘Chunking’ your work
Once a project overview has been devised and stored, in mind (for small projects, if you trust that as storage medium!) or written, a plan with a timeline can be drawn\-up.
The up\-to\-date visualisation of this plan can be a powerful reminder to yourself and collaborators of progress on the project so far. More importantly, the timeline provides an overview of what needs to be done next.
Setting start dates and deadlines for each task will help prioritise the work and ensure you are on track.
Breaking a large project into smaller chunks is highly recommended, making huge, complex tasks more achievable and modular (PMBoK [2000](#ref-PMBoK2000)).
‘Chunking’ the work will also make collaboration easier, as we shall see in Chapter [5](input-output.html#input-output).
Figure 4\.1: Schematic illustrations of key project phases and levels of activity over time, based on PMBoK ([2000](#ref-PMBoK2000)).
The tasks that a project should be split into will depend on the nature of the work and the phases illustrated in Figure [4\.1](workflow.html#fig:4-1) represent a rough starting point, not a template, and the ‘programming’ phase will usually need to be split into at least ‘data tidying’, ‘processing’, and ‘visualisation’.
### 4\.3\.2 Making your workflow SMART
A more rigorous (but potentially onerous) way to project plan is to divide the work into a series of objectives and track their progress throughout the project’s duration.
One way to check if an objective is appropriate for action and review is by using the SMART criteria:
* Specific: is the objective clearly defined and self\-contained?
* Measurable: is there a clear indication of its completion?
* Attainable: can the target be achieved? This can also refer to Assigned (to a person).
* Realistic: have sufficient resources been allocated to the task?
* Time\-bound: is there an associated completion date or milestone?
If the answer to each of these questions is ‘yes’, the task is likely to be suitable to include in the project’s plan.
Note that this does not mean all project plans need to be uniform.
A project plan can take many forms, including a short document, a Gantt chart (see Figure [4\.2](workflow.html#fig:4-2)) or simply a clear vision of the project’s steps in mind.
Figure 4\.2: A Gantt chart created using **DiagrammeR** illustrating the steps needed to complete this book at an early stage of its development.
### 4\.3\.3 Visualising plans with R
Various R packages can help visualise the project plan.
While these are useful, they cannot compete with the dedicated project management software outlined at the outset of this section. However, if you are working on a relatively simple project, it is useful to know that R can help represent and keep track of your work. Packages for plotting project progress include:[12](#fn12)
* [**plan**](https://cran.r-project.org/web/packages/plan/), a package that provides basic tools to create burndown charts (which concisely show whether a project is on\-time or not) and Gantt charts.
* [**plotrix**](https://cran.r-project.org/web/packages/plotrix/index.html), a general purpose plotting package that provides basic Gantt chart plotting functionality. Enter `example(gantt.chart)` for details.
* [**DiagrammeR**](http://rich-iannone.github.io/DiagrammeR/), a new package for creating network graphs and other schematic diagrams in R. This package provides an R interface to simple flow\-chart file formats such as [mermaid](https://github.com/mermaid-js/mermaid) and [GraphViz](https://gitlab.com/graphviz/graphviz/).
The small example below (which provides the basis for creating charts like Figure [4\.2](workflow.html#fig:4-2)) illustrates how **DiagrammeR** can take simple text inputs to create informative up\-to\-date Gantt charts.
Such charts can greatly help with the planning and task management of long and complex R projects, as long as they do not take away valuable programming time from core project objectives.
```
library("DiagrammeR")
# Define the Gantt chart and plot the result (not shown)
mermaid("gantt
Section Initiation
Planning :a1, 2016-01-01, 10d
Data processing :after a1 , 30d")
```
In the above code, `gantt` defines the subsequent data layout.
`Section` refers to the project’s section (useful for large projects, with milestones) and each new line refers to a discrete task.
`Planning`, for example, has the task ID `a1`, which begins on the first day of 2016 and lasts for 10 days, and is referenced by the following task, `Data processing`. See [mermaid\-js.github.io/mermaid/\#/gantt](http://mermaid-js.github.io/mermaid/#/gantt) for more detailed documentation.
#### Exercises
1. What are the three most important work ‘chunks’ of your current R project?
2. What is the meaning of ‘SMART’ objectives (see [Making your workflow SMART](workflow.html#smart))?
3. Run the [code chunk](#DiagrammeR) at the end of this section to see the output.
4. Bonus exercise: modify this code to create a basic Gantt chart for an R project you are working on.
#### Exercises
1. What are the three most important work ‘chunks’ of your current R project?
2. What is the meaning of ‘SMART’ objectives (see [Making your workflow SMART](workflow.html#smart))?
3. Run the [code chunk](#DiagrammeR) at the end of this section to see the output.
4. Bonus exercise: modify this code to create a basic Gantt chart for an R project you are working on.
4\.4 Package selection
----------------------
A good example of the importance of prior planning to minimise effort and reduce technical debt is package selection. An inefficient, poorly supported or simply outdated package can waste hours. When a more appropriate alternative is available, this waste can be prevented by prior planning. There are many poor packages on CRAN and much duplication so it’s easy to go wrong. Just because a certain package *can* solve a particular problem, doesn’t mean that it *should*.
Used well, however, packages can greatly improve productivity: not reinventing the wheel is part of the ethos of open source software. If someone has already solved a particular technical problem, you don’t have to re\-write their code, allowing you to focus on solving the applied problem. Furthermore, because R packages are generally (but not always) written by competent programmers and subject to user feedback, they may work faster and more effectively than the hastily prepared code you may have written. All R code is open source and potentially subject to peer review. A prerequisite of publishing an R package is that developer contact details must be provided, and many packages provide a site for issue tracking. Furthermore, R packages can increase programmer productivity by dramatically reducing the amount of code they need to write because all the code is *packaged* in functions behind the scenes.
Let’s take an example. Imagine for a project you would like to find the distance between sets of points (origins, `o` and destinations, `d`) on the Earth’s surface. Background reading shows that a good approximation of ‘great circle’ distance, which accounts for the curvature of the Earth, can be made by using the Haversine formula, which you duly implement, involving much trial and error:
```
# Function to convert degrees to radians
deg2rad = function(deg) deg * pi / 180
# Create origins and destinations
o = c(lon = -1.55, lat = 53.80)
d = c(lon = -1.61, lat = 54.98)
# Convert to radians
o_rad = deg2rad(o)
d_rad = deg2rad(d)
# Find difference in degrees
delta_lon = (o_rad[1] - d_rad[1])
delta_lat = (o_rad[2] - d_rad[2])
# Calculate distance with Haversine formula
a = sin(delta_lat / 2)^2 + cos(o_rad[2]) * cos(d_rad[2]) * sin(delta_lon / 2)^2
c = 2 * asin(min(1, sqrt(a)))
(d_hav1 = 6371 * c) # multiply by Earth's diameter
#> [1] 131
```
This method works but it takes time to write, test and debug. It would be much better to package it up into a function. Or even better, use a function that someone else has written and put in a package:
```
# Find great circle distance with geosphere
(d_hav2 = geosphere::distHaversine(o, d))
#> [1] 131415
```
The difference between the hard\-coded method and the package method is striking. One is 7 lines of tricky R code involving many subsetting stages and small, similar functions (e.g., `sin` and `asin`) which are easy to confuse. The other is one line of simple code. The package method using **geosphere** took perhaps 100th of the time *and* gave a more accurate result (because it uses a more accurate estimate of the diameter of the Earth). This means that a couple of minutes searching for a package to estimate great circle distances would have been time well spent at the outset of this project. But how do you search for packages?
### 4\.4\.1 Searching for R packages
Building on the example above, how can one find out if there is a package to solve your particular problem? The first stage is to guess: if it is a common problem, someone has probably tried to solve it. The second stage is to search. A simple Google query, [`haversine formula R`](https://www.google.com/search?q=haversine+formula+R), returned links to various packages and a [blog post on a base R implementation](http://www.r-bloggers.com/great-circle-distance-calculations-in-r/).
Searching on sites dedicated to R can yield more specific and useful results.
The [r\-pkg.org](https://r-pkg.org/) website provides a simple yet effective online search system.
Entering `haversine` into its search bar yields the URL [r\-pkg.org/search.html?q\=haversine](https://r-pkg.org/search.html?q=haversine), which contains links to relevant packages.
Furthermore, undertaking searches for particular functions and packages from within R can save time and avoid the distractions of online searches via a web browser.
You can search *currently installed* packages with the command `??haversine`, although this will not help you find pacakges you’ve yet to install.
A simple solution is the `RSiteSearch()` function from the base R **utils** package opens a url in your browser linking to a number of functions (49 at the time of writing) mentioning the text string, with the following command:
```
# Search CRAN for mentions of haversine
RSiteSearch("haversine")
```
To get more functionality, various packages dedicated to searching for R packages have been developed.
**pkgsearch** is a popular package that provides many options for searching for packages, and a basic example is shown below.
The results show that 4 relevant packages were identified and ranked, simplifying the search process.
```
haversine_pkgs = pkgsearch::pkg_search(query = "haversine")
haversine_pkgs
```
```
- "haversine" ---------------------------------------------------------------------------- 4 packages in 0.007 seconds -
# package version by @ title
1 100 hans 0.1 Alex Hallam 8M Haversines are not Slow
2 40 geodist 0.0.4 Mark Padgham 6d Fast, Dependency-Free Geodesic Distance Calculations
3 12 geosed 0.1.1 Shant Sukljian 9M Smallest Enclosing Disc for Latitude and Longitude Points
4 11 leaderCluster 1.2 Taylor B. Arnold 5y Leader Clustering Algorithm
```
Another website offering search functionality is [rdocumentation.org](http://www.rdocumentation.org/), which provides a search engine to pinpoint the function or package you need. The search for `haversine` in the Description field yielded 50\+ results from more than a dozen packages (as of summer 2020\) packages: the community has contributed to many implementations of the Haversine formula! This shows the importance of careful package selection as there are often many packages that do the same job, as we see in the next section.
### 4\.4\.2 How to select a package
Due to the conservative nature of base R development, which rightly prioritises stability over innovation, much of the innovation and performance gains in the ‘R ecosystem’ has occurred in recent years in its packages.
The increased ease of package development (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)) and interfacing with other languages (e.g. Eddelbuettel et al. [2011](#ref-Eddelbuettel_2011)) has accelerated their number, quality and efficiency.
An additional factor has been the growth in collaboration and peer review in package development, driven by code\-sharing websites such as GitHub and online communities such as [ROpenSci](https://ropensci.org/) for peer reviewing code.
Performance, stability and ease of use should be high on the priority list when choosing which package to use.
Another more subtle factor is that some packages work better together than others.
The ‘R package ecosystem’ is composed of interrelated packages.
Knowing something of these inter\-dependencies can help select a ‘package suite’ when the project demands a number of diverse yet interrelated programming tasks.
The [**tidyverse**](https://www.tidyverse.org/), for example, is a ‘metapackage’ with interrelated packages that work well together, such as **readr**, **tidyr**, and **dplyr**.
These can be used together to read\-in, tidy and then process data, as outlined in the subsequent sections.
There is no ‘hard and fast’ rule about which package you should use and new packages are emerging all the time.
The ultimate test will be empirical evidence:
does it get the job done on your data?
However, the following criteria should provide a good indication of whether a package is worth an investment of your precious time, or even installing on your computer:
* **Is it mature?** The more time a package is available, the more time it will have for obvious bugs to be ironed out. The age of a package on CRAN can be seen from its Archive page on CRAN. We can see from [cran.r\-project.org/src/contrib/Archive/ggplot2](https://cran.r-project.org/src/contrib/Archive/ggplot2/), for example, that **ggplot2** was first released on the 10th June 2007 and that it has had 37 releases.
The most recent of these at the time of writing was **ggplot2** 3\.3\.0:
reaching 1 or higher in the first digit of package versions is usually an indication from the package author that the package has reached a high level of stability.
* **Is it actively developed?** It is a good sign if packages are frequently updated. A frequently updated package will have its latest version ‘published’ recently on CRAN. The CRAN package page for **ggplot2**, for example, said `Published: 2020-03-05`, less than three months old at the time of writing.
* **Is it well documented?** This is not only an indication of how much thought, care and attention has gone into the package. It also has a direct impact on its ease of use. Using a poorly documented package can be inefficient due to the hours spent trying to work out how to use it! To check if the package is well documented, look at the help pages associated with its key functions (e.g., `?ggplot`), try the examples (e.g., `example(ggplot)`) and search for package vignettes (e.g., `vignette(package = "ggplot2")`).
* **Is it well used?** This can be seen by searching for the package name online. Most packages that have a strong user base will produce thousands of results when typed into a generic search engine such as Google’s. More specific (and potentially useful) indications of use will narrow down the search to particular users. A package widely used by the programming community will likely be visible on GitHub. At the time of writing, a search for [**ggplot2**](https://github.com/search?utf8=%E2%9C%93&q=ggplot2) on GitHub yielded over 3,000 repositories and over 1,000,000 matches in committed code!
Likewise, a package that has been adopted for use in academia will tend to be mentioned in Google Scholar (again, **ggplot2** scores extremely well in this measure, with almost 27,000 hits).
An article in [simplystats](https://simplystatistics.org/2015/11/06/how-i-decide-when-to-trust-an-r-package/) discusses this issue with reference to the proliferation of GitHub packages (those that are not available on CRAN). In this context, well\-regarded and experienced package creators and ‘indirect data’, such as amount of GitHub activity, are also highlighted as reasons to trust a package.
The websites [MRAN](https://mran.revolutionanalytics.com/packages) and [METACRAN](https://www.r-pkg.org) can help the package selection process by providing further information on each package uploaded to CRAN. [METACRAN](https://www.r-pkg.org), for example, provides metadata about R packages via a simple API and the provision of ‘badges’ to show how many downloads a particular package has per month. Returning to the Haversine example above, we could find out how many times two packages that implement the formula are downloaded each month with the following urls:
* `https://cranlogs.r-pkg.org/badges/last-month/geosphere`, downloads of **geosphere**
* `https://cranlogs.r-pkg.org/badges/last-month/geoPlot`, downloads of **geoPlot**
It is clear from the results reported above that **geosphere** is by far the more popular package, so is a sensible and mature choice for dealing with distances on the Earth’s surface.
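The same comparison can be made programmatically. The sketch below queries the download counts via the **cranlogs** package (which powers the badges above); it assumes **cranlogs** is installed.

```
# Query last month's download counts for the two candidate packages
dl = cranlogs::cran_downloads(packages = c("geosphere", "geoPlot"),
                              when = "last-month")
# Total downloads per package over the month
aggregate(count ~ package, data = dl, FUN = sum)
```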
4\.5 Publication
----------------
The final stage in a typical project workflow is publication. Although it’s the final stage to be worked on, that does not mean you should only document *after* the other stages are complete: making documentation integral to your overall workflow will make this stage much easier and more efficient.
Whether the final output is a report containing graphics produced by R, an online platform for exploring results or well\-documented code that colleagues can use to improve their workflow, starting it early is a good plan. In every case, the programming principles of reproducibility, modularity and DRY (don’t repeat yourself) will make your publications faster to write, easier to maintain and more useful to others.
Instead of attempting a comprehensive treatment of the topic, we will touch briefly on a couple of ways of documenting your work in R: dynamic reports and R packages. A wealth of online resources exists on each of these; to avoid duplication of effort, the focus is on documentation from a workflow efficiency perspective.
### 4\.5\.1 Dynamic documents with R Markdown
When writing a report using R outputs, a typical workflow has historically been to 1\) do the analysis; 2\) save the resulting graphics and record the main results outside the R project; and 3\) open a program unrelated to R, such as LibreOffice, to import and communicate the results in prose. This is inefficient; it makes updating and maintaining the outputs difficult (when the data changes, steps 1 to 3 will have to be done again) and there is an overhead involved in jumping between incompatible computing environments.
To overcome this inefficiency in the documentation of R outputs, the R Markdown framework was developed. Used in conjunction with the **knitr** package, it provides:
* the processing of code chunks (via **knitr**)
* a notebook interface for R (via RStudio)
* the ability to render output to multiple formats (via pandoc).
R Markdown documents are plain text and have file extension `.Rmd`. This framework allows for documents to be generated automatically. Furthermore, *nothing* is efficient unless you can quickly redo it. Documenting your code inside dynamic documents in this way ensures that analysis can be quickly re\-run.
This note briefly explains R Markdown for the uninitiated. R Markdown is a form of Markdown. Markdown is a pure text document format that has become a standard for software documentation. It is the default format for displaying text on GitHub. R Markdown allows the user to embed R code in a Markdown document. This is a powerful addition to Markdown, as it allows custom images, tables and even interactive visualisations to be included in your R documents. R Markdown is an efficient file format to write in because it is light\-weight, human and computer readable, and is much less verbose than HTML and LaTeX. This book was written in R Markdown.
In an R Markdown document, results are generated *on the fly* by including ‘code chunks’. Code chunks are R code preceded by `` ```{r, options} `` on the line before the R code and `` ``` `` at the end of the chunk. For example, suppose we have the following code chunk:
```
```{r eval = TRUE, echo = TRUE}
(1:5)^2
```
```
in an R Markdown document. The `eval = TRUE` in the code indicates that the code should be evaluated, while `echo = TRUE` controls whether the R code is displayed. When we compile the document, we get
```
(1:5)^2
#> [1] 1 4 9 16 25
```
R Markdown via **knitr** provides a wide range of options to customise what is displayed and evaluated. When you adapt to this workflow it is highly efficient, especially as RStudio provides a number of shortcuts that make it easy to create and modify code chunks. To create a chunk while editing a `.Rmd` file, for example, simply enter `Ctrl+Alt+I` on Windows or Linux (`Cmd+Option+I` on Mac), or select the option from the Code drop\-down menu in RStudio.
Once your document has compiled, it should appear on your screen in the file format requested. If an html file has been generated (as is the default), RStudio provides a feature that allows you to put it up online rapidly.
This is done using the [rpubs](https://rpubs.com) website, a store of a huge number of dynamic documents (which could be a good source of inspiration for your publications).
Assuming you have an RStudio account, clicking the ‘Publish’ button at the top of the html output window will instantly publish your work online, with a minimum amount of effort, enabling fast and efficient communication with many collaborators and the public.
An important advantage of dynamically documenting work this way is that when the data or analysis code changes, the results will be updated in the document automatically. This can save hours of fiddly copying and pasting of R output between different programs. Also, if your client wants pages and pages of documented output, **knitr** can provide them with a minimum amount of typing; e.g., creating slightly different versions of the same plot over and over again. From a delivery of content perspective, that is certainly an efficiency gain compared with hours of copying and pasting figures!
If your R Markdown documents include time\-consuming processing stages, a speed boost can be attained after the first build by setting `opts_chunk$set(cache = TRUE)` in the first chunk of the document. This setting was used to reduce the build times of this book, as can be seen on [GitHub](https://github.com/csgillespie/efficientR/blob/master/code/before_script.R).
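A minimal setup chunk enabling this cache is sketched below; the chunk name is illustrative, and in practice you may want to combine it with other global options.

```
```{r setup, include = FALSE}
# Cache chunk results after the first build to speed up subsequent builds
knitr::opts_chunk$set(cache = TRUE)
```
```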
Furthermore, dynamic documents written in R Markdown can compile into a range of output formats including html, pdf and Microsoft Word’s docx. There is a wealth of information on the details of dynamic report writing that is not worth replicating here. Key references are RStudio’s excellent website on R Markdown hosted at [rmarkdown.rstudio.com](https://rmarkdown.rstudio.com/), and for a more detailed account of dynamic documents with R, see Xie ([2015](#ref-xie2015dynamic)).
### 4\.5\.2 R packages
A strict approach to project management and workflow is treating your projects as R packages. This approach has advantages and limitations. The major risk with treating a project as a package is that the package is quite a strict way of organising work. Packages are suited for code intensive projects where code documentation is important. An intermediate approach is to use a ‘dummy package’ that includes a `DESCRIPTION` file in the root directory telling users of the project which packages must be installed for the code to work. This book is based on a dummy package so that we can easily keep the dependencies up\-to\-date (see the book’s [DESCRIPTION](https://github.com/csgillespie/efficientR/blob/master/DESCRIPTION) file online for an insight into how this works).
Creating packages is good practice in terms of learning to correctly document your code, store example data, and even (via vignettes) ensure reproducibility. But it can take a lot of extra time, so it should not be taken lightly. This approach to R workflow is appropriate for managing complex projects which repeatedly use the same routines that can be converted into functions. Creating project packages can provide a foundation for generalising your code for use by others; e.g., via publication on GitHub and/or CRAN. Additionally, R package development has been made much easier in recent years by the development of the **devtools** package, which is highly recommended for anyone attempting to write an R package.
A number of essential elements of R packages differentiate them from other R projects. Three of these are outlined below from an efficiency perspective.
* The [`DESCRIPTION`](http://r-pkgs.had.co.nz/description.html) file contains key information about the package, including which packages are required for the code contained in your package to work, e.g. using `Imports:`. This is efficient because it means that anyone who installs your package will automatically install the other packages that it depends on.
* The `R/` folder contains all the R code that defines your package’s functions. Placing your code in a single place encourages you to make your code modular, which greatly reduces duplication of code on large projects. Furthermore, the documentation of R packages through [Roxygen tags](http://r-pkgs.had.co.nz/man.html#man-workflow), such as `#' This function does this...`, makes it easy for others to use your work. This form of efficient documentation is facilitated by the **roxygen2** package (a minimal sketch follows this list).
* The `data/` folder contains example code for demonstrating to others how the functions work and transporting datasets that will be frequently used in your workflow. Data can be added automatically to your package project using the **devtools** package, with `devtools::use_data()`. This can increase efficiency by providing a way of distributing small to medium sized datasets and making them available when the package is loaded with the function `data("data_set_name")`.
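To illustrate the Roxygen workflow mentioned above, here is a minimal sketch of a documented function that could live in the `R/` folder. The function name and implementation are purely illustrative (they revisit the Haversine formula from earlier in this chapter); **roxygen2** turns the `#'` comments into a help page.

```
#' Haversine great-circle distance
#'
#' Computes the distance between two points on the Earth's surface.
#'
#' @param lat1,lon1,lat2,lon2 Coordinates in decimal degrees.
#' @return The distance in kilometres.
#' @export
haversine = function(lat1, lon1, lat2, lon2) {
  rad = pi / 180 # degrees to radians
  dlat = (lat2 - lat1) * rad
  dlon = (lon2 - lon1) * rad
  a = sin(dlat / 2)^2 + cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2)^2
  6371 * 2 * asin(sqrt(a)) # 6371 km: mean Earth radius
}
```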
The package **testthat** makes it easier than ever to test your R code as you go, ensuring that nothing breaks. This, combined with ‘continuous integration’ services, such as that provided by [Travis](https://travis-ci.org/), makes updating your code base as efficient and robust as possible. This, and more, is described in Cotton ([2016](#ref-cotton_testing_2016)[b](#ref-cotton_testing_2016)).
As with dynamic documents, package development is a large topic. For small ‘one\-off’ projects, the time taken in learning how to set\-up a package may not be worth the savings. However, packages provide a rigorous way of storing code, data and documentation that can greatly boost productivity in the long\-run. For more on R packages, see H. Wickham ([2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)); the online version provides all you need to know about writing R packages for free (see [r\-pkgs.had.co.nz/](http://r-pkgs.had.co.nz/)).
5 Efficient input/output
========================
This chapter explains how to efficiently read and write data in R. Input/output (I/O) is the technical term for reading and writing data: the process of getting information into a particular computer system (in this case R) and then exporting it to the ‘outside world’ again (in this case as a file format that other software can read). Data I/O will be needed on projects where data comes from, or goes to, external sources. However, the majority of R resources and documentation start with the optimistic assumption that your data has already been loaded, ignoring the fact that importing datasets into R, and exporting them to the world outside the R ecosystem, can be a time\-consuming and frustrating process. Tricky, slow or ultimately unsuccessful data I/O can cripple efficiency right at the outset of a project. Conversely, reading and writing your data efficiently will make your R projects more likely to succeed in the outside world.
The first section introduces **rio**, a ‘meta package’ for efficiently reading and writing data in a range of file formats. **rio** requires only two intuitive functions for data I/O, making it efficient to learn and use. Next we explore in more detail efficient functions for reading in files stored in common *plain text* file formats from the **readr** and **data.table** packages. Binary formats, which can dramatically reduce file sizes and read/write times, are covered next.
With the accelerating digital revolution and growth in open data, an increasing proportion of the world’s data can be downloaded from the internet. This trend is set to continue, making section [5\.5](input-output.html#download), on downloading and importing data from the web, important for ‘future\-proofing’ your I/O skills. The benchmarks in this chapter demonstrate that choice of file format and packages for data I/O can have a huge impact on computational efficiency.
Before reading in a single line of data, it is worth considering a general principle for reproducible data management: never modify raw data files. Raw data should be seen as read\-only, and contain information about its provenance. Keeping the original file name and commenting on its origin are a couple of ways to improve reproducibility, even when the data are not publicly available.
### Prerequisites
R can read data from a variety of sources. We begin by discussing the generic package **rio** that handles a wide variety of data types. Special attention is paid to CSV files, which leads to the **readr** and **data.table** packages. The relatively new package **feather** is introduced as a binary file format that has cross\-language support.
```
library("rio")
library("readr")
library("data.table")
library("feather")
```
We also use the **WDI** package to illustrate accessing online data sets:
```
library("WDI")
```
5\.1 Top 5 tips for efficient data I/O
--------------------------------------
1. If possible, keep the names of local files downloaded from the internet or copied onto your computer unchanged. This will help you trace the provenance of the data in the future.
2. R’s native file format is `.Rds`. These files can be imported and exported using `readRDS()` and `saveRDS()` for fast and space efficient data storage.
3. Use `import()` from the **rio** package to efficiently import data from a wide range of formats, avoiding the hassle of loading format\-specific libraries.
4. Use the **readr** or **data.table** equivalents of `read.table()` to efficiently import large text files.
5. Use `file.size()` and `object.size()` to keep track of the size of files and R objects and take action if they get too big (see the sketch after this list).
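Expanding on tip 5, the base functions below report sizes from both perspectives. The file path and object name are assumptions borrowed from the CO2 example used later in this chapter.

```
file.size("extdata/co2.csv") # size of the file on disk, in bytes
df_co2 = read.csv("extdata/co2.csv")
print(object.size(df_co2), units = "Kb") # memory used by the R object
```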
5\.2 Versatile data import with rio
-----------------------------------
**rio** is ‘A Swiss\-Army Knife for Data I/O’. **rio** provides easy\-to\-use and computationally efficient functions for importing and exporting tabular data in a range of file formats. As stated in the package’s [vignette](https://cran.r-project.org/web/packages/rio/vignettes/rio.html), **rio** aims to “simplify the process of importing data into R and exporting data from R.” The vignette goes on to explain how many of the functions for data I/O described in R’s [Data Import/Export manual](https://cran.r-project.org/doc/manuals/r-release/R-data.html) are out of date (for example referring to **WriteXLS** but not the more recent **readxl** package) and difficult to learn.
This is why **rio** is covered at the outset of this chapter: if you just want to get data into R, with a minimum of time learning new functions, there is a fair chance that **rio** can help, for many common file formats. At the time of writing, these include `.csv`, `.feather`, `.json`, `.dta`, `.xls`, `.xlsx` and Google Sheets (see the package’s [github page](https://github.com/leeper/rio) for up\-to\-date information). Below we illustrate the key **rio** functions of `import()` and `export()`:
```
library("rio")
# Specify a file
fname = system.file("extdata/voc_voyages.tsv", package = "efficient")
# Import the file (uses the fread function from data.table)
voyages = import(fname)
# Export the file as an Excel spreadsheet
export(voyages, "voc_voyages.xlsx")
```
There was no need to specify the optional `format` argument for data import and export functions because this is inferred by the *suffix*, in the above example `.tsv` and `.xlsx` respectively. You can override the inferred file format for both functions with the `format` argument. You could, for example, create a comma\-delimited file called `voc_voyages.xlsx` with `export(voyages, "voc_voyages.xlsx", format = "csv")`. However, this would **not** be a good idea: it is important to ensure that a file’s suffix matches its format.
To provide another example, the code chunk below downloads and imports as a data frame information about the countries of the world stored in `.json` (downloading data from the internet is covered in more detail in Section [5\.5](input-output.html#download)):
```
capitals = import("https://github.com/mledoze/countries/raw/master/countries.json")
```
The ability to import and use `.json` data is becoming increasingly common, as it is a standard output format for many APIs. The **jsonlite** and **geojsonio** packages have been developed to make this as easy as possible.
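A minimal **jsonlite** sketch, using the same countries URL as above; `fromJSON()` downloads and parses in one step, simplifying the result to a data frame where possible.

```
library("jsonlite")
# Download and parse the JSON in one step
capitals = fromJSON("https://github.com/mledoze/countries/raw/master/countries.json")
class(capitals) # typically a data.frame after simplification
```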
### Exercises
1. The final line in the code chunk above shows a neat feature of **rio** and some other packages: the output format is determined by the suffix of the file\-name, which makes for concise code. Try opening the `voc_voyages.xlsx` file with an editor such as LibreOffice Calc or Microsoft Excel to ensure that the export worked, before removing this rather inefficient file format from your system:
```
file.remove("voc_voyages.xlsx")
```
2. Try saving the `voyages` data frame into 3 other file formats of your choosing (see `vignette("rio")` for supported formats). Try opening these in external programs. Which file formats are more portable?
3. As a bonus exercise, create a simple benchmark to compare the write times for the different file formats used to complete the previous exercise. Which is fastest? Which is the most space efficient? (A possible starting point is sketched below.)
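For the bonus exercise, one possible starting point is sketched below; the file names and chosen formats are illustrative.

```
library("microbenchmark")
microbenchmark(times = 5,
  csv = export(voyages, "voyages.csv"),
  rds = export(voyages, "voyages.rds"),
  json = export(voyages, "voyages.json")
)
# Compare the resulting file sizes (in bytes)
file.size(c("voyages.csv", "voyages.rds", "voyages.json"))
```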
5\.3 Plain text formats
-----------------------
‘Plain text’ data files are encoded in a format (typically UTF\-8\) that can be read by humans and computers alike. The great thing about plain text files is their simplicity and ease of use: any programming language can read them. The most common plain text format is `.csv`, comma\-separated values, in which columns are separated by commas and rows are separated by line breaks. This is illustrated in the simple example below:
```
Person, Nationality, Country of Birth
Robin, British, England
Colin, British, Scotland
```
In general, you should never “hand\-write” a CSV file like the one above. Instead, you should use `write.csv()` or an equivalent function. The Internet Engineering Task Force’s [CSV definition](https://www.ietf.org/rfc/rfc4180.txt) facilitates sharing CSV files between tools and operating systems.
There is often more than one way to read data into R and `.csv` files are no exception. The method you choose has implications for computational efficiency. This section investigates methods for getting plain text files into R, with a focus on three approaches: base R’s plain text reading functions such as `read.csv()`; the **data.table** approach, which uses the function `fread()`; and the newer **readr** package, which provides `read_csv()` and other `read_*()` functions such as `read_tsv()`. Although these functions perform differently, they are largely cross\-compatible, as illustrated in the chunk below, which loads data on the concentration of CO2 in the atmosphere over time:
```
df_co2 = read.csv("extdata/co2.csv")
df_co2_readr = readr::read_csv("extdata/co2.csv")
#> Warning: Missing column names filled in: 'X1' [1]
#>
#> ── Column specification ────────────────────────────────────────────────────────
#> cols(
#> X1 = col_double(),
#> time = col_double(),
#> co2 = col_double()
#> )
df_co2_dt = data.table::fread("extdata/co2.csv")
```
Note that a function ‘derived from’ another in this context means that it calls another function. Functions such as `read.csv()` and `read.delim()` are in fact *wrappers* around `read.table()`. This can be seen in the source code of `read.csv()`, for example, which shows that the function is roughly equivalent to `read.table(file, header = TRUE, sep = ",")`.
Although this section is focussed on reading text files, it demonstrates the wider principle that the speed and flexibility advantages of additional read functions can be offset by the disadvantage of additional package dependencies (in terms of complexity and maintaining the code) for small datasets. The real benefits kick in on large datasets. Of course, there are some data types that *require* a certain package to load in R: the **readstata13** package, for example, was developed solely to read in `.dta` files generated by versions of Stata 13 and above.
Figure [5\.1](input-output.html#fig:5-1) demonstrates that the relative performance gains of the **data.table** and **readr** approaches increase with data size, especially for data with many rows. Below around \\(1\\) MB, `read.csv()` is actually *faster* than `read_csv()`, while `fread()` is much faster than both, although these savings are likely to be inconsequential for such small datasets.
For files beyond \\(100\\) MB in size `fread()` and `read_csv()` can be expected to be around *5 times faster* than `read.csv()`. This efficiency gain may be inconsequential for a one\-off file of \\(100\\) MB running on a fast computer (which still takes less than a minute with `read.csv()`), but could represent an important speed\-up if you frequently load large text files.
Figure 5\.1: Benchmarks of base, data.table and readr approaches for reading csv files, using the functions read.csv(), fread() and read\_csv(), respectively. The facets ranging from \\(2\\) to \\(200\\) represent the number of columns in the csv file.
When tested on a large (\\(4\\)GB) `.csv` file it was found that `fread()` and `read_csv()` were almost identical in load times and that `read.csv()` took around \\(5\\) times longer. This consumed more than \\(10\\)GB of RAM, making it unsuitable to run on many computers (see Section [8\.3](hardware.html#ram) for more on memory). Note that both **readr** and base methods can be made significantly faster by pre\-specifying the column types at the outset (see below). Further details are provided by the help in `?read.table`.
```
read.csv(file_name, colClasses = c("numeric", "numeric"))
```
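A **readr** equivalent, assuming the same file with two numeric columns, uses the `col_types` argument; the compact string `"dd"` declares two double columns.

```
readr::read_csv(file_name, col_types = "dd")
```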
In some cases with R programming there is a trade\-off between speed and robustness. This is illustrated below with reference to differences in how the **readr**, **data.table** and base R approaches handle unexpected values. Figure [5\.1](input-output.html#fig:5-1) highlights the benefit of switching to `fread()` and (eventually) to `read_csv()` as the dataset size increases. For a small (\\(1\\)MB) dataset:
`fread()` is around \\(5\\) times faster than base R.
### 5\.3\.1 Differences between `fread()` and `read_csv()`
The file `voc_voyages` was taken from a dataset on Dutch naval expeditions used with permission from the CWI Database Architectures Group. The data is described more fully at [monetdb.org](https://www.monetdb.org/Documentation/UserGuide/MonetDB-R). From this dataset we primarily use the ‘voyages’ table which lists Dutch shipping expeditions by their date of departure.
```
fname = system.file("extdata/voc_voyages.tsv", package = "efficient")
voyages_base = read.delim(fname)
```
When we run the equivalent operation using **readr**,
```
voyages_readr = readr::read_tsv(fname)
#>
#> ── Column specification ────────────────────────────────────────────────────────
#> cols(
#> .default = col_character(),
#> number = col_double(),
#> number_sup = col_logical(),
#> trip = col_double(),
#> tonnage = col_double(),
#> hired = col_logical(),
#> departure_date = col_date(format = ""),
#> cape_arrival = col_date(format = ""),
#> cape_departure = col_date(format = ""),
#> cape_call = col_logical(),
#> arrival_date = col_date(format = ""),
#> next_voyage = col_double()
#> )
#> ℹ Use `spec()` for the full column specifications.
#> Warning: 77 parsing failures.
#> row col expected actual file
#> 1023 hired 1/0/T/F/TRUE/FALSE 1664 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1025 hired 1/0/T/F/TRUE/FALSE 1664 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1030 hired 1/0/T/F/TRUE/FALSE 1664 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1034 hired 1/0/T/F/TRUE/FALSE 1664/5 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1035 hired 1/0/T/F/TRUE/FALSE 1665 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> .... ..... .................. ...... ..........................................................
#> See problems(...) for more details.
```
a warning is raised regarding row 1023 in the `hired` variable. This is because `read_*()` decides what class each variable is based on the first \\(1000\\) rows, rather than all rows, as base `read.*()` functions do. Printing the offending element
```
voyages_base$hired[1023] # a character
#> [1] "1664"
voyages_readr$hired[1023] # an NA: the text cannot be converted to logical (read_*() interpreted this column as logical)
#> [1] NA
```
Reading the file using **data.table**
```
# Verbose warnings not shown
voyages_dt = data.table::fread(fname)
```
generates 5 warning messages stating that columns 2, 4, 9, 10 and 11 were `Bumped to type character on data row ...`, with the offending rows printed in place of `...`. Instead of changing the offending values to `NA`, as **readr** does for the `built` column (9\), `fread()` automatically converts to character any column that it had initially detected as numeric. An additional feature of `fread` is that it can read in a selection of the columns, either by their index or name, using the `select` argument. This is illustrated below by reading in only half of the columns (the first 11\) from the voyages dataset and comparing the result with reading in all the columns.
```
microbenchmark(times = 5,
with_select = data.table::fread(fname, select = 1:11),
without_select = data.table::fread(fname)
)
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> with_select 11.3 11.4 11.5 11.4 11.4 12.0 5
#> without_select 15.3 15.4 15.4 15.4 15.5 15.6 5
```
To summarise, the differences between base, **readr** and **data.table** functions for reading in data go beyond code execution times. The functions `read_csv()` and `fread()` boost speed partially at the expense of robustness because they decide column classes based on a small sample of available data. The similarities and differences between the approaches are summarised for the Dutch shipping data in Table [5\.1](input-output.html#tab:colclasses).
Table 5\.1: Comparison of base, **readr** and **data.table** reading in the voyages data set.
| number | boatname | built | departure\_date | Function |
| --- | --- | --- | --- | --- |
| integer | character | character | character | base |
| numeric | character | character | Date | readr |
| integer | character | character | IDate, Date | data.table |
Table [5\.1](input-output.html#tab:colclasses) shows 4 main similarities and differences between the three types of read function:
* For uniform data such as the ‘number’ variable in Table [5\.1](input-output.html#tab:colclasses), all reading methods yield the same result (integer in this case).
* For columns that are obviously characters such as ‘boatname’, the base method results in factors (unless `stringsAsFactors` is set to `FALSE`) whereas `fread()` and `read_csv()` functions return characters.
* For columns in which the first 1000 rows are of one type but which contain anomalies, such as ‘built’ and ‘departure\_date’ in the shipping example, `fread()` coerces the result to characters.
`read_csv()` and siblings, by contrast, keep the class that is correct for the first 1000 rows and set the anomalous records to `NA`. This is illustrated in Table [5\.1](input-output.html#tab:colclasses), where `read_tsv()` produces a `numeric` class for the ‘built’ variable, ignoring the non\-numeric text in row 2841\.
* `read_*()` functions generate objects of class `tbl_df`, an extension of the `data.frame` class, as discussed in Section [6\.4](data-carpentry.html#dplyr). `fread()` generates objects of class `data.table`. These can be used as standard data frames but differ subtly in their behaviour.
An additional difference is that `read_csv()` creates data frames of class `tbl_df`, *and* `data.frame`. This makes no practical difference, unless the **tibble** package is loaded, as described in section [6\.2](data-carpentry.html#efficient-data-frames-with-tibble) in the next chapter.
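These class differences can be checked directly, as sketched below; the exact classes reported for the **readr** object vary slightly across package versions.

```
class(voyages_readr) # e.g. "tbl_df" "tbl" "data.frame" (version-dependent)
class(voyages_dt) # "data.table" "data.frame"
```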
The wider point associated with these tests is that functions that save time can also lead to additional considerations or complexities for your workflow. Taking a look at what is going on ‘under the hood’ of fast functions to increase speed, as we have done in this section, can help understand the knock\-on consequences of choosing fast functions over slower functions from base R.
### 5\.3\.2 Preprocessing text outside R
There are circumstances when datasets become too large to read directly into R.
Reading in a \\(4\\) GB text file using the functions tested above, for example, consumes all available RAM on a \\(16\\) GB machine. To overcome this limitation, external *stream processing* tools can be used to preprocess large text files.
The following command, using the Linux command line ‘shell’ (or Windows based Linux shell emulator [Cygwin](https://cygwin.com/install.html)) command `split`, for example, will break a large multi GB file into many chunks, each of which is more manageable for R:
```
split -b100m bigfile.csv
```
The result is a series of files, set to 100 MB each with the `-b100m` argument in the above code. By default these will be called `xaa`, `xab` and can be read in *one chunk at a time* (e.g. using `read.csv()`, `fread()` or `read_csv()`, described in the previous section) without crashing most modern computers.
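A sketch of how the resulting chunks might then be read one at a time in R is shown below; it assumes `split`’s default output names and notes that only the first chunk contains the header row.

```
# List the chunk files produced by split (named xaa, xab, ... by default)
chunk_files = list.files(pattern = "^xa")
for (f in chunk_files) {
  # Note: only the first chunk contains the CSV header row
  chunk = data.table::fread(f)
  # ... process or summarise each chunk here before reading the next ...
}
```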
Splitting a large file into individual chunks may allow it to be read into R.
This is not an efficient way to import large datasets, however, because it results in a non\-random sample of the data.
A more efficient, robust and scalable way to work with large datasets is via databases, covered in Section [6\.6](data-carpentry.html#working-with-databases) in the next chapter.
5\.4 Binary file formats
------------------------
There are limitations to plain text files. Even the trusty CSV format is “restricted to tabular data, lacks type\-safety, and has limited precision for numeric values” (Eddelbuettel, Stokely, and Ooms [2016](#ref-JSSv071i02)).
Once you have read\-in the raw data (e.g. from a plain text file) and tidied it (covered in the next chapter), it is common to want to save it for future use. Saving it after tidying is recommended, to reduce the chance of having to run all the data cleaning code again. We recommend saving tidied versions of large datasets in one of the binary formats covered in this section: this will decrease read/write times and file sizes, making your data more
portable.[13](#fn13)
Unlike plain text files, data stored in binary formats cannot be read by humans. This allows space\-efficient data compression but means that the files will be less language agnostic. R’s native file format, `.Rds`, for example may be difficult to read and write using external programs such as Python or LibreOffice Calc. This section provides an overview of binary file formats in R, with benchmarks to show how they compare with the plain text format `.csv` covered in the previous section.
### 5\.4\.1 Native binary formats: Rdata or Rds?
`.Rds` and `.RData` are R’s native binary file formats. These formats are optimised for speed and compression ratios. But what is the difference between them? The following code chunk demonstrates the key difference between them:
```
save(df_co2, file = "extdata/co2.RData")
saveRDS(df_co2, "extdata/co2.Rds")
load("extdata/co2.RData")
df_co2_rds = readRDS("extdata/co2.Rds")
identical(df_co2, df_co2_rds)
#> [1] TRUE
```
The first method is the most widely used. It uses the `save()` function which takes any number of R objects and writes them to a file, which must be specified by the `file =` argument. `save()` is like `save.image()`, which saves *all* the objects currently loaded in R.
The second method is slightly less used but we recommend it. Apart from being slightly more concise for saving single R objects, the `readRDS()` function is more flexible: as shown in the subsequent line, the resulting object can be assigned to any name. In this case we called it `df_co2_rds` (which we show to be identical to `df_co2`, loaded with the `load()` command) but we could have called it anything or simply printed it to the console.
Using `saveRDS()` is good practice because it forces you to specify object names. If you use `save()` without care, you could forget the names of the objects you saved and accidentally overwrite objects that already existed.
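The hazard can be sketched as follows: `load()` restores objects under their original names, so an existing object is silently clobbered, whereas `readRDS()` lets you choose the name.

```
df_co2 = "some unrelated object"
load("extdata/co2.RData") # df_co2 is silently replaced by the saved data frame
my_co2 = readRDS("extdata/co2.Rds") # the restored object gets the name you choose
```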
### 5\.4\.2 The feather file format
Feather was developed as a collaboration between R and Python developers to create a fast, light and language agnostic format for storing data frames. The code chunk below shows how it can be used to save and then re\-load the `df_co2` dataset loaded previously in both R and Python:
```
library("feather")
write_feather(df_co2, "extdata/co2.feather")
df_co2_feather = read_feather("extdata/co2.feather")
```
```
import feather
path = 'extdata/co2.feather'
df_co2_feather = feather.read_dataframe(path)
```
### 5\.4\.3 Benchmarking binary file formats
We know that binary formats are advantageous from space and read/write time perspectives, but how much so? The benchmarks in this section, based on large matrices containing random numbers, are designed to help answer this question. Figure [5\.2](input-output.html#fig:5-2) shows the *relative* efficiency gains of the feather and Rds formats, compared with base CSV. From left to right, figure [5\.2](input-output.html#fig:5-2) shows benefits in terms of file size, read times, and write times.
In terms of file size, Rds files perform the best, occupying just over a quarter of the hard disc space compared with the equivalent CSV files. The equivalent feather format also outperformed the CSV format, occupying around half the disc space.
Figure 5\.2: Comparison of the performance of binary formats for reading and writing datasets with 20 columns against the plain text CSV format. The functions used to read the files were read.csv(), readRDS() and feather::read\_feather() respectively. The functions used to write the files were write.csv(), saveRDS() and feather::write\_feather().
The results of this simple disk usage benchmark show that saving data in a compressed binary format can save space and, if your data will be shared on\-line, reduce data download time and bandwidth usage. But how does each method compare from a computational efficiency perspective? The read and write times for each file format are illustrated in the middle and right\-hand panels of Figure [5\.2](input-output.html#fig:5-2).
The results show that file size is not a reliable predictor of data read and write times. This is due to the computational overheads of compression. Although feather files occupied more disc space, they were roughly equivalent in terms of read times: the functions `read_feather()` and `readRDS()` were consistently around 10 times faster than `read.csv()`. In terms of write times, feather excels: `write_feather()` was around 10 times faster than `write.csv()`, whereas `saveRDS()` was only around 1\.2 times faster.
Note that the performance of different file formats depends on the content of the data being saved. The benchmarks above showed savings for matrices of random numbers. For real life data, the results would be quite different. The `voyages` dataset, saved as an Rds file, occupied less than a quarter the disc space as the original TSV file, whereas the file size was *larger* than the original when saved as a feather file!
### 5\.4\.4 Protocol Buffers
Google’s [Protocol Buffers](https://developers.google.com/protocol-buffers/) offers a portable, efficient and scalable solution to binary data storage. A recent package, **RProtoBuf**, provides an R interface. This approach is not covered in this book, as it is new, advanced and not (at the time of writing) widely used in the R community. The approach is discussed in detail in a [paper](https://www.jstatsoft.org/article/view/v071i02) on the subject, which also provides an excellent overview of the issues associated with different file formats (Eddelbuettel, Stokely, and Ooms [2016](#ref-JSSv071i02)).
5\.5 Getting data from the internet
-----------------------------------
The code chunk below shows how the functions
`download.file()` and `unzip()` can be used to download and unzip a dataset from the internet.
(Since R 3\.2\.3 the base function `download.file()` can be used to download from secure (`https://`) connections on any operating system.)
R can automate processes that are often performed manually, e.g. through the graphical user interface of a web browser, with potential advantages for reproducibility and programmer efficiency. The result is data stored neatly in the `data` directory ready to be imported. Note we deliberately kept the file name intact, enhancing understanding of the data’s *provenance* so future users can quickly find out where the data came from. Note also that part of the dataset is stored in the **efficient** package. Using R for basic file management can help create a reproducible workflow, as illustrated below.
```
url = "https://www.monetdb.org/sites/default/files/voc_tsvs.zip"
download.file(url, "voc_tsvs.zip") # download file
unzip("voc_tsvs.zip", exdir = "data") # unzip files
file.remove("voc_tsvs.zip") # tidy up by removing the zip file
```
This workflow applies equally to downloading and loading single files. Note that one could make the code more concise by replacing the second line with `df = read.csv(url)`. However, we recommend downloading the file to disk so that if for some reason it fails (e.g. if you would like to skip the first few lines), you don’t have to keep downloading the file over and over again. The code below downloads and loads data on atmospheric concentrations of CO2. Note that this dataset is also available from the **datasets** package.
```
url = "https://vincentarelbundock.github.io/Rdatasets/csv/datasets/co2.csv"
download.file(url, "extdata/co2.csv")
df_co2 = read_csv("extdata/co2.csv")
```
There are now many R packages to assist with the download and import of data. The organisation [rOpenSci](https://ropensci.org/) supports a number of these.
The example below illustrates this using the WDI package (not supported by rOpenSci) to access World Bank data on CO2 emissions in the transport sector:
```
library("WDI")
WDIsearch("CO2") # search for data on a topic
co2_transport = WDI(indicator = "EN.CO2.TRAN.ZS") # import data
```
There will be situations where you cannot download the data directly or when the data cannot be made available. In this case, simply providing a comment relating to the data’s origin (e.g. `# Downloaded from http://example.com`) before referring to the dataset can greatly improve the utility of the code to yourself and others.
There are a number of R packages that provide more advanced functionality than simply downloading files. The CRAN task view on [Web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) provides a comprehensive list. The two packages for interacting with web pages are **httr** and **RCurl**. The former package provides (a relatively) user\-friendly interface for executing standard HTTP methods, such as `GET` and `POST`. It also provides support for web authentication protocols and returns HTTP status codes that are essential for debugging. The **RCurl** package focuses on lower\-level support and is particularly useful for web\-based XML support or FTP operations.
5\.6 Accessing data stored in packages
--------------------------------------
Most well documented packages provide some example data for you to play with. This can help demonstrate use cases in specific domains, that uses a particular data format. The command `data(package = "package_name")` will show the datasets in a package. Datasets provided by **dplyr**, for example, can be viewed with `data(package = "dplyr")`.
Raw data (i.e. data which has not been converted into R’s native `.Rds` format) is usually located within the sub\-folder `extdata` in R (which corresponds to `inst/extdata` when developing packages. The function `system.file()` outputs file paths associated with specific packages. To see all the external files within the **readr** package, for example, one could use the following command:
```
list.files(system.file("extdata", package = "readr"))
#> [1] "challenge.csv" "epa78.txt" "example.log"
#> [4] "fwf-sample.txt" "massey-rating.txt" "mtcars.csv"
#> [7] "mtcars.csv.bz2" "mtcars.csv.zip"
```
Further, to ‘look around’ to see what files are stored in a particular package, one could type the following, taking advantage of RStudio’s intellisense file completion capabilities (using copy and paste to enter the file path):
```
system.file(package = "readr")
#> [1] "/home/robin/R/x86_64-pc-linux-gnu-library/3.3/readr"
```
Hitting `Tab` after the second command should trigger RStudio to create a miniature pop\-up box listing the files within the folder, as illustrated in figure [5\.3](input-output.html#fig:5-3).
Figure 5\.3: Discovering files in R packages using RStudio’s ‘intellisense’.
### Prerequisites
R can read data from a variety of sources. We begin by discussing the generic package **rio** that handles a wide variety of data types. Special attention is paid to CSV files, which leads to the **readr** and **data.table** packages. The relatively new package **feather** is introduced as a binary file format that has cross\-language support.
```
library("rio")
library("readr")
library("data.table")
library("feather")
```
We also use the **WDI** package to illustrate accessing online data sets.
```
library("WDI")
```
5\.1 Top 5 tips for efficient data I/O
--------------------------------------
1. If possible, keep the names of local files downloaded from the internet or copied onto your computer unchanged. This will help you trace the provenance of the data in the future.
2. R’s native file format is `.Rds`. These files can be imported and exported using `readRDS()` and `saveRDS()` for fast and space efficient data storage.
3. Use `import()` from the **rio** package to efficiently import data from a wide range of formats, avoiding the hassle of loading format\-specific libraries.
4. Use the **readr** or **data.table** equivalents of `read.table()` to efficiently import large text files.
5. Use `file.size()` and `object.size()` to keep track of the size of files and R objects and take action if they get too big.
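The final tip can be put into practice with one\-liners; a minimal sketch, assuming a file `extdata/co2.csv` exists locally:
```
file.size("extdata/co2.csv") # size of a file on disk, in bytes
object.size(mtcars) # memory used by an R object
print(object.size(mtcars), units = "Kb") # the same, in human-readable units
```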
5\.2 Versatile data import with rio
-----------------------------------
**rio** is ‘A Swiss\-Army Knife for Data I/O’. **rio** provides easy\-to\-use and computationally efficient functions for importing and exporting tabular data in a range of file formats. As stated in the package’s [vignette](https://cran.r-project.org/web/packages/rio/vignettes/rio.html), **rio** aims to “simplify the process of importing data into R and exporting data from R.” The vignette goes on to explain how many of the functions for data I/O described in R’s [Data Import/Export manual](https://cran.r-project.org/doc/manuals/r-release/R-data.html) are out of date (for example referring to **WriteXLS** but not the more recent **readxl** package) and difficult to learn.
This is why **rio** is covered at the outset of this chapter: if you just want to get data into R, with a minimum of time learning new functions, there is a fair chance that **rio** can help, for many common file formats. At the time of writing, these include `.csv`, `.feather`, `.json`, `.dta`, `.xls`, `.xlsx` and Google Sheets (see the package’s [github page](https://github.com/leeper/rio) for up\-to\-date information). Below we illustrate the key **rio** functions of `import()` and `export()`:
```
library("rio")
# Specify a file
fname = system.file("extdata/voc_voyages.tsv", package = "efficient")
# Import the file (uses the fread function from data.table)
voyages = import(fname)
# Export the file as an Excel spreadsheet
export(voyages, "voc_voyages.xlsx")
```
There was no need to specify the optional `format` argument for data import and export functions because this is inferred by the *suffix*, in the above example `.tsv` and `.xlsx` respectively. You can override the inferred file format for both functions with the `format` argument. You could, for example, create a comma\-delimited file called `voc_voyages.xlsx` with `export(voyages, "voc_voyages.xlsx", format = "csv")`. However, this would **not** be a good idea: it is important to ensure that a file’s suffix matches its format.
To provide another example, the code chunk below downloads and imports as a data frame information about the countries of the world stored in `.json` (downloading data from the internet is covered in more detail in Section [5\.5](input-output.html#download)):
```
capitals = import("https://github.com/mledoze/countries/raw/master/countries.json")
```
The ability to import and use `.json` data is becoming increasingly common as it is a standard output format for many APIs. The **jsonlite** and **geojsonio** packages have been developed to make this as easy as possible.
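As a quick illustration of how little code this requires, a minimal sketch parsing an inline (hypothetical) JSON string with **jsonlite**, assuming the package is installed:
```
jsonlite::fromJSON('{"name": "R", "born": 1993}')
#> $name
#> [1] "R"
#>
#> $born
#> [1] 1993
```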
### Exercises
1. The final line in the code chunk above shows a neat feature of **rio** and some other packages: the output format is determined by the suffix of the file\-name, which makes for concise code. Try opening the `voc_voyages.xlsx` file with an editor such as LibreOffice Calc or Microsoft Excel to ensure that the export worked, before removing this rather inefficient file format from your system:
```
file.remove("voc_voyages.xlsx")
```
2. Try saving the `voyages` data frame in 3 other file formats of your choosing (see `vignette("rio")` for supported formats). Try opening these in external programs. Which file formats are more portable?
3. As a bonus exercise, create a simple benchmark to compare the write times for the different file formats used to complete the previous exercise. Which is fastest? Which is the most space efficient?
5\.3 Plain text formats
-----------------------
‘Plain text’ data files are encoded in a format (typically UTF\-8\) that can be read by humans and computers alike. The great thing about plain text files is their simplicity and ease of use: any programming language can read a plain text file. The most common plain text format is `.csv`, comma\-separated values, in which columns are separated by commas and rows are separated by line breaks. This is illustrated in the simple example below:
```
Person, Nationality, Country of Birth
Robin, British, England
Colin, British, Scotland
```
There is often more than one way to read data into R and `.csv` files are no exception. The method you choose has implications for computational efficiency. This section investigates methods for getting plain text files into R, with a focus on three approaches: base R’s plain text reading functions such as `read.csv()`; the **data.table** approach, which uses the function `fread()`; and the newer **readr** package which provides `read_csv()` and other `read_*()` functions such as `read_tsv()`. Although these functions perform differently, they are largely cross\-compatible, as illustrated in the chunk below, which loads data on the concentration of CO2 in the atmosphere over time:
In general, you should never “hand\-write” a CSV file. Instead, you should use `write.csv()` or an equivalent function. The Internet Engineering Task Force’s [CSV definition](https://www.ietf.org/rfc/rfc4180.txt) (RFC 4180\) facilitates sharing CSV files between tools and operating systems.
```
df_co2 = read.csv("extdata/co2.csv")
df_co2_readr = readr::read_csv("extdata/co2.csv")
#> Warning: Missing column names filled in: 'X1' [1]
#>
#> ── Column specification ────────────────────────────────────────────────────────
#> cols(
#> X1 = col_double(),
#> time = col_double(),
#> co2 = col_double()
#> )
df_co2_dt = data.table::fread("extdata/co2.csv")
```
Note that a function ‘derived from’ another in this context means that it calls that function internally. Functions such as `read.csv()` and `read.delim()` are in fact *wrappers* around `read.table()`. This can be seen in the source code of `read.csv()`, for example, which shows that the function is roughly equivalent to `read.table(file, header = TRUE, sep = ",")`.
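This equivalence can be checked directly; a minimal sketch, assuming `extdata/co2.csv` exists locally (the two functions have slightly different `quote` and `comment.char` defaults, so results can differ for unusual files):
```
df1 = read.csv("extdata/co2.csv")
df2 = read.table("extdata/co2.csv", header = TRUE, sep = ",")
identical(df1, df2) # typically TRUE for a clean, well-formed CSV file
```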
Although this section is focussed on reading text files, it demonstrates the wider principle that the speed and flexibility advantages of additional read functions can be offset by the disadvantage of additional package dependencies (in terms of complexity and maintaining the code) for small datasets. The real benefits kick in on large datasets. Of course, there are some data types that *require* a certain package to load in R: the **readstata13** package, for example, was developed solely to read in `.dta` files generated by versions of Stata 13 and above.
Figure [5\.1](input-output.html#fig:5-1) demonstrates that the relative performance gains of the **data.table** and **readr** approaches increase with data size, especially for data with many rows. Below around \\(1\\) MB, `read.csv()` is actually *faster* than `read_csv()`, while `fread()` is much faster than both, although these savings are likely to be inconsequential for such small datasets.
For files beyond \\(100\\) MB in size `fread()` and `read_csv()` can be expected to be around *5 times faster* than `read.csv()`. This efficiency gain may be inconsequential for a one\-off file of \\(100\\) MB running on a fast computer (which still takes less than a minute with `read.csv()`), but could represent an important speed\-up if you frequently load large text files.
Figure 5\.1: Benchmarks of base, data.table and readr approaches for reading csv files, using the functions read.csv(), fread() and read\_csv(), respectively. The facets ranging from \\(2\\) to \\(200\\) represent the number of columns in the csv file.
When tested on a large (\\(4\\)GB) `.csv` file it was found that `fread()` and `read_csv()` were almost identical in load times and that `read.csv()` took around \\(5\\) times longer. This consumed more than \\(10\\)GB of RAM, making it unsuitable to run on many computers (see Section [8\.3](hardware.html#ram) for more on memory). Note that both **readr** and base methods can be made significantly faster by pre\-specifying the column types at the outset (see below). Further details are provided by the help in `?read.table`.
```
read.csv(file_name, colClasses = c("numeric", "numeric"))
```
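The **readr** equivalent is the `col_types` argument; a minimal sketch, assuming the placeholder `file_name` from above and columns named `time` and `co2` as in the CO2 dataset:
```
read_csv(file_name, col_types = cols(time = col_double(), co2 = col_double()))
```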
In some cases with R programming there is a trade\-off between speed and robustness. This is illustrated below with reference to differences in how the **readr**, **data.table** and base R approaches handle unexpected values. Figure [5\.1](input-output.html#fig:5-1) highlights the benefit of switching to `fread()` and (eventually) to `read_csv()` as the dataset size increases: for a small (\\(1\\) MB) dataset, `fread()` is around \\(5\\) times faster than base R.
### 5\.3\.1 Differences between `fread()` and `read_csv()`
The file `voc_voyages` was taken from a dataset on Dutch naval expeditions used with permission from the CWI Database Architectures Group. The data is described more fully at [monetdb.org](https://www.monetdb.org/Documentation/UserGuide/MonetDB-R). From this dataset we primarily use the ‘voyages’ table which lists Dutch shipping expeditions by their date of departure.
```
fname = system.file("extdata/voc_voyages.tsv", package = "efficient")
voyages_base = read.delim(fname)
```
When we run the equivalent operation using **readr**,
```
voyages_readr = readr::read_tsv(fname)
#>
#> ── Column specification ────────────────────────────────────────────────────────
#> cols(
#> .default = col_character(),
#> number = col_double(),
#> number_sup = col_logical(),
#> trip = col_double(),
#> tonnage = col_double(),
#> hired = col_logical(),
#> departure_date = col_date(format = ""),
#> cape_arrival = col_date(format = ""),
#> cape_departure = col_date(format = ""),
#> cape_call = col_logical(),
#> arrival_date = col_date(format = ""),
#> next_voyage = col_double()
#> )
#> ℹ Use `spec()` for the full column specifications.
#> Warning: 77 parsing failures.
#> row col expected actual file
#> 1023 hired 1/0/T/F/TRUE/FALSE 1664 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1025 hired 1/0/T/F/TRUE/FALSE 1664 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1030 hired 1/0/T/F/TRUE/FALSE 1664 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1034 hired 1/0/T/F/TRUE/FALSE 1664/5 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> 1035 hired 1/0/T/F/TRUE/FALSE 1665 '/home/travis/R/Library/efficient/extdata/voc_voyages.tsv'
#> .... ..... .................. ...... ..........................................................
#> See problems(...) for more details.
```
a warning is raised regarding row 1023 in the `hired` variable. This is because `read_*()` decides what class each variable is based on the first \\(1000\\) rows, rather than all rows, as base `read.*()` functions do. Printing the offending element
```
voyages_base$hired[1023] # a character
#> [1] "1664"
voyages_readr$hired[1023] # an NA: text cannot be converted to logical (i.e. read_*() interpreted this column as logical)
#> [1] NA
```
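One remedy, sketched below, is to make **readr** scan more rows before guessing column types, at the cost of a slower initial read:
```
# Base the type guesses on the first 10,000 rows instead of the default 1000
voyages_readr2 = readr::read_tsv(fname, guess_max = 10000)
```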
Reading the file using **data.table**
```
# Verbose warnings not shown
voyages_dt = data.table::fread(fname)
```
generates 5 warning messages stating that columns 2, 4, 9, 10 and 11 were `Bumped to type character on data row ...`, with the offending rows printed in place of `...`. Instead of changing the offending values to `NA`, as **readr** does for the `built` column (column 9\), `fread()` automatically converts any columns it initially treated as numeric into characters. An additional feature of `fread()` is that it can read in a selection of the columns, either by their index or name, using the `select` argument. This is illustrated below by reading in only half of the columns (the first 11\) from the voyages dataset and comparing the result with reading in all the columns:
```
microbenchmark(times = 5,
with_select = data.table::fread(fname, select = 1:11),
without_select = data.table::fread(fname)
)
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> with_select 11.3 11.4 11.5 11.4 11.4 12.0 5
#> without_select 15.3 15.4 15.4 15.4 15.5 15.6 5
```
To summarise, the differences between base, **readr** and **data.table** functions for reading in data go beyond code execution times. The functions `read_csv()` and `fread()` boost speed partially at the expense of robustness because they decide column classes based on a small sample of available data. The similarities and differences between the approaches are summarised for the Dutch shipping data in Table [5\.1](input-output.html#tab:colclasses).
Table 5\.1: Comparison of base, **readr** and **data.table** reading in the voyages data set.
| number | boatname | built | departure\_date | Function |
| --- | --- | --- | --- | --- |
| integer | character | character | character | base |
| numeric | character | character | Date | readr |
| integer | character | character | IDate, Date | data.table |
Table [5\.1](input-output.html#tab:colclasses) shows 4 main similarities and differences between the three types of read function:
* For uniform data such as the ‘number’ variable in Table [5\.1](input-output.html#tab:colclasses), all reading methods yield the same result (integer in this case).
* For columns that are obviously characters such as ‘boatname’, the base method results in factors (unless `stringsAsFactors` is set to `FALSE`) whereas `fread()` and `read_csv()` functions return characters.
* For columns in which the first 1000 rows are of one type but which contain anomalies, such as ‘built’ and ‘departure\_date’ in the shipping example, `fread()` coerces the result to characters.
`read_csv()` and siblings, by contrast, keep the class that is correct for the first 1000 rows and set the anomalous records to `NA`. This is illustrated in Table [5\.1](input-output.html#tab:colclasses), where `read_tsv()` produces a `numeric` class for the ‘built’ variable, ignoring the non\-numeric text in row 2841\.
* `read_*()` functions generate objects of class `tbl_df`, an extension of the `data.frame` class, as discussed in Section [6\.4](data-carpentry.html#dplyr). `fread()` generates objects of class `data.table`. These can be used as standard data frames but differ subtly in their behaviour.
An additional difference is that `read_csv()` creates data frames of class `tbl_df` *and* `data.frame`. This makes no practical difference, unless the **tibble** package is loaded, as described in section [6\.2](data-carpentry.html#efficient-data-frames-with-tibble) in the next chapter.
The wider point associated with these tests is that functions that save time can also lead to additional considerations or complexities for your workflow. Taking a look at what is going on ‘under the hood’ of fast functions to increase speed, as we have done in this section, can help understand the knock\-on consequences of choosing fast functions over slower functions from base R.
### 5\.3\.2 Preprocessing text outside R
There are circumstances when datasets become too large to read directly into R.
Reading in a \\(4\\) GB text file using the functions tested above, for example, consumes all available RAM on a \\(16\\) GB machine. To overcome this limitation, external *stream processing* tools can be used to preprocess large text files.
The following command, which uses the shell utility `split` (available on Windows via the Linux shell emulator [Cygwin](https://cygwin.com/install.html)), will break a large multi\-GB file into many chunks, each of which is more manageable for R:
```
split -b100m bigfile.csv
```
The result is a series of files, set to 100 MB each with the `-b100m` argument in the above code. By default these will be called `xaa`, `xab`, etc., and can be read in *one chunk at a time* (e.g. using `read.csv()`, `fread()` or `read_csv()`, described in the previous section) without crashing most modern computers.
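An alternative that stays entirely within R is **readr**’s chunked reading, which applies a callback function to each chunk in turn; a sketch, assuming a hypothetical `bigfile.csv` with a numeric `co2` column:
```
# Summarise each 10,000-row chunk without holding the full file in memory
summarise_chunk = function(chunk, pos) data.frame(pos = pos, total = sum(chunk$co2))
readr::read_csv_chunked("bigfile.csv",
  callback = readr::DataFrameCallback$new(summarise_chunk),
  chunk_size = 10000)
```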
Splitting a large file into individual chunks may allow it to be read into R.
This is not an efficient way to import large datasets, however, because it results in a non\-random sample of the data.
A more efficient, robust and scalable way to work with large datasets is via databases, covered in Section [6\.6](data-carpentry.html#working-with-databases) in the next chapter.
5\.4 Binary file formats
------------------------
There are limitations to plain text files. Even the trusty CSV format is “restricted to tabular data, lacks type\-safety, and has limited precision for numeric values” (Eddelbuettel, Stokely, and Ooms [2016](#ref-JSSv071i02)).
Once you have read in the raw data (e.g. from a plain text file) and tidied it (covered in the next chapter), it is common to want to save it for future use. Saving it after tidying is recommended, to reduce the chance of having to run all the data cleaning code again. We recommend saving tidied versions of large datasets in one of the binary formats covered in this section: this will decrease read/write times and file sizes, making your data more portable.[13](#fn13)
Unlike plain text files, data stored in binary formats cannot be read by humans. This allows space\-efficient data compression but means that the files will be less language agnostic. R’s native file format, `.Rds`, for example, may be difficult to read and write using external programs such as Python or LibreOffice Calc. This section provides an overview of binary file formats in R, with benchmarks to show how they compare with the plain text format `.csv` covered in the previous section.
### 5\.4\.1 Native binary formats: Rdata or Rds?
`.Rds` and `.RData` are R’s native binary file formats. These formats are optimised for speed and compression ratios. But what is the difference between them? The following code chunk demonstrates the key difference:
```
save(df_co2, file = "extdata/co2.RData")
saveRDS(df_co2, "extdata/co2.Rds")
load("extdata/co2.RData")
df_co2_rds = readRDS("extdata/co2.Rds")
identical(df_co2, df_co2_rds)
#> [1] TRUE
```
The first method is the most widely used. It uses the `save()` function, which takes any number of R objects and writes them to a file specified by the `file =` argument. `save()` is related to `save.image()`, which saves *all* the objects currently loaded in R.
The second method is slightly less used but we recommend it. Apart from being slightly more concise for saving single R objects, the `readRDS()` function is more flexible: as shown in the subsequent line, the resulting object can be assigned to any name. In this case we called it `df_co2_rds` (which we show to be identical to `df_co2`, loaded with the `load()` command) but we could have called it anything or simply printed it to the console.
Using `saveRDS()` is good practice because it forces you to specify object names. If you use `save()` without care, you could forget the names of the objects you saved and accidentally overwrite objects that already existed.
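The pitfall is easy to demonstrate; a minimal sketch:
```
x = 1:5
save(x, file = "x.RData")
x = "important new value"
load("x.RData") # silently overwrites x with the saved 1:5
x
#> [1] 1 2 3 4 5
```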
### 5\.4\.2 The feather file format
Feather was developed as a collaboration between R and Python developers to create a fast, light and language agnostic format for storing data frames. The code chunks below show how it can be used to save and then re\-load the `df_co2` dataset loaded previously, in both R and Python:
```
library("feather")
write_feather(df_co2, "extdata/co2.feather")
df_co2_feather = read_feather("extdata/co2.feather")
```
```
import feather
path = 'extdata/co2.feather'
df_co2_feather = feather.read_dataframe(path)
```
### 5\.4\.3 Benchmarking binary file formats
We know that binary formats are advantageous from space and read/write time perspectives, but how much so? The benchmarks in this section, based on large matrices containing random numbers, are designed to help answer this question. Figure [5\.2](input-output.html#fig:5-2) shows the *relative* efficiency gains of the feather and Rds formats, compared with base CSV. From left to right, figure [5\.2](input-output.html#fig:5-2) shows benefits in terms of file size, read times, and write times.
In terms of file size, Rds files perform the best, occupying just over a quarter of the hard disc space compared with the equivalent CSV files. The equivalent feather format also outperformed the CSV format, occupying around half the disc space.
Figure 5\.2: Comparison of the performance of binary formats for reading and writing datasets with 20 columns, relative to the plain text format CSV. The functions used to read the files were read.csv(), readRDS() and feather::read\_feather(), respectively. The functions used to write the files were write.csv(), saveRDS() and feather::write\_feather().
The results of this simple disk usage benchmark show that saving data in a compressed binary format can save space and, if your data will be shared on\-line, reduce data download time and bandwidth usage. But how does each method compare from a computational efficiency perspective? The read and write times for each file format are illustrated in the middle and right hand panels of Figure [5\.2](input-output.html#fig:5-2).
The results show that file size is not a reliable predictor of data read and write times. This is due to the computational overheads of compression. Although feather files occupied more disc space than their Rds equivalents, they were roughly equivalent in terms of read times: the functions `read_feather()` and `readRDS()` were consistently around 10 times faster than `read.csv()`. In terms of write times, feather excels: `write_feather()` was around 10 times faster than `write.csv()`, whereas `saveRDS()` was only around 1\.2 times faster.
Note that the performance of different file formats depends on the content of the data being saved. The benchmarks above showed savings for matrices of random numbers. For real life data, the results would be quite different. The `voyages` dataset, saved as an Rds file, occupied less than a quarter of the disc space of the original TSV file, whereas the file size was *larger* than the original when saved as a feather file!
### 5\.4\.4 Protocol Buffers
Google’s [Protocol Buffers](https://developers.google.com/protocol-buffers/) offers a portable, efficient and scalable solution to binary data storage. A recent package, **RProtoBuf**, provides an R interface. This approach is not covered in this book, as it is new, advanced and not (at the time of writing) widely used in the R community. The approach is discussed in detail in a [paper](https://www.jstatsoft.org/article/view/v071i02) on the subject, which also provides an excellent overview of the issues associated with different file formats (Eddelbuettel, Stokely, and Ooms [2016](#ref-JSSv071i02)).
5\.5 Getting data from the internet
-----------------------------------
The code chunk below shows how the functions `download.file()` and `unzip()` can be used to download and unzip a dataset from the internet. (Since R 3\.2\.3 the base function `download.file()` can be used to download from secure (`https://`) connections on any operating system.)
R can automate processes that are often performed manually, e.g. through the graphical user interface of a web browser, with potential advantages for reproducibility and programmer efficiency. The result is data stored neatly in the `data` directory ready to be imported. Note that we deliberately kept the file name intact, enhancing understanding of the data’s *provenance* so future users can quickly find out where the data came from. Note also that part of the dataset is stored in the **efficient** package. Using R for basic file management can help create a reproducible workflow, as illustrated below.
```
url = "https://www.monetdb.org/sites/default/files/voc_tsvs.zip"
download.file(url, "voc_tsvs.zip") # download file
unzip("voc_tsvs.zip", exdir = "data") # unzip files
file.remove("voc_tsvs.zip") # tidy up by removing the zip file
```
This workflow applies equally to downloading and loading single files. Note that one could make the code more concise by replacing the second line with `df = read.csv(url)`. However, we recommend downloading the file to disk so that if the import fails (e.g. because you need to skip the first few lines), you don’t have to keep downloading the file over and over again. The code below downloads and loads data on atmospheric concentrations of CO2. Note that this dataset is also available from the **datasets** package.
```
url = "https://vincentarelbundock.github.io/Rdatasets/csv/datasets/co2.csv"
download.file(url, "extdata/co2.csv")
df_co2 = read_csv("extdata/co2.csv")
```
There are now many R packages to assist with the download and import of data. The organisation [rOpenSci](https://ropensci.org/) supports a number of these.
The example below illustrates this using the WDI package (not supported by rOpenSci) to access World Bank data on CO2 emissions in the transport sector:
```
library("WDI")
WDIsearch("CO2") # search for data on a topic
co2_transport = WDI(indicator = "EN.CO2.TRAN.ZS") # import data
```
There will be situations where you cannot download the data directly or when the data cannot be made available. In this case, simply providing a comment relating to the data’s origin (e.g. `# Downloaded from http://example.com`) before referring to the dataset can greatly improve the utility of the code to yourself and others.
There are a number of R packages that provide more advanced functionality than simply downloading files. The CRAN task view on [Web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) provides a comprehensive list. Two of the most important packages for interacting with web pages are **httr** and **RCurl**. The former provides a (relatively) user\-friendly interface for executing standard HTTP methods, such as `GET` and `POST`. It also provides support for web authentication protocols and returns HTTP status codes that are essential for debugging. The **RCurl** package focuses on lower\-level support and is particularly useful for web\-based XML support or FTP operations.
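A minimal sketch of an **httr** request against the httpbin.org test service:
```
library("httr")
response = GET("https://httpbin.org/get")
status_code(response) # 200 indicates success
content(response, as = "text", encoding = "UTF-8") # the raw response body
```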
5\.6 Accessing data stored in packages
--------------------------------------
Most well documented packages provide some example data for you to play with. This can help demonstrate use cases in specific domains that use a particular data format. The command `data(package = "package_name")` will show the datasets in a package. Datasets provided by **dplyr**, for example, can be viewed with `data(package = "dplyr")`.
Raw data (i.e. data which has not been converted into R’s native `.Rds` format) is usually located within the sub\-folder `extdata` (which corresponds to `inst/extdata` when developing packages). The function `system.file()` outputs file paths associated with specific packages. To see all the external files within the **readr** package, for example, one could use the following command:
```
list.files(system.file("extdata", package = "readr"))
#> [1] "challenge.csv" "epa78.txt" "example.log"
#> [4] "fwf-sample.txt" "massey-rating.txt" "mtcars.csv"
#> [7] "mtcars.csv.bz2" "mtcars.csv.zip"
```
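These example files can be read directly; a minimal sketch using one of the files listed above:
```
fpath = system.file("extdata", "mtcars.csv", package = "readr")
mtcars_example = readr::read_csv(fpath)
```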
Further, to ‘look around’ to see what files are stored in a particular package, one could type the following, taking advantage of RStudio’s intellisense file completion capabilities (using copy and paste to enter the file path):
```
system.file(package = "readr")
#> [1] "/home/robin/R/x86_64-pc-linux-gnu-library/3.3/readr"
```
Hitting `Tab` after the second command should trigger RStudio to create a miniature pop\-up box listing the files within the folder, as illustrated in figure [5\.3](input-output.html#fig:5-3).
Figure 5\.3: Discovering files in R packages using RStudio’s ‘intellisense’.
6 Efficient data carpentry
==========================
There are many words for data processing. You can **clean**, **hack**, **manipulate**, **munge**, **refine** and **tidy** your dataset, ready for the next stage, typically modelling and visualisation. Each word says something about perceptions towards the process: data processing is often seen as *dirty work*, an unpleasant necessity that must be endured before the *real*, *fun* and *important* work begins. This perception is wrong. Getting your data ‘ship shape’ is a respectable and in some cases vital skill. For this reason we use the more admirable term *data carpentry*.
This metaphor is not accidental. Carpentry is the process of taking rough pieces of wood and working with care, diligence and precision to create a finished product. A carpenter does not hack at the wood at random. He or she will inspect the raw material and select the right tool for the job. In the same way *data carpentry* is the process of taking rough, raw and to some extent randomly arranged input data and creating neatly organised and *tidy* data. Learning the skill of data carpentry early will yield benefits for years to come. “Give me six hours to chop down a tree and I will spend the first four sharpening the axe” as the saying goes.
Data processing is a critical stage in any project involving datasets from external sources, i.e. most real world applications. In the same way that *technical debt*, discussed in Chapter [5](input-output.html#input-output), can cripple your workflow, working with messy data can lead to project management hell.
Fortunately, done efficiently, at the outset of your project (rather than half way through, when it may be too late), and using appropriate tools, this data processing stage can be highly rewarding. More importantly from an efficiency perspective, working with clean data will be beneficial for every subsequent stage of your R project. So, for data intensive applications, this could be the most important chapter of the book. In it we cover the following topics:
* Tidying data with **tidyr**
* Processing data with **dplyr**
* Working with databases
* Data processing with **data.table**
### Prerequisites
This chapter relies on a number of packages for data cleaning and processing \- check that they are installed on your computer and load them with:
```
library("tibble")
library("tidyr")
library("stringr")
library("readr")
library("dplyr")
library("data.table")
```
**RSQLite** and **ggmap** are also used in a couple of examples, although they are not central to the chapter’s content.
6\.1 Top 5 tips for efficient data carpentry
--------------------------------------------
1. Time spent preparing your data at the beginning can save hours of frustration in the long run.
2. ‘Tidy data’ provides a concept for organising data and the package **tidyr** provides some functions for this work.
3. The `tbl_df` class defined by the **tibble** package makes datasets efficient to print and easy to work with.
4. **dplyr** provides fast and intuitive data processing functions; **data.table** has unmatched speed for some data processing applications.
5. The `%>%` ‘pipe’ operator can help clarify complex data processing workflows.
6\.2 Efficient data frames with tibble
--------------------------------------
**tibble** is a package that defines a new data frame class for R, the `tbl_df`. These ‘tibble diffs’ (as their inventor [suggests](https://github.com/hadley/tibble) they should be pronounced) are like the base class `data.frame`, but with more user friendly printing, subsetting, and factor handling.
A tibble data frame is an S3 object with three classes, `tbl_df`, `tbl`, and `data.frame`. Since the object also has the `data.frame` class, if a `tbl_df` or `tbl` method isn’t available, the object will be passed on to the appropriate `data.frame` method.
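This class hierarchy can be confirmed directly; a minimal check:
```
tb = tibble::tibble(x = 1:3)
class(tb)
#> [1] "tbl_df" "tbl" "data.frame"
```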
To create a tibble data frame, we use the `tibble()` function:
```
library("tibble")
tibble(x = 1:3, y = c("A", "B", "C"))
#> # A tibble: 3 x 2
#> x y
#> <int> <chr>
#> 1 1 A
#> 2 2 B
#> 3 3 C
```
The example above illustrates the main differences between the **tibble** and base R approach to data frames:
* When printed, the tibble diff reports the class of each variable. `data.frame` objects do not.
* Character vectors are not coerced into factors when they are incorporated into a `tbl_df`, as can be seen by the `<chr>` heading between the variable name and the second column. By contrast, `data.frame()` coerces characters into factors which can cause problems further down the line.
* When printing a tibble diff to screen, only the first ten rows are displayed. The number of columns printed depends on the window size.
Other differences can be found in the associated help page \- `help("tibble")`.
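The second difference listed above can be verified directly; a minimal sketch (note that since R 4\.0\.0 `data.frame()` defaults to `stringsAsFactors = FALSE`, so the base behaviour described here applies to older versions of R):
```
df = data.frame(x = c("A", "B"), stringsAsFactors = TRUE) # the old base default
class(df$x)
#> [1] "factor"
tb = tibble(x = c("A", "B")) # characters are never coerced
class(tb$x)
#> [1] "character"
```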
You can create a tibble data frame row\-by\-row using the `tribble()` function, as illustrated below.
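A minimal sketch of row\-wise creation with `tribble()`:
```
tribble(
  ~x, ~y,
  1, "A",
  2, "B"
)
#> # A tibble: 2 x 2
#> x y
#> <dbl> <chr>
#> 1 1 A
#> 2 2 B
```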
#### Exercise
Create the following data frame
```
df_base = data.frame(colA = "A")
```
Try and guess the output of the following commands
```
print(df_base)
df_base$colA
df_base$col
df_base$colB
```
Now create a tibble data frame and repeat the above commands.
6\.3 Tidying data with tidyr and regular expressions
----------------------------------------------------
A key skill in data analysis is understanding the structure of datasets and being able to ‘reshape’ them. This is important from a workflow efficiency perspective: more than half of a data analyst’s time can be spent re\-formatting datasets (H. Wickham [2014](#ref-Wickham_2014)[b](#ref-Wickham_2014)), so getting it into a suitable form early could save hours in the future. Converting data into a ‘tidy’ form is also advantageous from a computational efficiency perspective: it is usually faster to run analysis and plotting commands on tidy data.
Data tidying includes data cleaning and data reshaping. Data cleaning is the process of re\-formatting and labelling messy data. Packages including **stringi** and **stringr** can help update messy character strings using regular expressions; **assertive** and **assertr** packages can perform diagnostic checks for data integrity at the outset of a data analysis project. A common data cleaning task is the conversion of non\-standard text strings into date formats as described in the **lubridate** vignette (see `vignette("lubridate")`). Tidying is a broader concept, however, and also includes re\-shaping data so that it is in a form more conducive to data analysis and modelling.
The process of reshaping is illustrated by Tables [6\.1](data-carpentry.html#tab:tpew) and [6\.2](data-carpentry.html#tab:tpewt), provided by H. Wickham ([2014](#ref-Wickham_2014)[b](#ref-Wickham_2014)) and loaded using the code below:
```
library("efficient")
data(pew) # see ?pew - dataset from the efficient package
pew[1:3, 1:4] # take a look at the data
#> # A tibble: 3 x 4
#> religion `<$10k` `$10--20k` `$20--30k`
#> <chr> <int> <int> <int>
#> 1 Agnostic 27 34 60
#> 2 Atheist 12 27 37
#> 3 Buddhist 27 21 30
```
Tables [6\.1](data-carpentry.html#tab:tpew) and [6\.2](data-carpentry.html#tab:tpewt) show a subset of the ‘wide’ `pew` and ‘long’ (tidy) `pewt` datasets, respectively. They have different dimensions, but they contain precisely the same information. Column names in the ‘wide’ form in Table [6\.1](data-carpentry.html#tab:tpew) became a new variable in the ‘long’ form in Table [6\.2](data-carpentry.html#tab:tpewt). According to the concept of ‘tidy data’, the long form is correct. Note that ‘correct’ here is used in the context of data analysis and graphical visualisation. Because R is a vector\-based language, tidy data also has efficiency advantages: it’s often faster to operate on few long columns than many short ones. Furthermore, the powerful and efficient packages **dplyr** and **ggplot2** were designed around tidy data. Wide data is common, however; it can be space efficient and is well suited to presentation in summary tables, so it’s useful to be able to transfer between wide (or otherwise ‘untidy’) and tidy formats.
Tidy data has the following characteristics (H. Wickham [2014](#ref-Wickham_2014)[b](#ref-Wickham_2014)):
1. Each variable forms a column.
2. Each observation forms a row.
3. Each type of observational unit forms a table.
Because there is only one observational unit in the example (religions), it can be described in a single table.
Large and complex datasets are usually represented by multiple tables, with unique identifiers or ‘keys’ to join them together (Codd [1979](#ref-Codd1979)).
Two common operations facilitated by **tidyr** are *gathering* and *splitting* columns.
### 6\.3\.1 Make wide tables long with `pivot_longer()`
Pivoting longer means making ‘wide’ tables ‘long’, by converting column names to a new variable. This is done with the function `pivot_longer()` (the inverse of which is `pivot_wider()`). The process is illustrated in Tables [6\.1](data-carpentry.html#tab:tpew) and [6\.2](data-carpentry.html#tab:tpewt) respectively.
The code that performs this operation is provided in the code block below.
This converts a table with 18 rows and 10 columns into a tidy dataset with 162 rows and 3 columns (compare the output with the output of `pew`, shown above):
```
dim(pew)
#> [1] 18 10
pewt = pivot_longer(data = pew, -religion, names_to = "income", values_to = "count")
dim(pewt)
#> [1] 162 3
pewt[c(1:3, 50), ]
#> # A tibble: 4 x 3
#> religion income count
#> <chr> <chr> <int>
#> 1 Agnostic <$10k 27
#> 2 Agnostic $10--20k 34
#> 3 Agnostic $20--30k 60
#> 4 Evangelical Protestant Churches $40--50k 881
```
The above code demonstrates the three arguments that `pivot_longer()` requires:
1. `data`, a data frame in which column names will become row values.
2. `names_to`, the name of the categorical variable into which the column names in the original dataset are converted.
3. `values_to`, the name of the column into which the cell values are placed.
As with other functions in the ‘tidyverse’, the columns to pivot are specified using bare names rather than character strings, while the `names_to` and `values_to` arguments are supplied as character strings chosen by the user, which have no relation to the existing data. Furthermore, an additional argument, set as `-religion`, was used to remove the religion variable from the pivoting, ensuring that the values in this column remain as the first column in the output. If no `-religion` argument were specified, all column names would be used, meaning the results would simply report all 180 column/value pairs resulting from the input dataset with 10 columns by 18 rows. The chunk below shows that when `names_to` and `values_to` are omitted, the default column names `name` and `value` are used:
```
pivot_longer(pew, -religion)
#> # A tibble: 162 x 3
#> religion name value
#> <chr> <chr> <int>
#> 1 Agnostic <$10k 27
#> 2 Agnostic $10--20k 34
#> 3 Agnostic $20--30k 60
#> 4 Agnostic $30--40k 81
#> # … with 158 more rows
```
Table 6\.1: First 3 rows of the aggregated ‘pew’ dataset from Wickham (2014a) in an ‘untidy’ form.
| religion | \<$10k | $10–20k | $20–30k |
| --- | --- | --- | --- |
| Agnostic | 27 | 34 | 60 |
| Atheist | 12 | 27 | 37 |
| Buddhist | 27 | 21 | 30 |
Table 6\.2: Long form of the Pew dataset represented above showing the minimum values for annual incomes (includes part time work).
| religion | name | value |
| --- | --- | --- |
| Agnostic | \<$10k | 27 |
| Agnostic | $10–20k | 34 |
| Agnostic | $20–30k | 60 |
| Atheist | \<$10k | 12 |
| Atheist | $10–20k | 27 |
| Atheist | $20–30k | 37 |
| Buddhist | \<$10k | 27 |
| Buddhist | $10–20k | 21 |
| Buddhist | $20–30k | 30 |
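Since `pivot_wider()` is the inverse of `pivot_longer()`, the original wide table can be recovered from the tidy `pewt` object created above. A minimal sketch:

```
pew_wide = pivot_wider(pewt, names_from = income, values_from = count)
dim(pew_wide) # back to the original 18 rows and 10 columns
#> [1] 18 10
```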
### 6\.3\.2 Split joint variables with `separate()`
Splitting means taking a variable that is really two variables combined and creating two separate columns from it. A classic example is age\-sex variables (e.g. `m0-10` and `f0-10` to represent males and females in the 0 to 10 age band). Splitting such variables can be done with the `separate()` function, as illustrated in Tables [6\.3](data-carpentry.html#tab:to-separate) and [6\.4](data-carpentry.html#tab:separated) and in the code chunk below. See `?separate` for more information on this function.
```
agesex = c("m0-10", "f0-10") # create compound variable
n = c(3, 5) # create a value for each observation
agesex_df = tibble(agesex, n) # create a data frame
separate(agesex_df, agesex, c("sex", "age"), sep = 1)
#> # A tibble: 2 x 3
#> sex age n
#> <chr> <chr> <dbl>
#> 1 m 0-10 3
#> 2 f 0-10 5
```
Table 6\.3: Joined age and sex variables in one column
| agesex | n |
| --- | --- |
| m0\-10 | 3 |
| f0\-10 | 5 |
Table 6\.4: Age and sex variables separated by the function `separate`.
| sex | age | n |
| --- | --- | --- |
| m | 0\-10 | 3 |
| f | 0\-10 | 5 |
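**tidyr** also provides `unite()`, the inverse of `separate()`, which pastes multiple columns back into one. A minimal sketch, reversing the operation above:

```
agesex_separated = separate(agesex_df, agesex, c("sex", "age"), sep = 1)
# paste the sex and age columns back together, with no separator
unite(agesex_separated, "agesex", sex, age, sep = "")
#> # A tibble: 2 x 2
#>   agesex     n
#>   <chr>  <dbl>
#> 1 m0-10      3
#> 2 f0-10      5
```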
### 6\.3\.3 Other tidyr functions
There are other tidying operations that **tidyr** can perform, as described in the package’s vignette (`vignette("tidy-data")`).
The wider issue of manipulation is a large topic with major potential implications for efficiency (Spector [2008](#ref-Spector_2008)) and this section only covers some of the key operations. More important is understanding the principles behind converting messy data into standard output forms.
These same principles can also be applied to the representation of model results. The **broom** package provides a standard output format for model results, easing interpretation (see [the broom vignette](https://cran.r-project.org/web/packages/broom/vignettes/broom.html)). The function `broom::tidy()` can be applied to a wide range of model objects and return the model’s output in a standardized data frame output.
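As a minimal sketch of this workflow, using a simple linear model of the `cars` dataset (any model object supported by **broom** could be substituted):

```
library("broom")
fit = lm(dist ~ speed, data = cars) # a simple linear model
# tidy() returns the coefficients as a data frame, with columns
# term, estimate, std.error, statistic and p.value
tidy(fit)
```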
Usually it is more efficient to use the non\-standard evaluation versions of variable names, as these can be auto\-completed by RStudio. In some cases you may want to use standard evaluation and refer to variable names using quote marks. To do this, the suffix `_` can be added to **dplyr** and **tidyr** function names to allow the use of standard evaluation. Thus the standard evaluation version of `separate(agesex_df, agesex, c("sex", "age"), 1)` is `separate_(agesex_df, "agesex", c("sex", "age"), 1)`. Note that these underscore\-suffixed versions are deprecated in current releases of **dplyr** and **tidyr**, in favour of the ‘tidy evaluation’ framework described in Section 6\.4\.6 below.
### 6\.3\.4 Regular expressions
Regular expressions (commonly known as regex) are a language for describing and manipulating text strings. There are entire books on the subject, and several good tutorials on regex in R (e.g. Sanchez [2013](#ref-sanchez_handling_2013)), so we’ll just scratch the surface of the topic and provide a taster of what is possible. Regex is a deep topic, but knowing the basics can save a huge amount of time from a data tidying perspective, by automating the cleaning of messy strings.
In this section we teach both **stringr** and base R ways of doing pattern matching. The former provides easy\-to\-remember function names and consistency. The latter is useful to know as you’ll find lots of base R regex code in other people’s code, as **stringr** is relatively new and not installed by default. The foundational regex operation is to detect whether or not a particular text string exists in an element, which is done with `grepl()` in base R and `str_detect()` in **stringr**:
```
library("stringr")
x = c("Hi I'm Robin.", "DoB 1985")
grepl(pattern = "9", x = x)
#> [1] FALSE TRUE
str_detect(string = x, pattern = "9")
#> [1] FALSE TRUE
```
Note: **stringr** does not include a direct replacement for `grep()`. You can use `which(str_detect())` instead.
Notice that `str_detect()` begins with `str_`: all **stringr** functions do. This can be efficient because if you want to do some regex work, you just need to type `str_` and then hit Tab to see a list of all the options. The various base R regex function names, by contrast, are harder to remember, including `regmatches()`, `strsplit()` and `gsub()`. The **stringr** equivalents have more intuitive names that relate to the intention of the functions: `str_match_all()`, `str_split()` and `str_replace_all()`, respectively.
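A quick sketch of two of these pairs in action, reusing the `x` object defined above:

```
# str_replace_all() is the stringr equivalent of gsub():
gsub(pattern = "[0-9]", replacement = "X", x = x)
#> [1] "Hi I'm Robin." "DoB XXXX"
str_replace_all(string = x, pattern = "[0-9]", replacement = "X")
#> [1] "Hi I'm Robin." "DoB XXXX"
# str_split() is the stringr equivalent of strsplit():
str_split(string = x, pattern = " ")
```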
There is much else to say on the topic but rather than repeat what has been said elsewhere, we feel it is more efficient to direct the interested reader towards existing excellent resources for learning regex in R. We recommend reading, in order:
* The [Strings chapter](http://r4ds.had.co.nz/strings.html) of Grolemund and Wickham ([2016](#ref-grolemund_r_2016)).
* The **stringr** vignette (`vignette("stringr")`).
* A detailed tutorial on regex in base R (Sanchez [2013](#ref-sanchez_handling_2013)).
* For more advanced topics, reading the documentation of and [online articles](http://www.rexamine.com/blog/) about the **stringi** package, on which **stringr** depends.
#### Exercises
1. What are the three criteria of tidy data?
2. Load and look at subsets of these datasets. The first is the `pew` datasets we’ve been using already. The second reports the points that define, roughly, the geographical boundaries of different London boroughs. What is ‘untidy’ about each?
```
head(pew, 10)
#> # A tibble: 10 x 10
#> religion `<$10k` `$10--20k` `$20--30k` `$30--40k` `$40--50k` `$50--75k`
#> <chr> <int> <int> <int> <int> <int> <int>
#> 1 Agnostic 27 34 60 81 76 137
#> 2 Atheist 12 27 37 52 35 70
#> 3 Buddhist 27 21 30 34 33 58
#> 4 Catholic 418 617 732 670 638 1116
#> # … with 6 more rows, and 3 more variables: $75--100k <int>, $100--150k <int>,
#> # >150k <int>
data(lnd_geo_df)
head(lnd_geo_df, 10)
#> name_date population x y
#> 1 Bromley-2001 295535 544362 172379
#> 2 Bromley-2001 295535 549546 169911
#> 3 Bromley-2001 295535 539596 160796
#> 4 Bromley-2001 295535 533693 170730
#> 5 Bromley-2001 295535 533718 170814
#> 6 Bromley-2001 295535 534004 171442
#> 7 Bromley-2001 295535 541105 173356
#> 8 Bromley-2001 295535 544362 172379
#> 9 Richmond upon Thames-2001 172330 523605 176321
#> 10 Richmond upon Thames-2001 172330 521455 172362
```
3. Convert each of the above datasets into tidy form.
4. Consider the following vector of phone numbers and fruits from Wickham ([2010](#ref-wickham2010stringr)):
```
strings = c(" 219 733 8965", "329-293-8753 ", "banana", "595 794 7569",
"387 287 6718", "apple", "233.398.9187 ", "482 952 3315", "239 923 8115",
"842 566 4692", "Work: 579-499-7527", "$1000", "Home: 543.355.3679")
```
Write expressions in **stringr** and base R that return:
* A logical vector reporting whether or not each string contains a number.
* Complete words only, without extraneous non\-letter characters.
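For the first of these tasks, one possible approach (among several valid regexes) is sketched below; the second is left for you to solve:

```
# Detect strings containing at least one digit, in stringr and base R
str_detect(string = strings, pattern = "[0-9]")
grepl(pattern = "[0-9]", x = strings)
```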
6\.4 Efficient data processing with dplyr
-----------------------------------------
After tidying your data, the next stage is generally data processing. This includes the creation of new data, for example a new column that is some function of existing columns, and data analysis, the process of asking directed questions of the data and exporting the results in a user\-readable form.
Following the advice in Section [4\.4](workflow.html#package-selection), we have carefully selected an appropriate package for these tasks: **dplyr**, which roughly means ‘data frame pliers’. **dplyr** has a number of advantages over the base R and **data.table** approaches to data processing:
* **dplyr** is fast to run (due to its C\+\+ backend) and intuitive to type
* **dplyr** works well with tidy data, as described above
* **dplyr** works well with databases, providing efficiency gains on large datasets
Furthermore, **dplyr** is efficient to *learn* (see Chapter [10](learning.html#learning)). It has a small number of intuitively named functions, or ‘verbs’. These were partly inspired by SQL, one of the longest established languages for data analysis, which combines multiple simple functions (such as `SELECT` and `WHERE`, roughly analogous to `dplyr::select()` and `dplyr::filter()`) to create powerful analysis workflows. Likewise, **dplyr** functions were designed to be used together to solve a wide range of data processing challenges (see Table [6\.5](data-carpentry.html#tab:verbs)).
Table 6\.5: dplyr verb functions.
| dplyr function(s) | Description | Base R functions |
| --- | --- | --- |
| filter(), slice() | Subset rows by attribute (filter) or position (slice) | subset(), \[ |
| arrange() | Return data ordered by variable(s) | order() |
| select() | Subset columns | subset(), \[, \[\[ |
| rename() | Rename columns | colnames() |
| distinct() | Return unique rows | !duplicated() |
| mutate() | Create new variables (transmute drops existing variables) | transform(), \[\[ |
| summarise() | Collapse data into a single row | aggregate(), tapply() |
| sample\_n() | Return a sample of the data | sample() |
Unlike the base R analogues, **dplyr**’s data processing functions work in a consistent way. Each function takes a data frame object as its first argument and results in another data frame. Variables can be called directly without using the `$` operator. **dplyr** was designed to be used with the ‘pipe’ operator `%>%` provided by the **magrittr** package, allowing each data processing stage to be represented as a new line. This is illustrated in the code chunk below, which loads a tidy country level dataset of greenhouse gas emissions from the **efficient** package, and then identifies the countries with the greatest absolute growth in emissions from 1971 to 2012:
```
library("dplyr")
data("ghg_ems", package = "efficient")
top_table =
ghg_ems %>%
filter(!grepl("World|Europe", Country)) %>%
group_by(Country) %>%
summarise(Mean = mean(Transportation),
Growth = diff(range(Transportation))) %>%
top_n(3, Growth) %>%
arrange(desc(Growth))
```
The results, illustrated in Table [6\.6](data-carpentry.html#tab:speed), show that the USA has the highest growth and average emissions from the transport sector, followed closely by China.
The aim of this code chunk is not for you to read and understand it immediately: it is to provide a taster of **dplyr**’s unique syntax, which is described in more detail throughout this section.
Table 6\.6: The top 3 countries in terms of average CO2 emissions from transport since 1971, and growth in transport emissions over that period (MTCO2e/yr).
| Country | Mean | Growth |
| --- | --- | --- |
| United States | 1462 | 709 |
| China | 214 | 656 |
| India | 85 | 170 |
Building on the ‘learning by doing’ ethic, the remainder of this section works through these functions to process and begin to analyse a dataset on economic equality provided by the World Bank. The input dataset can be loaded as follows:
```
# Load global inequality data
data(wb_ineq, package = "efficient")
```
**dplyr** is a large package and can be seen as a language in its own right. Following the ‘walk before you run’ principle, we’ll start simple, by filtering and aggregating rows.
### 6\.4\.1 Renaming columns
Renaming data columns is a common task that can make writing code faster by using short, intuitive names. The **dplyr** function `rename()` makes this easy.
Note in this code block that the variable name is surrounded by backticks.
This allows R to refer to column names that are non\-standard (here, one containing a space).
Note also the syntax:
`rename()` takes the data frame as its first argument and then renames variables by specifying `new_variable_name = original_name`.
```
wb_ineq = rename(wb_ineq, code = `Country Code`)
```
To rename multiple columns the variable names are simply separated by commas:
`rename(x, x = X1, y = X2)` would rename variables `X1` and `X2` in the dataset `x`.
In base R the equivalent would be `names(x)[1:2] = c("x", "y")` or, for a two\-column dataset, `setNames(x, c("x", "y"))`, assuming we were dealing with the first and second columns.
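To make the multi\-column case concrete, a minimal self\-contained sketch (the toy dataset and names are for illustration only):

```
x = tibble(X1 = 1:3, X2 = c("a", "b", "c")) # toy data frame
rename(x, x = X1, y = X2) # rename both columns in one call
```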
### 6\.4\.2 Changing column classes
The *class* of R objects is critical to performance.
If a class is incorrectly specified (e.g. if numbers are treated as factors or characters) this will lead to incorrect results. The class of all columns in a data frame can be queried using the function `str()` (short for the **str**ucture of an object).[14](#fn14)
Visual inspection of the data (e.g. via `View(wb_ineq)`) clearly shows that all columns except 1 to 4 (`Country`, `Country Code`, `Year` and `Year Code`) should be numeric.
The class of numeric variables can be altered one\-by\-one using `mutate()` as follows (which would set the `gini` column to be of class `numeric` if it weren’t already):[15](#fn15)
```
wb_ineq = mutate(wb_ineq, gini = as.numeric(gini))
```
However, the purpose of programming languages is to *automate* tasks and reduce typing.
The following code chunk ensures that the columns identified by `cols_to_change` are converted to `numeric` in a single call (`vars()` is a helper function for selecting variables; it also works with **dplyr** helpers such as `contains()`, which selects all columns containing a given text string):
```
cols_to_change = 5:9 # column ids to change
wb_ineq = mutate_at(wb_ineq, vars(cols_to_change), as.numeric)
#> Note: Using an external vector in selections is ambiguous.
#> ℹ Use `all_of(cols_to_change)` instead of `cols_to_change` to silence this message.
#> ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
#> This message is displayed once per session.
```
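Note that in **dplyr** 1.0.0 and above, `mutate_at()` is superseded by `across()`. A sketch of the equivalent call, using `all_of()` to avoid the ambiguity message shown above:

```
# Equivalent using across() (dplyr >= 1.0.0)
wb_ineq = mutate(wb_ineq, across(all_of(cols_to_change), as.numeric))
```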
Another way to achieve the same result is to use `data.matrix()`, which converts the data frame to a numeric `matrix`:
```
cols_to_change = 5:9 # column ids to change
wb_ineq[cols_to_change] = data.matrix(wb_ineq[cols_to_change])
```
Each method (base R and **dplyr**) has its merits.
For readers new to R who plan to use other **tidyverse** packages we would provide a slight steer towards `mutate_at()` for its flexibility and expressive syntax.
Other methods for achieving the same result include the use of loops via `apply()` and `for()`.
These are shown in the chapter’s [source code](https://github.com/csgillespie/efficientR).
### 6\.4\.3 Filtering rows
**dplyr** offers an alternative way of filtering data, using `filter()`.
```
# Base R: wb_ineq[wb_ineq$Country == "Australia",]
aus2 = filter(wb_ineq, Country == "Australia")
```
`filter()` is slightly more flexible than `[`: `filter(wb_ineq, code == "AUS", Year == 1974)` works as well as `filter(wb_ineq, code == "AUS" & Year == 1974)`, and takes any number of conditions (see `?filter`). `filter()` is slightly faster than base R.[16](#fn16) By avoiding the `$` symbol, **dplyr** makes subsetting code concise and consistent with other **dplyr** functions. The first argument is a data frame and subsequent raw variable names can be treated as vector objects: a defining feature of **dplyr**. In the next section we’ll learn how this syntax can be used alongside the `%>%` ‘pipe’ command to write clear data manipulation commands.
There are **dplyr** equivalents of many base R functions but these usually work slightly differently. The **dplyr** equivalent of `aggregate`, for example, is to use the grouping function `group_by` in combination with the general purpose function `summarise` (not to be confused with `summary` in base R), as we shall see in Section [6\.4\.5](data-carpentry.html#data-aggregation).
### 6\.4\.4 Chaining operations
Another interesting feature of **dplyr** is its ability to chain operations together. This overcomes one of the aesthetic issues with R code: you can end up with very long commands with many functions nested inside each other to answer relatively simple questions. Combined with the `group_by()` function, pipes can help condense thousands of lines of data into something human readable. Here’s how you could use chaining to summarise average Gini indices per decade, for example:
```
wb_ineq %>%
select(Year, gini) %>%
mutate(decade = floor(as.numeric(Year) / 10) * 10) %>%
group_by(decade) %>%
summarise(mean(gini, na.rm = TRUE))
#> # A tibble: 6 x 2
#> decade `mean(gini, na.rm = TRUE)`
#> <dbl> <dbl>
#> 1 1970 40.1
#> 2 1980 37.8
#> 3 1990 42.0
#> 4 2000 40.5
#> # … with 2 more rows
```
Often the best way to learn is to try and break something, so try running the above commands with different **dplyr** verbs.
By way of explanation, this is what happened:
1. Only the columns `Year` and `gini` were selected, using `select()`.
2. A new variable, `decade` was created, to show only the decade figures (e.g. 1989 becomes 1980\).
3. This new variable was used to group rows in the data frame with the same decade.
4. The mean value per decade was calculated, illustrating how average income inequality was greatest in the 1990s but has since decreased slightly.
Let’s ask another question to see how the **dplyr** chaining workflow can be used to answer questions interactively: What are the 5 most unequal years for countries containing the letter g? Here’s how chains can help organise the analysis needed to answer this question step\-by\-step:
```
wb_ineq %>%
filter(grepl("g", Country)) %>%
group_by(Year) %>%
summarise(gini = mean(gini, na.rm = TRUE)) %>%
arrange(desc(gini)) %>%
top_n(n = 5)
#> Selecting by gini
#> # A tibble: 5 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1980 46.8
#> 2 1993 46.0
#> 3 2013 44.6
#> 4 1981 43.6
#> # … with 1 more row
```
The above chain consists of six lines; after the dataset is piped in, each line corresponds to a new **dplyr** function:
1. Filter to keep only the countries we’re interested in (any selection criteria could be used in place of `grepl("g", Country)`).
2. Group the output by year.
3. Summarise, for each year, the mean Gini index.
4. Arrange the results by average Gini index.
5. Select only the top 5 most unequal years.
To see why this method is preferable to the nested function approach, take a look at the latter. Even after indenting properly it looks terrible and is almost impossible to understand!
```
top_n(
arrange(
summarise(
group_by(
filter(wb_ineq, grepl("g", Country)),
Year),
gini = mean(gini, na.rm = TRUE)),
desc(gini)),
n = 5)
```
This section has provided only a taster of what is possible with **dplyr** and why it makes sense from code\-writing and computational efficiency perspectives. For a more detailed account of data processing with R using this approach we recommend *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)).
#### Exercises
1. Try running each of the chaining examples above line\-by\-line, so the first two entries for the first example would look like this:
```
wb_ineq
#> # A tibble: 6,925 x 9
#> Country code Year `Year Code` top10 bot10 gini b40_cons gdp_percap
#> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Afghanistan AFG 1974 YR1974 NA NA NA NA NA
#> 2 Afghanistan AFG 1975 YR1975 NA NA NA NA NA
#> 3 Afghanistan AFG 1976 YR1976 NA NA NA NA NA
#> 4 Afghanistan AFG 1977 YR1977 NA NA NA NA NA
#> # … with 6,921 more rows
```
followed by:
```
wb_ineq %>%
select(Year, gini)
#> # A tibble: 6,925 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1974 NA
#> 2 1975 NA
#> 3 1976 NA
#> 4 1977 NA
#> # … with 6,921 more rows
```
Explain in your own words what changes each time.
2. Use chained **dplyr** functions to answer the following question: In which year did countries without an ‘a’ in their name have the lowest level of inequality?
### 6\.4\.5 Data aggregation
Data aggregation involves creating summaries of data based on a grouping variable, in a process that has been referred to as ‘split\-apply\-combine’. The end result usually has the same number of rows as there are groups. Because aggregation is a way of condensing datasets it can be a very useful technique for making sense of large datasets. The following code finds the number of unique countries (country being the grouping variable) from the `ghg_ems` dataset stored in the **efficient** package.
```
# data available online, from github.com/csgillespie/efficient_pkg
data(ghg_ems, package = "efficient")
names(ghg_ems)
#> [1] "Country" "Year" "Electricity" "Manufacturing"
#> [5] "Transportation" "Other" "Fugitive"
nrow(ghg_ems)
#> [1] 7896
length(unique(ghg_ems$Country))
#> [1] 188
```
Note that while there are almost \\(8000\\) rows, there are fewer than 200 countries: factors would have been a more space efficient way of storing the countries data.
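This claim can be checked with `object.size()`, which reports the memory used by an object:

```
object.size(ghg_ems$Country)          # character representation
object.size(factor(ghg_ems$Country))  # factor representation, typically smaller
```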
To aggregate the dataset using **dplyr** package, you divide the task in two: to *group* the dataset first and then to summarise, as illustrated below.[17](#fn17)
```
library("dplyr")
group_by(ghg_ems, Country) %>%
summarise(mean_eco2 = mean(Electricity, na.rm = TRUE))
#> # A tibble: 188 x 2
#> Country mean_eco2
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 0.641
#> 3 Algeria 23.0
#> 4 Angola 0.791
#> # … with 184 more rows
```
The example above relates to a wider programming issue: how much work should one function do? The work could have been done with a single `aggregate()` call. However, the [Unix philosophy](http://www.catb.org/esr/writings/taoup/html/ch01s06.html) states that programs should “do one thing well”, which is how **dplyr**’s functions were designed. Shorter functions are easier to understand and debug. But having too many functions can also make your call stack confusing.
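For comparison, a sketch of the single `aggregate()` call alluded to above. Note that the results differ slightly: `aggregate()`’s default `na.action` drops rows with missing values, so countries with no data are omitted rather than reported as `NaN`:

```
aggregate(Electricity ~ Country, data = ghg_ems, FUN = mean)
```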
To reinforce the point, this operation is also performed below on the `wb_ineq` dataset:
```
countries = group_by(wb_ineq, Country)
summarise(countries, mean_gini = mean(gini, na.rm = TRUE))
#> # A tibble: 176 x 2
#> Country mean_gini
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 30.4
#> 3 Algeria 37.8
#> 4 Angola 50.6
#> # … with 172 more rows
```
Note that `summarise` is highly versatile, and can be used to return a customised range of summary statistics:
```
summarise(countries,
# number of rows per country
obs = n(),
med_t10 = median(top10, na.rm = TRUE),
# standard deviation
sdev = sd(gini, na.rm = TRUE),
# number with gini > 30
n30 = sum(gini > 30, na.rm = TRUE),
sdn30 = sd(gini[gini > 30], na.rm = TRUE),
# range
dif = max(gini, na.rm = TRUE) - min(gini, na.rm = TRUE)
)
#> Warning in max(gini, na.rm = TRUE): no non-missing arguments to max; returning -
#> Inf
#> Warning in min(gini, na.rm = TRUE): no non-missing arguments to min; returning
#> Inf
#> ... (the same pair of warnings is repeated for each country with no non-missing gini values) ...
#> # A tibble: 176 x 7
#> Country obs med_t10 sdev n30 sdn30 dif
#> <chr> <int> <dbl> <dbl> <int> <dbl> <dbl>
#> 1 Afghanistan 40 NA NA 0 NA -Inf
#> 2 Albania 40 24.4 1.25 3 0.364 2.78
#> 3 Algeria 40 29.8 3.44 2 3.44 4.86
#> 4 Angola 40 38.6 11.3 2 11.3 16.0
#> # … with 172 more rows
```
To showcase the power of `summarise` used on a `grouped_df`, the above code reports a wide range of customised summary statistics *per country*:
* the number of rows in each country group
* the median proportion of income earned by the top 10%
* the standard deviation of Gini indices
* the number of years in which the Gini index was greater than 30
* the standard deviation of Gini index values over 30
* the range of Gini index values reported for each country.
#### Exercises
1. Refer back to the greenhouse gas emissions example at the outset of section [6\.4](data-carpentry.html#dplyr), in which we found the top 3 countries in terms of emissions growth in the transport sector. a) Explain in words what is going on in each line. b) Try to find the top 3 countries in terms of emissions in 2012 \- how is the list different?
2. Explore **dplyr**’s documentation, starting with the introductory vignette, accessed by entering [`vignette("introduction")`](https://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html).
3. Test additional **dplyr** ‘verbs’ on the `wb_ineq` dataset. (More vignette names can be discovered by typing `vignette(package = "dplyr")`.)
### 6\.4\.6 Non standard evaluation
The final thing to say about **dplyr** does not relate to the data but to the syntax of its functions. Note that many of the arguments in the code examples in this section are provided as raw names: bare variable names, not surrounded by quote marks (e.g. `Country` rather than `"Country"`). This is called non\-standard evaluation (NSE) (see `vignette("nse")`). NSE was used deliberately, with the aim of making the functions more efficient for interactive use. NSE reduces typing and allows autocompletion in RStudio.
This is fine when using R interactively. But when you’d like to use R non\-interactively, code is generally more robust using standard evaluation: it minimises the chance of creating obscure scope\-related bugs. Using standard evaluation also avoids having to declare global variables if you include the code in a package. To overcome this, the concept of ‘tidy evaluation’ was developed and implemented in the package **rlang** (part of the tidyverse) to provide functions to control when symbols are evaluated and when they are treated as text strings. Without going into detail, the code below demonstrates how tidy evaluation works (see the [`tidy-evaluation`](https://cran.r-project.org/web/packages/rlang/vignettes/tidy-evaluation.html) vignette and [`Programming-with-dplyr`](https://cran.r-project.org/web/packages/dplyr/vignettes/programming.html) for further information):
```
library(rlang)
# 1: Default NSE function
group_by(cars, speed = cut(speed, c(0, 10, 100))) %>%
summarise(mean_dist = mean(dist))
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
# 2: Evaluation from character string
group_by(cars, speed = !!parse_quosure("cut(speed, c(0, 10, 100))")) %>%
summarise(mean_dist = !!parse_quosure("mean(dist)"))
#> Warning: `parse_quosure()` is deprecated as of rlang 0.2.0.
#> Please use `parse_quo()` instead.
#> This warning is displayed once per session.
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
# 3: Using !! to evaluate 'quosures' when appropriate
q1 = quo(cut(speed, c(0, 10, 100)))
q2 = quo(mean(dist))
group_by(cars, speed = !!q1) %>%
summarise(mean_dist = !!q2)
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
```
6\.5 Combining datasets
-----------------------
The usefulness of a dataset can sometimes be greatly enhanced by combining it with other data. If we could merge the global `ghg_ems` dataset with geographic data, for example, we could visualise the spatial distribution of climate pollution. For the purposes of this section we join `ghg_ems` to the `world` data provided by **ggmap** to illustrate the concepts and methods of data *joining* (also referred to as merging).
```
library("ggmap")
world = map_data("world")
names(world)
#> [1] "long" "lat" "group" "order" "region" "subregion"
```
Visually compare this new `world` dataset with `ghg_ems` (e.g. via `View(world); View(ghg_ems)`). It is clear that the column `region` in the former contains the same information as `Country` in the latter. This will be the *joining variable*; renaming it in `world` will make the join more efficient.
```
world = rename(world, Country = region)
ghg_ems$All = rowSums(ghg_ems[3:7])
```
Ensure that both joining variables have the same class (combining `character` and `factor` columns can cause havoc).
How large is the overlap between `ghg_ems$Country` and `world$Country`? We can find out using the `%in%` operator, which tests, element by element, whether the values of one vector are present in another. Specifically, we will find out how many *unique* country names from `ghg_ems` are present in the `world` dataset:
```
unique_countries_ghg_ems = unique(ghg_ems$Country)
unique_countries_world = unique(world$Country)
matched = unique_countries_ghg_ems %in% unique_countries_world
table(matched)
#> matched
#> FALSE TRUE
#> 20 168
```
This comparison exercise has been fruitful: most of the countries in the `ghg_ems` dataset exist in the `world` dataset. But what about the 20 country names that do not match? We can identify these as follows:
```
(unmatched_countries_ghg_ems = unique_countries_ghg_ems[!matched])
#> [1] "Antigua & Barbuda" "Bahamas, The"
#> [3] "Bosnia & Herzegovina" "Congo, Dem. Rep."
#> [5] "Congo, Rep." "Cote d'Ivoire"
#> [7] "European Union (15)" "European Union (28)"
#> [9] "Gambia, The" "Korea, Dem. Rep. (North)"
#> [11] "Korea, Rep. (South)" "Macedonia, FYR"
#> [13] "Russian Federation" "Saint Kitts & Nevis"
#> [15] "Saint Vincent & Grenadines" "Sao Tome & Principe"
#> [17] "Trinidad & Tobago" "United Kingdom"
#> [19] "United States" "World"
```
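As an aside, the same set of non\-matching names can be obtained more concisely with `setdiff()`:

```
# Elements of the first vector that are absent from the second
setdiff(unique_countries_ghg_ems, unique_countries_world)
```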
It is clear from this list that some of the non\-matches (e.g. the European Union) are not countries at all. However, others, such as ‘Gambia, The’ and the United States, clearly should have matches. *Fuzzy matching* can help find which countries *do* match, as illustrated with the first non\-matching country below:
```
(unmatched_country = unmatched_countries_ghg_ems[1])
#> [1] "Antigua & Barbuda"
unmatched_world_selection = agrep(pattern = unmatched_country,
unique_countries_world,
max.distance = 10)
unmatched_world_countries = unique_countries_world[unmatched_world_selection]
```
What just happened? We verified that the first non\-matching country in the `ghg_ems` dataset was not in the `world` country names. So we used the more powerful `agrep()` to search for fuzzy matches (with the `max.distance` argument set to `10`). The results show that the country `Antigua & Barbuda` from the `ghg_ems` data matches *two* countries in the `world` dataset. We can update the names in the dataset we are joining to accordingly:
```
world$Country[world$Country %in% unmatched_world_countries] =
unmatched_countries_ghg_ems[1]
```
The above code reduces the number of country names in the `world` dataset by replacing *both* “Antigua” and “Barbuda” with “Antigua \& Barbuda”. This would not work the other way around: how would one know whether to change “Antigua \& Barbuda” to “Antigua” or to “Barbuda”?
Thus fuzzy matching is still a laborious process that must be complemented by human judgement. It takes a human to know for sure that `United States` is represented as `USA` in the `world` dataset, without risking false matches via `agrep()`.
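A sketch of such a manual, judgement\-based fix for this case:

```
# Manual fix informed by human judgement, not fuzzy matching
world$Country[world$Country == "USA"] = "United States"
```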
6\.6 Working with databases
---------------------------
Instead of loading all the data into RAM, as R does, databases query data from the hard\-disk. This can allow a subset of a very large dataset to be defined and read into R quickly, without having to load it first.
R can connect to databases in a number of ways, which are briefly touched on below. Databases are a large subject area undergoing rapid evolution. Rather than aiming at comprehensive coverage, we will provide pointers to developments that enable efficient access to a wide range of database types. An up\-to\-date history of R’s interfaces to databases can be found in the README of the [**DBI** package](https://cran.r-project.org/web/packages/DBI/readme/README.html), which provides a common interface and set of classes for driver packages (such as **RSQLite**).
**RODBC** is a veteran package for querying external databases from within R, using the Open Database Connectivity (ODBC) API. The functionality of **RODBC** is described in the package’s vignette (see `vignette("RODBC")`) and nowadays its main use is to provide an R interface to
SQL Server databases which lack a **DBI** interface.
The **DBI** package is a unified framework for accessing databases that allows other drivers to be added as modular packages. Thus packages that build on **DBI**, such as **RMySQL**, **RPostgreSQL** and **RSQLite**, can be seen partly as replacements for **RODBC** (see `vignette("backend")` for more on how **DBI** drivers work). Because the **DBI** syntax applies to a wide range of database types we use it here with a worked example.
Imagine you have access to a database that contains the `ghg_ems` data set.
```
# Connect to a database driver
library("RSQLite")
con = dbConnect(SQLite(), dbname = ghg_db) # Also username & password arguments
dbListTables(con)
rs = dbSendQuery(con, "SELECT * FROM `ghg_ems` WHERE (`Country` != 'World')")
df_head = dbFetch(rs, n = 6) # extract first 6 rows
```
The above code chunk shows how the function `dbConnect()` connects to an external database, in this case a SQLite database. For database types that require them, the `username` and `password` arguments are used to establish the connection. Next we query which tables are available with `dbListTables()`, query the database (without yet extracting the results to R) with `dbSendQuery()` and, finally, load the results into R with `dbFetch()`.
Be sure never to release your password by entering it directly into the command. Instead, we recommend saving sensitive information such as database passwords and API keys in `.Renviron`, described in Chapter 2\. Assuming you had saved your password as the environment variable `PSWRD`, you could enter `pwd = Sys.getenv("PSWRD")` to minimise the risk of exposing your password through accidentally releasing the code or your session history.
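A minimal sketch of this pattern, assuming a hypothetical PostgreSQL database (the database name and user below are placeholders, and `PSWRD` must be defined in your `.Renviron`):

```
library("RPostgreSQL")
pwd = Sys.getenv("PSWRD") # password read from .Renviron, not the script
con2 = dbConnect(PostgreSQL(), dbname = "ghg", user = "robin", password = pwd)
```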
Recently there has been a shift to the ‘noSQL’ approach for storing large datasets.
This is illustrated by the emergence and uptake of software such as MongoDB and Apache Cassandra, which have R interfaces via the packages [mongolite](https://cran.r-project.org/web/packages/mongolite/index.html) and [RJDBC](https://cran.r-project.org/web/packages/RJDBC/index.html) (the latter can connect to Apache Cassandra data stores and to any source compliant with the Java Database Connectivity (JDBC) API).
MonetDB is a recent alternative to relational and noSQL approaches which offers substantial efficiency advantages for handling large datasets (Kersten et al. [2011](#ref-kersten2011researcher)).
A tutorial on the [MonetDB website](https://www.monetdb.org/Documentation/UserGuide/MonetDB-R) provides an excellent introduction to handling databases from within R.
There are many wider considerations in relation to databases that we will not cover here: who will manage and maintain the database? How will it be backed up locally (local copies should be stored to reduce reliance on the network)? What is the appropriate database for your project? These issues can have major implications for efficiency, especially on large, data\-intensive projects. However, we will not cover them here because it is a fast\-moving field. Instead, we direct the interested reader towards further resources on the subject, including:
* The website for **[sparklyr](http://spark.rstudio.com/)**, a recent package for efficiently interfacing with the Apache Spark stack.
* [db\-engines.com/en/](http://db-engines.com/en/): a website comparing the relative merits of different databases.
* The `databases` vignette from the **dplyr** package.
* [Getting started with MongoDB in R](https://cran.r-project.org/web/packages/mongolite/vignettes/intro.html), an introductory vignette on non\-relational databases and map reduce from the **mongolite** package.
### 6\.6\.1 Databases and **dplyr**
To access a database in R via **dplyr**, one must use one of the `src_` functions to create a source. Continuing with the SQLite example above, one would create a `tbl` object, that can be queried by **dplyr** as follows:
```
library("dplyr")
ghg_db = src_sqlite(ghg_db)
ghg_tbl = tbl(ghg_db, "ghg_ems")
```
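Note that `src_sqlite()` is deprecated in **dplyr** 1.0.0 and above; the now\-recommended approach is to pass a **DBI** connection (such as `con`, created in the previous section) directly to `tbl()`:

```
# Modern equivalent: no src_ function needed
ghg_tbl = tbl(con, "ghg_ems")
```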
The `ghg_tbl` object can then be queried in a similar way to a standard data frame. For example, suppose we wish to
filter by `Country`. Then we use the `filter` function as before:
```
rm_world = ghg_tbl %>%
filter(Country != "World")
```
In the above code, **dplyr** has actually generated the necessary SQL command, which can be examined using `explain(rm_world)`.
When working with databases, **dplyr** uses lazy evaluation: the data is only fetched at the last moment when it’s needed. The SQL command associated with `rm_world` hasn’t yet been executed; this is why
`tail(rm_world)` doesn’t work. By using lazy evaluation, **dplyr** is more efficient at handling large data structures since it avoids unnecessary copying.
When you want your SQL command to be executed, use `collect(rm_world)`.
The final stage when working with databases in R is to disconnect, e.g.:
```
dbDisconnect(conn = con)
```
#### Exercises
Follow the worked example below to create and query a database on land prices in the UK using **dplyr** as a front end to an SQLite database.
The first stage is to read\-in the data:
```
# See help("land_df", package="efficient") for details
data(land_df, package = "efficient")
```
The next stage is to create an SQLite database to hold the data:
```
# install.packages("RSQLite") # Requires RSQLite package
my_db = src_sqlite("land.sqlite3", create = TRUE)
land_sqlite = copy_to(my_db, land_df, indexes = list("postcode", "price"))
```
What class is the new object `land_sqlite`?
Why did we use the `indexes` argument?
From the above code we can see that we have created a `tbl`. This can be accessed using **dplyr** in the same way as any data frame can. Now we can query the data. You can use SQL code to query the database directly or use standard **dplyr** verbs on the table.
```
# Method 1: using sql
tbl(my_db, sql('SELECT "price", "postcode", "old/new" FROM land_df'))
#> Source: query [?? x 3]
#> Database: sqlite 3.8.6 [land.sqlite3]
#>
#> price postcode `old/new`
#> <int> <chr> <chr>
#> 1 84000 CW9 5EU N
#> 2 123500 TR13 8JH N
#> 3 217950 PL33 9DL N
#> 4 147000 EX39 5XT N
#> # ... with more rows
```
How would you perform the same query using `select()`? Try it to see if you get the same result (hint: use backticks for the `old/new` variable name).
6\.7 Data processing with data.table
------------------------------------
**data.table** is a mature package for fast data processing that presents an alternative to **dplyr**. There is some controversy about which is more appropriate for different
tasks.[18](#fn18)
Which is more efficient to some extent depends on personal preferences and what you are used to.
Both are powerful and efficient packages that take time to learn, so it is best to learn one and stick with it, rather than have the duality of using two for similar purposes. There are situations in which one works better than another: **dplyr** provides a more consistent and flexible interface (e.g. with its interface to databases, demonstrated in the previous section) so for most purposes we recommend learning **dplyr** first if you are new to both packages. **dplyr** can also be used to work with the `data.table` class used by the **data.table** package so you can get the best of both worlds.
**data.table** is faster than **dplyr** for some operations and offers some functionality unavailable in other packages; moreover, it has a mature and advanced user community. **data.table** supports [rolling joins](https://www.r-bloggers.com/understanding-data-table-rolling-joins/), which allow rows in one table to be selected based on proximity between shared variables (typically time), and [non\-equi joins](http://www.w3resource.com/sql/joins/perform-a-non-equi-join.php), where join criteria can be inequalities rather than equalities.
This section provides a few examples to illustrate how **data.table** differs and (at the risk of inflaming the debate further) some benchmarks to explore which is more efficient. As emphasised throughout the book, efficient code writing is often more important than efficient execution on many everyday tasks so to some extent it’s a matter of preference.
The foundational object class of **data.table** is the `data.table`. Like **dplyr**’s `tbl_df`, **data.table**’s `data.table` objects behave in the same way as the base `data.frame` class. However, the **data.table** paradigm has some unique features that make it highly computationally efficient for many common tasks in data analysis. Building on the subsetting methods using `[` and `filter()` mentioned previously, we’ll see **data.table**’s unique approach to subsetting. Like base R, **data.table** uses square brackets but (unlike base R, and like **dplyr**) uses non\-standard evaluation, so you need not refer to the object name inside the brackets:
```
library("data.table")
# data(wb_ineq) # from the efficient package
wb_ineq_dt = data.table(wb_ineq) # convert to data.table class
aus3a = wb_ineq_dt[Country == "Australia"]
```
Note that the square brackets do not need a comma to refer to rows with `data.table` objects: in base R you would write `wb_ineq[wb_ineq$Country == "Australia", ]`.
To boost performance, one can set ‘keys’, analogous to ‘primary keys’ in databases. These are ‘[supercharged rownames](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html)’ which order the table based on one or more variables. This allows a *binary search* algorithm to subset the rows of interest, which is much, much faster than the *vector scan* approach used in base R (see [`vignette("datatable-keys-fast-subset")`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html)). **data.table** uses the key values for subsetting by default so the variable does not need to be mentioned again. Instead, using keys, the search criteria are provided as a list (invoked below with the concise `.()` syntax, which is synonymous with `list()`).
```
setkey(wb_ineq_dt, Country)
aus3b = wb_ineq_dt[.("Australia")]
```
The result is the same, so why add the extra stage of setting the key? The reason is that this one\-off sorting operation can lead to substantial performance gains in situations where repeatedly subsetting rows on large datasets consumes a large proportion of computational time in your workflow. This is illustrated in Figure [6\.1](data-carpentry.html#fig:6-2), which compares 4 methods of subsetting incrementally larger versions of the `wb_ineq` dataset.
Figure 6\.1: Benchmark illustrating the performance gains to be expected for different dataset sizes.
Figure [6\.1](data-carpentry.html#fig:6-2) demonstrates that **data.table** is *much faster* than base R and **dplyr** at subsetting. As with using external packages to read in data (see Section [5\.3](input-output.html#fread)), the relative benefits of **data.table** improve with dataset size, approaching a \~70 fold improvement on base R and a \~50 fold improvement on **dplyr** as the dataset size reaches half a Gigabyte. Interestingly, even the ‘non key’ implementation of **data.table**’s subset method is faster than the alternatives: this is because **data.table** creates a key internally by default before subsetting. The process of creating the key accounts for the \~10 fold speed\-up in cases where the key has been pre\-generated.
This section has introduced **data.table** as a complementary approach to base and **dplyr** methods for data processing. It offers performance gains due to its implementation in C and use of *keys* for subsetting tables. **data.table** offers much more, however, including: highly efficient data reshaping; dataset merging (also known as joining, as with `left_join` in **dplyr**); and grouping. For further information on **data.table**, we recommend reading the package’s [`datatable-intro`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-intro.html), [`datatable-reshape`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reshape.html) and [`datatable-reference-semantics`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reference-semantics.html) vignettes.
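As a final taster of the `DT[i, j, by]` syntax, the grouped aggregation performed with `group_by()` and `summarise()` in Section 6\.4\.5 can be written in **data.table** as follows:

```
# Grouped aggregation: j is computed within each group defined by 'by'
wb_ineq_dt[, .(mean_gini = mean(gini, na.rm = TRUE)), by = Country]
```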
Tables [6\.1](data-carpentry.html#tab:tpew) and [6\.2](data-carpentry.html#tab:tpewt) show a subset of the ‘wide’ `pew` and ‘long’ (tidy) `pewt` datasets, respectively. They have different dimensions, but they contain precisely the same information. Column names in the ‘wide’ form in Table [6\.1](data-carpentry.html#tab:tpew) became a new variable in the ‘long’ form in Table [6\.2](data-carpentry.html#tab:tpewt). According to the concept of ‘tidy data’, the long form is correct. Note that ‘correct’ here is used in the context of data analysis and graphical visualisation. Because R is a vector\-based language, tidy data also has efficiency advantages: it’s often faster to operate on few long columns than many short ones. Furthermore the powerful and efficient packages **dplyr** and **ggplot2** were designed around tidy data. Wide data is common, however, can be space efficient and is common for presentation in summary tables, so it’s useful to be able to transfer between wide (or otherwise ‘untidy’) and tidy formats.
Tidy data has the following characteristics (H. Wickham [2014](#ref-Wickham_2014)[b](#ref-Wickham_2014)):
1. Each variable forms a column.
2. Each observation forms a row.
3. Each type of observational unit forms a table.
Because there is only one observational unit in the example (religions), it can be described in a single table.
Large and complex datasets are usually represented by multiple tables, with unique identifiers or ‘keys’ to join them together (Codd [1979](#ref-Codd1979)).
Two common operations facilitated by **tidyr** are *gathering* and *splitting* columns.
### 6\.3\.1 Make wide tables long with `pivot_longer()`
Pivoting Longer means making ‘wide’ tables ‘long’, by converting column names to a new variable. This is done with the function
`pivot_longer()` (the inverse of which is `pivot_wider()`). The process is illustrated in Tables [6\.1](data-carpentry.html#tab:tpew) and [6\.2](data-carpentry.html#tab:tpewt) respectively.
The code that performs this operation is provided in the code block below.
This converts a table with 18 rows and 10 columns into a tidy dataset with 162 rows and 3 columns (compare the output with the output of `pew`, shown above):
```
dim(pew)
#> [1] 18 10
pewt = pivot_longer(data = pew, -religion, names_to = "income", values_to = "count")
dim(pewt)
#> [1] 162 3
pewt[c(1:3, 50), ]
#> # A tibble: 4 x 3
#> religion income count
#> <chr> <chr> <int>
#> 1 Agnostic <$10k 27
#> 2 Agnostic $10--20k 34
#> 3 Agnostic $20--30k 60
#> 4 Evangelical Protestant Churches $40--50k 881
```
The above code demonstrates the key arguments used in this call to `pivot_longer()`:
1. `data`, a data frame in which column names will become row values.
2. `names_to`, the name of the categorical variable into which the column names of the original dataset are converted.
3. `values_to`, the name of the column that will store the cell values.
As with other functions in the ‘tidyverse’, all arguments are given using bare names, rather than character strings. Arguments 2 and 3 can be specified by the user, and have no relation to the existing data. The additional argument `-religion` excludes the religion variable from the pivoting, ensuring that its values form the first column of the output rather than being stacked with the counts (stacking all 10 columns would mean 180 column/value pairs, and would in any case fail here because character and integer values cannot be combined). If `names_to` and `values_to` are not supplied, the defaults `name` and `value` are used:
```
pivot_longer(pew, -religion)
#> # A tibble: 162 x 3
#> religion name value
#> <chr> <chr> <int>
#> 1 Agnostic <$10k 27
#> 2 Agnostic $10--20k 34
#> 3 Agnostic $20--30k 60
#> 4 Agnostic $30--40k 81
#> # … with 158 more rows
```
Table 6\.1: First 3 rows of the aggregated ‘pew’ dataset from Wickham (2014a) in an ‘untidy’ form.
| religion | \<$10k | $10–20k | $20–30k |
| --- | --- | --- | --- |
| Agnostic | 27 | 34 | 60 |
| Atheist | 12 | 27 | 37 |
| Buddhist | 27 | 21 | 30 |
Table 6\.2: Long form of the Pew dataset represented above showing the minimum values for annual incomes (includes part time work).
| religion | name | value |
| --- | --- | --- |
| Agnostic | \<$10k | 27 |
| Agnostic | $10–20k | 34 |
| Agnostic | $20–30k | 60 |
| Atheist | \<$10k | 12 |
| Atheist | $10–20k | 27 |
| Atheist | $20–30k | 37 |
| Buddhist | \<$10k | 27 |
| Buddhist | $10–20k | 21 |
| Buddhist | $20–30k | 30 |
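Because `pivot_wider()` is the inverse of `pivot_longer()`, the wide form can be recovered from the long one; a minimal sketch (the object name `pew2` is ours):

```
# reverse the operation: income categories become column names again
pew2 = pivot_wider(pewt, names_from = income, values_from = count)
dim(pew2) # back to the original 18 x 10 shape
#> [1] 18 10
```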
### 6\.3\.2 Split joint variables with `separate()`
Splitting means taking a variable that is really two variables combined and creating two separate columns from it. A classic example is age\-sex variables (e.g. `m0-10` and `f0-10` to represent males and females in the 0 to 10 age band). Splitting such variables can be done with the `separate()` function, as illustrated in Tables [6\.3](data-carpentry.html#tab:to-separate) and [6\.4](data-carpentry.html#tab:separated) and in the code chunk below. See `?separate` for more information on this function.
```
agesex = c("m0-10", "f0-10") # create compound variable
n = c(3, 5) # create a value for each observation
agesex_df = tibble(agesex, n) # create a data frame
separate(agesex_df, agesex, c("sex", "age"), sep = 1)
#> # A tibble: 2 x 3
#> sex age n
#> <chr> <chr> <dbl>
#> 1 m 0-10 3
#> 2 f 0-10 5
```
Table 6\.3: Joined age and sex variables in one column
| agesex | n |
| --- | --- |
| m0\-10 | 3 |
| f0\-10 | 5 |
Table 6\.4: Age and sex variables separated by the function `separate`.
| sex | age | n |
| --- | --- | --- |
| m | 0\-10 | 3 |
| f | 0\-10 | 5 |
### 6\.3\.3 Other tidyr functions
There are other tidying operations that **tidyr** can perform, as described in the package’s vignette (`vignette("tidy-data")`).
The wider issue of data manipulation is a large topic with major potential implications for efficiency (Spector [2008](#ref-Spector_2008)), and this section covers only some of the key operations. More important is understanding the principles behind converting messy data into standard output forms.
These same principles can also be applied to the representation of model results. The **broom** package provides a standard output format for model results, easing interpretation (see [the broom vignette](https://cran.r-project.org/web/packages/broom/vignettes/broom.html)). The function `broom::tidy()` can be applied to a wide range of model objects and returns the model’s output as a standardized data frame.
Usually it is more efficient to use the non\-standard evaluation versions of variable names, as these can be auto\-completed by RStudio. In some cases you may want to use standard evaluation and refer to variable names using quote marks. Historically this was done by adding the suffix `_` to **dplyr** and **tidyr** function names: the standard evaluation version of `separate(agesex_df, agesex, c("sex", "age"), 1)` was `separate_(agesex_df, "agesex", c("sex", "age"), 1)`. Note that these underscore\-suffixed functions are now deprecated in favour of the tidy evaluation framework described in Section 6\.4\.6.
### 6\.3\.4 Regular expressions
Regular expressions (commonly known as regex) are a language for describing and manipulating text strings. There are books on the subject, and several good tutorials on regex in R (e.g. Sanchez [2013](#ref-sanchez_handling_2013)), so we’ll just scratch the surface of the topic and provide a taster of what is possible. Regex is a deep topic. However, knowing the basics can save a huge amount of time from a data tidying perspective, by automating the cleaning of messy strings.
In this section we teach both **stringr** and base R ways of doing pattern matching. The former provides easy\-to\-remember function names and consistency. The latter is useful to know because you’ll find lots of base R regex code in other people’s code, as **stringr** is relatively new and not installed by default. The foundational regex operation is to detect whether or not a particular text string exists in an element, which is done with `grepl()` and `str_detect()` in base R and **stringr** respectively:
```
library("stringr")
x = c("Hi I'm Robin.", "DoB 1985")
grepl(pattern = "9", x = x)
#> [1] FALSE TRUE
str_detect(string = x, pattern = "9")
#> [1] FALSE TRUE
```
Note: **stringr** does not include a direct replacement for `grep()`. You can use `which(str_detect())` instead.
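For example, to obtain the *indices* of matching elements, which is what `grep()` returns, the two approaches look like this (continuing with the `x` vector defined above):

```
grep(pattern = "9", x = x)
#> [1] 2
which(str_detect(string = x, pattern = "9"))
#> [1] 2
```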
Notice that `str_detect()` begins with `str_`: all **stringr** functions do. This can be efficient because if you want to do some regex work, you just need to type `str_` and then hit Tab to see a list of all the options. The various base R regex function names, by contrast, are harder to remember, including `regmatches()`, `strsplit()` and `gsub()`. The **stringr** equivalents have more intuitive names that relate to the intention of the functions: `str_match_all()`, `str_split()` and `str_replace_all()`, respectively.
There is much else to say on the topic but rather than repeat what has been said elsewhere, we feel it is more efficient to direct the interested reader towards existing excellent resources for learning regex in R. We recommend reading, in order:
* The [Strings chapter](http://r4ds.had.co.nz/strings.html) of Grolemund and Wickham ([2016](#ref-grolemund_r_2016)).
* The **stringr** vignette (`vignette("stringr")`).
* A detailed tutorial on regex in base R (Sanchez [2013](#ref-sanchez_handling_2013)).
* For more advanced topics, reading the documentation of and [online articles](http://www.rexamine.com/blog/) about the **stringi** package, on which **stringr** depends.
#### Exercises
1. What are the three criteria of tidy data?
2. Load and look at subsets of these datasets. The first is the `pew` dataset we’ve been using already. The second reports the points that define, roughly, the geographical boundaries of different London boroughs. What is ‘untidy’ about each?
```
head(pew, 10)
#> # A tibble: 10 x 10
#> religion `<$10k` `$10--20k` `$20--30k` `$30--40k` `$40--50k` `$50--75k`
#> <chr> <int> <int> <int> <int> <int> <int>
#> 1 Agnostic 27 34 60 81 76 137
#> 2 Atheist 12 27 37 52 35 70
#> 3 Buddhist 27 21 30 34 33 58
#> 4 Catholic 418 617 732 670 638 1116
#> # … with 6 more rows, and 3 more variables: $75--100k <int>, $100--150k <int>,
#> # >150k <int>
data(lnd_geo_df)
head(lnd_geo_df, 10)
#> name_date population x y
#> 1 Bromley-2001 295535 544362 172379
#> 2 Bromley-2001 295535 549546 169911
#> 3 Bromley-2001 295535 539596 160796
#> 4 Bromley-2001 295535 533693 170730
#> 5 Bromley-2001 295535 533718 170814
#> 6 Bromley-2001 295535 534004 171442
#> 7 Bromley-2001 295535 541105 173356
#> 8 Bromley-2001 295535 544362 172379
#> 9 Richmond upon Thames-2001 172330 523605 176321
#> 10 Richmond upon Thames-2001 172330 521455 172362
```
3. Convert each of the above datasets into tidy form.
4. Consider the following strings of phone numbers and fruits from Wickham ([2010](#ref-wickham2010stringr)):
```
strings = c(" 219 733 8965", "329-293-8753 ", "banana", "595 794 7569",
"387 287 6718", "apple", "233.398.9187 ", "482 952 3315", "239 923 8115",
"842 566 4692", "Work: 579-499-7527", "$1000", "Home: 543.355.3679")
```
Write expressions in **stringr** and base R that return:
* A logical vector reporting whether or not each string contains a number.
* Complete words only, without extraneous non\-letter characters.
6\.4 Efficient data processing with dplyr
-----------------------------------------
After tidying your data, the next stage is generally data processing. This includes the creation of new data (for example, a new column that is some function of existing columns) and data analysis, the process of asking directed questions of the data and exporting the results in a user\-readable form.
Following the advice in Section [4\.4](workflow.html#package-selection), we have carefully selected an appropriate package for these tasks: **dplyr**, which roughly means ‘data frame pliers’. **dplyr** has a number of advantages over the base R and **data.table** approaches to data processing:
* **dplyr** is fast to run (due to its C\+\+ backend) and intuitive to type
* **dplyr** works well with tidy data, as described above
* **dplyr** works well with databases, providing efficiency gains on large datasets
Furthermore, **dplyr** is efficient to *learn* (see Chapter [10](learning.html#learning)). It has a small number of intuitively named functions, or ‘verbs’. These were partly inspired by SQL, one of the longest established languages for data analysis, which combines multiple simple functions (such as `SELECT` and `WHERE`, roughly analogous to `dplyr::select()` and `dplyr::filter()`) to create powerful analysis workflows. Likewise, **dplyr** functions were designed to be used together to solve a wide range of data processing challenges (see Table [6\.5](data-carpentry.html#tab:verbs)).
Table 6\.5: dplyr verb functions.
| dplyr function(s) | Description | Base R functions |
| --- | --- | --- |
| filter(), slice() | Subset rows by attribute (filter) or position (slice) | subset(), \[ |
| arrange() | Return data ordered by variable(s) | order() |
| select() | Subset columns | subset(), \[, \[\[ |
| rename() | Rename columns | colnames() |
| distinct() | Return unique rows | !duplicated() |
| mutate() | Create new variables (transmute drops existing variables) | transform(), \[\[ |
| summarise() | Collapse data into a single row | aggregate(), tapply() |
| sample\_n() | Return a sample of the data | sample() |
Unlike the base R analogues, **dplyr**’s data processing functions work in a consistent way. Each function takes a data frame object as its first argument and results in another data frame. Variables can be called directly without using the `$` operator. **dplyr** was designed to be used with the ‘pipe’ operator `%>%` provided by the **magrittr** package, allowing each data processing stage to be represented as a new line. This is illustrated in the code chunk below, which loads a tidy country level dataset of greenhouse gas emissions from the **efficient** package, and then identifies the countries with the greatest absolute growth in emissions from 1971 to 2012:
```
library("dplyr")
data("ghg_ems", package = "efficient")
top_table =
ghg_ems %>%
filter(!grepl("World|Europe", Country)) %>%
group_by(Country) %>%
summarise(Mean = mean(Transportation),
Growth = diff(range(Transportation))) %>%
top_n(3, Growth) %>%
arrange(desc(Growth))
```
The results, illustrated in Table [6\.6](data-carpentry.html#tab:speed), show that the USA has the highest average emissions and emissions growth in the transport sector, followed closely by China.
The aim of this code chunk is not for you to read and understand it immediately: it is to provide a taster of **dplyr**’s unique syntax, which is described in more detail throughout this section.
Table 6\.6: The top 3 countries in terms of average CO2 emissions from transport since 1971, and growth in transport emissions over that period (MTCO2e/yr).
| Country | Mean | Growth |
| --- | --- | --- |
| United States | 1462 | 709 |
| China | 214 | 656 |
| India | 85 | 170 |
Building on the ‘learning by doing’ ethic, the remainder of this section works through these functions to process and begin to analyse a dataset on economic equality provided by the World Bank. The input dataset can be loaded as follows:
```
# Load global inequality data
data(package = "efficient", wb_ineq)
```
**dplyr** is a large package and can be seen as a language in its own right. Following the ‘walk before you run’ principle, we’ll start simple, by filtering and aggregating rows.
### 6\.4\.1 Renaming columns
Renaming data columns is a common task that can make writing code faster by using short, intuitive names. The **dplyr** function `rename()` makes this easy.
Note in this code block that the variable name is surrounded by backticks (`` ` ``).
This allows R to refer to column names that are non\-standard.
Note also the syntax:
`rename()` takes the data frame as the first object and then creates new variables by specifying `new_variable_name = original_name`.
```
wb_ineq = rename(wb_ineq, code = `Country Code`)
```
To rename multiple columns the variable names are simply separated by commas.
`rename(x, x = X1, y = X2)` would rename variables `X1` and `X2` in the dataset `x`.
In base R the equivalent would be `names(x)[1:2] = c("x", "y")` or `setNames(x, c("x", "y"))`; note that `setNames()` replaces *all* of the names, so the latter is only equivalent when the data frame has exactly two columns.
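A minimal sketch of renaming multiple columns at once, using a throwaway two\-column data frame (the object `x` here is hypothetical):

```
x = tibble(X1 = 1:2, X2 = 3:4)
rename(x, x = X1, y = X2)
#> # A tibble: 2 x 2
#> x y
#> <int> <int>
#> 1 1 3
#> 2 2 4
```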
### 6\.4\.2 Changing column classes
The *class* of R objects is critical to performance.
If a class is incorrectly specified (e.g. if numbers are treated as factors or characters) this will lead to incorrect results. The class of all columns in a data frame can be queried using the function `str()` (short for display the **str**ucture of an object).[14](#fn14)
Visual inspection of the data (e.g. via `View(wb_ineq)`) clearly shows that all columns except for 1 to 4 (`Country`, `Country Code`, `Year` and `Year Code`) should be numeric.
The class of numeric variables can be altered one by one using `mutate()` as follows (which would set the `gini` column to be of class `numeric` if it weren’t already):[15](#fn15)
```
wb_ineq = mutate(wb_ineq, gini = as.numeric(gini))
```
However, the purpose of programming languages is to *automate* tasks and reduce typing.
The following code chunk ensures that the columns referenced by `cols_to_change` are `numeric`, using a single function call (`vars()` is a helper function to select variables; it also works with **dplyr** helpers such as `contains()`, which selects all columns containing a given text string):
```
cols_to_change = 5:9 # column ids to change
wb_ineq = mutate_at(wb_ineq, vars(cols_to_change), as.numeric)
#> Note: Using an external vector in selections is ambiguous.
#> ℹ Use `all_of(cols_to_change)` instead of `cols_to_change` to silence this message.
#> ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
#> This message is displayed once per session.
```
Another way to achieve the same result is to use `data.matrix()`, which converts the data frame to a numeric `matrix`:
```
cols_to_change = 5:9 # column ids to change
wb_ineq[cols_to_change] = data.matrix(wb_ineq[cols_to_change])
```
Each method (base R and **dplyr**) has its merits.
For readers new to R who plan to use other **tidyverse** packages we would provide a slight steer towards `mutate_at()` for its flexibility and expressive syntax.
Other methods for achieving the same result include the use of loops via `apply()` and `for()`.
These are shown in the chapter’s [source code](https://github.com/csgillespie/efficientR).
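For reference, a minimal sketch of the loop\-based approach, together with the `across()` idiom that supersedes `mutate_at()` (assuming dplyr version 1\.0 or later):

```
# base R loop: coerce each selected column in turn
for (i in cols_to_change) {
  wb_ineq[[i]] = as.numeric(wb_ineq[[i]])
}
# modern dplyr equivalent of the mutate_at() call above
wb_ineq = mutate(wb_ineq, across(all_of(cols_to_change), as.numeric))
```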
### 6\.4\.3 Filtering rows
**dplyr** offers an alternative way of filtering data, using `filter()`.
```
# Base R: wb_ineq[wb_ineq$Country == "Australia",]
aus2 = filter(wb_ineq, Country == "Australia")
```
`filter()` is slightly more flexible than `[`: `filter(wb_ineq, code == "AUS", Year == 1974)` works as well as `filter(wb_ineq, code == "AUS" & Year == 1974)`, and takes any number of conditions (see `?filter`). `filter()` is slightly faster than base R.[16](#fn16) By avoiding the `$` symbol, **dplyr** makes subsetting code concise and consistent with other **dplyr** functions. The first argument is a data frame and subsequent raw variable names can be treated as vector objects: a defining feature of **dplyr**. In the next section we’ll learn how this syntax can be used alongside the `%>%` ‘pipe’ command to write clear data manipulation commands.
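To test such claims on your own data, a minimal benchmarking sketch using the **microbenchmark** package is shown below (timings vary by machine and dataset, so no output is reproduced here):

```
library("microbenchmark")
# compare base R subsetting with dplyr's filter()
microbenchmark(
  base = wb_ineq[wb_ineq$Country == "Australia", ],
  dplyr = filter(wb_ineq, Country == "Australia")
)
```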
There are **dplyr** equivalents of many base R functions, but these usually work slightly differently. The **dplyr** equivalent of `aggregate()`, for example, is to use the grouping function `group_by()` in combination with the general purpose function `summarise()` (not to be confused with `summary()` in base R), as we shall see in Section [6\.4\.5](data-carpentry.html#data-aggregation).
### 6\.4\.4 Chaining operations
Another interesting feature of **dplyr** is its ability to chain operations together. This overcomes one of the aesthetic issues with R code: you can end up with very long commands with many functions nested inside each other to answer relatively simple questions. Combined with the `group_by()` function, pipes can help condense thousands of rows of data into something human readable. Here’s how you could use chaining to summarise average Gini indices per decade, for example:
```
wb_ineq %>%
select(Year, gini) %>%
mutate(decade = floor(as.numeric(Year) / 10) * 10) %>%
group_by(decade) %>%
summarise(mean(gini, na.rm = TRUE))
#> # A tibble: 6 x 2
#> decade `mean(gini, na.rm = TRUE)`
#> <dbl> <dbl>
#> 1 1970 40.1
#> 2 1980 37.8
#> 3 1990 42.0
#> 4 2000 40.5
#> # … with 2 more rows
```
Often the best way to learn is to try and break something, so try running the above commands with different **dplyr** verbs.
By way of explanation, this is what happened:
1. Only the columns `Year` and `gini` were selected, using `select()`.
2. A new variable, `decade`, was created, giving the decade of each year (e.g. 1989 becomes 1980).
3. This new variable was used to group the rows of the data frame by decade.
4. The mean value per decade was calculated, illustrating how average income inequality was greatest in the 1990s but has since decreased slightly.
Let’s ask another question to see how the **dplyr** chaining workflow can be used to answer questions interactively: What are the 5 most unequal years for countries containing the letter g? Here’s how chains can help organise the analysis needed to answer this question step\-by\-step:
```
wb_ineq %>%
filter(grepl("g", Country)) %>%
group_by(Year) %>%
summarise(gini = mean(gini, na.rm = TRUE)) %>%
arrange(desc(gini)) %>%
top_n(n = 5)
#> Selecting by gini
#> # A tibble: 5 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1980 46.8
#> 2 1993 46.0
#> 3 2013 44.6
#> 4 1981 43.6
#> # … with 1 more row
```
The above chain consists of five stages (plus the input data), each of which corresponds to a new line and **dplyr** function:
1. Filter the rows so that only the countries we’re interested in remain (any selection criteria could be used in place of `grepl("g", Country)`).
2. Group the output by year.
3. Summarise, for each year, the mean Gini index.
4. Arrange the results by average Gini index in descending order.
5. Select only the top 5 most unequal years.
To see why this method is preferable to the nested function approach, take a look at the latter. Even after indenting properly it looks terrible and is almost impossible to understand!
```
top_n(
arrange(
summarise(
group_by(
filter(wb_ineq, grepl("g", Country)),
Year),
gini = mean(gini, na.rm = TRUE)),
desc(gini)),
n = 5)
```
This section has provided only a taster of what is possible with **dplyr** and of why it makes sense from code\-writing and computational efficiency perspectives. For a more detailed account of data processing with R using this approach we recommend *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)).
#### Exercises
1. Try running each of the chaining examples above line\-by\-line, so the first two entries for the first example would look like this:
```
wb_ineq
#> # A tibble: 6,925 x 9
#> Country code Year `Year Code` top10 bot10 gini b40_cons gdp_percap
#> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Afghanistan AFG 1974 YR1974 NA NA NA NA NA
#> 2 Afghanistan AFG 1975 YR1975 NA NA NA NA NA
#> 3 Afghanistan AFG 1976 YR1976 NA NA NA NA NA
#> 4 Afghanistan AFG 1977 YR1977 NA NA NA NA NA
#> # … with 6,921 more rows
```
followed by:
```
wb_ineq %>%
select(Year, gini)
#> # A tibble: 6,925 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1974 NA
#> 2 1975 NA
#> 3 1976 NA
#> 4 1977 NA
#> # … with 6,921 more rows
```
Explain in your own words what changes each time.
2. Use chained **dplyr** functions to answer the following question: In which year did countries without an ‘a’ in their name have the lowest level of inequality?
### 6\.4\.5 Data aggregation
Data aggregation involves creating summaries of data based on a grouping variable, in a process that has been referred to as ‘split\-apply\-combine’. The end result usually has the same number of rows as there are groups. Because aggregation is a way of condensing datasets it can be a very useful technique for making sense of large datasets. The following code finds the number of unique countries (country being the grouping variable) from the `ghg_ems` dataset stored in the **efficient** package.
```
# data available online, from github.com/csgillespie/efficient_pkg
data(ghg_ems, package = "efficient")
names(ghg_ems)
#> [1] "Country" "Year" "Electricity" "Manufacturing"
#> [5] "Transportation" "Other" "Fugitive"
nrow(ghg_ems)
#> [1] 7896
length(unique(ghg_ems$Country))
#> [1] 188
```
Note that while there are almost \\(8000\\) rows, there are fewer than 200 countries: factors would have been a more space\-efficient way of storing the country data.
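A quick way to verify the space saving is `object.size()`; a minimal sketch (exact sizes depend on your R version, so no output is shown):

```
object.size(ghg_ems$Country)          # stored as a character vector
object.size(factor(ghg_ems$Country))  # stored as integer codes plus 188 levels
```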
To aggregate the dataset using the **dplyr** package, you divide the task in two: first *group* the dataset, then *summarise* it, as illustrated below.[17](#fn17)
```
library("dplyr")
group_by(ghg_ems, Country) %>%
summarise(mean_eco2 = mean(Electricity, na.rm = TRUE))
#> # A tibble: 188 x 2
#> Country mean_eco2
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 0.641
#> 3 Algeria 23.0
#> 4 Angola 0.791
#> # … with 184 more rows
```
The example above relates to a wider programming issue: how much work should one function do? The work could have been done with a single `aggregate()` call. However, the [Unix philosophy](http://www.catb.org/esr/writings/taoup/html/ch01s06.html) states that programs should “do one thing well”, which is how **dplyr**’s functions were designed. Shorter functions are easier to understand and debug. But having too many functions can also make your call stack confusing.
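For comparison, here is a sketch of the single `aggregate()` call alluded to above; note that the formula interface silently drops groups whose values are all `NA`, so the result is not identical to the **dplyr** output:

```
# one-call base R equivalent of group_by() + summarise()
aggregate(Electricity ~ Country, data = ghg_ems, FUN = mean)
```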
To reinforce the point, this operation is also performed below on the `wb_ineq` dataset:
```
countries = group_by(wb_ineq, Country)
summarise(countries, mean_gini = mean(gini, na.rm = TRUE))
#> # A tibble: 176 x 2
#> Country mean_gini
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 30.4
#> 3 Algeria 37.8
#> 4 Angola 50.6
#> # … with 172 more rows
```
Note that `summarise` is highly versatile, and can be used to return a customised range of summary statistics:
```
summarise(countries,
# number of rows per country
obs = n(),
med_t10 = median(top10, na.rm = TRUE),
# standard deviation
sdev = sd(gini, na.rm = TRUE),
# number with gini > 30
n30 = sum(gini > 30, na.rm = TRUE),
sdn30 = sd(gini[gini > 30], na.rm = TRUE),
# range
dif = max(gini, na.rm = TRUE) - min(gini, na.rm = TRUE)
)
#> Warning in max(gini, na.rm = TRUE): no non-missing arguments to max; returning -Inf
#> Warning in min(gini, na.rm = TRUE): no non-missing arguments to min; returning Inf
#> (these two warnings repeat for every country with no non-missing gini values)
#> # A tibble: 176 x 7
#> Country obs med_t10 sdev n30 sdn30 dif
#> <chr> <int> <dbl> <dbl> <int> <dbl> <dbl>
#> 1 Afghanistan 40 NA NA 0 NA -Inf
#> 2 Albania 40 24.4 1.25 3 0.364 2.78
#> 3 Algeria 40 29.8 3.44 2 3.44 4.86
#> 4 Angola 40 38.6 11.3 2 11.3 16.0
#> # … with 172 more rows
```
To showcase the power of `summarise` used on a `grouped_df`, the above code reports a wide range of customised summary statistics *per country*:
* the number of rows in each country group
* the median proportion of income earned by the top 10%
* the standard deviation of Gini indices
* the number of years in which the Gini index was greater than 30
* the standard deviation of Gini index values over 30
* the range of Gini index values reported for each country.
#### Exercises
1. Refer back to the greenhouse gas emissions example at the outset of Section [6\.4](data-carpentry.html#dplyr), in which we found the top 3 countries in terms of emissions growth in the transport sector. a) Explain in words what is going on in each line. b) Try to find the top 3 countries in terms of emissions in 2012 \- how is the list different?
2. Explore **dplyr**’s documentation, starting with the introductory vignette, accessed by entering [`vignette("introduction")`](https://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html).
3. Test additional **dplyr** ‘verbs’ on the `wb_ineq` dataset. (More vignette names can be discovered by typing `vignette(package = "dplyr")`.)
### 6\.4\.6 Non standard evaluation
The final thing to say about **dplyr** relates not to the data but to the syntax of its functions. Note that many of the arguments in the code examples in this section are provided as bare variable names, not surrounded by quote marks (e.g. `Country` rather than `"Country"`). This is called non\-standard evaluation (NSE) (see `vignette("nse")`). NSE is used deliberately, with the aim of making the functions more efficient for interactive use: it reduces typing and allows autocompletion in RStudio.
This is fine when using R interactively. But when you’d like to use R non\-interactively, code is generally more robust using standard evaluation: it minimises the chance of creating obscure scope\-related bugs. Using standard evaluation also avoids having to declare global variables if you include the code in a package. To overcome this, the concept of ‘tidy evaluation’ was developed and implemented in the package **rlang** (part of the tidyverse), which provides functions to control when symbols are evaluated and when they are treated as text strings. Without going into detail, the code below demonstrates how tidy evaluation works (see the [`tidy-evaluation`](https://cran.r-project.org/web/packages/rlang/vignettes/tidy-evaluation.html) vignette and [`Programming-with-dplyr`](https://cran.r-project.org/web/packages/dplyr/vignettes/programming.html) for further information):
```
library(rlang)
# 1: Default NSE function
group_by(cars, speed = cut(speed, c(0, 10, 100))) %>%
summarise(mean_dist = mean(dist))
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
# 2: Evaluation from character string
group_by(cars, speed = !!parse_quosure("cut(speed, c(0, 10, 100))")) %>%
summarise(mean_dist = !!parse_quosure("mean(dist)"))
#> Warning: `parse_quosure()` is deprecated as of rlang 0.2.0.
#> Please use `parse_quo()` instead.
#> This warning is displayed once per session.
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
# 3: Using !! to evaluate 'quosures' when appropriate
q1 = quo(cut(speed, c(0, 10, 100)))
q2 = quo(mean(dist))
group_by(cars, speed = !!q1) %>%
summarise(mean_dist = !!q2)
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
```
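In more recent versions of **rlang** and **dplyr** the ‘embrace’ operator `{{ }}` provides a simpler interface to tidy evaluation. As a minimal sketch (the function `mean_by()` is hypothetical, and assumes dplyr 1\.0 or later), it forwards bare column names into grouped summaries:

```
# a reusable grouped-mean function; {{ }} forwards bare column names
mean_by = function(data, group, var) {
  data %>%
    group_by({{ group }}) %>%
    summarise(mean = mean({{ var }}, na.rm = TRUE))
}
mean_by(wb_ineq, Country, gini) # reproduces the Section 6.4.5 result
#> # A tibble: 176 x 2
#> Country mean
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 30.4
#> 3 Algeria 37.8
#> 4 Angola 50.6
#> # … with 172 more rows
```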
### 6\.4\.1 Renaming columns
Renaming data columns is a common task that can make writing code faster by using short, intuitive names. The **dplyr** function `rename()` makes this easy.
Note in this code block the variable name is surrounded by back\-quotes (`\`).
This allows R to refer to column names that are non\-standard.
Note also the syntax:
`rename()` takes the data frame as the first object and then creates new variables by specifying `new_variable_name = original_name`.
```
wb_ineq = rename(wb_ineq, code = `Country Code`)
```
To rename multiple columns the variable names are simply separated by commas.
`rename(x, x = X1, y = X2)` would rename variables `X1` and `X2` in the dataset `x`.
In base R the equivalent function would be `names(x)[1:2] = c("x", "y")` or `setNames(x, c("x", "y"))`, assuming we were dealing with the first and second columns.
### 6\.4\.2 Changing column classes
The *class* of R objects is critical to performance.
If a class is incorrectly specified (e.g. if numbers are treated as factors or characters) this will lead to incorrect results. The class of all columns in a data frame can be queried using the function `str()` (short for display the **str**ucture of an object).[14](#fn14)
Visual inspection of the data (e.g. via `View(wb_ineq)`) clearly shows that all columns except for 1 to 4 (`Country`, `Country Code`, `Year` and `Year Code`) should be numeric.
The class of numeric variables can be altered one\-by one using `mutate()` as follows (which would set the `gini` column to be of class `numeric` if it weren’t already):[15](#fn15)
```
wb_ineq = mutate(wb_ineq, gini = as.numeric(gini))
```
However the purpose of programming languages is to *automate* tasks and reduce typing.
The following code chunk ensures the numeric variables in the `cols_to_change` object are `numeric` using the same function (`vars()` is a helper function to select variables and also words with **dplyr** functions such as `contains()` which select all columns containing a given text string):
```
cols_to_change = 5:9 # column ids to change
wb_ineq = mutate_at(wb_ineq, vars(cols_to_change), as.numeric)
#> Note: Using an external vector in selections is ambiguous.
#> ℹ Use `all_of(cols_to_change)` instead of `cols_to_change` to silence this message.
#> ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
#> This message is displayed once per session.
```
Another way to acheive the same result is to use `data.matrix()`, which converts the data frame to a numeric `matrix`:
```
cols_to_change = 5:9 # column ids to change
wb_ineq[cols_to_change] = data.matrix(wb_ineq[cols_to_change])
```
Each method (base R and **dplyr**) has its merits.
For readers new to R who plan to use other **tidyverse** packages we would provide a slight steer towards `mutate_at()` for its flexibility and expressive syntax.
Other methods for acheiving the same result include the use of loops via `apply()` and `for()`.
These are shown in the chapter’s [source code](https://github.com/csgillespie/efficientR).
### 6\.4\.3 Filtering rows
**dplyr** offers an alternative way of filtering data, using `filter()`.
```
# Base R: wb_ineq[wb_ineq$Country == "Australia",]
aus2 = filter(wb_ineq, Country == "Australia")
```
`filter()` is slightly more flexible than `[`: `filter(wb_ineq, code == "AUS", Year == 1974)` works as well as `filter(wb_ineq, code == "AUS" & Year == 1974)`, and takes any number of conditions (see `?filter`). `filter()` is slightly faster than base R.[16](#fn16) By avoiding the `$` symbol, **dplyr** makes subsetting code concise and consistent with other **dplyr** functions. The first argument is a data frame and subsequent raw variable names can be treated as vector objects: a defining feature of **dplyr**. In the next section we’ll learn how this syntax can be used alongside the `%>%` ‘pipe’ command to write clear data manipulation commands.
There are **dplyr** equivalents of many base R functions but these usually work slightly differently. The **dplyr** equivalent of `aggregate`, for example is to use the grouping function `group_by` in combination with the general purpose function `summarise` (not to be confused with `summary` in base R), as we shall see in Section [6\.4\.5](data-carpentry.html#data-aggregation).
### 6\.4\.4 Chaining operations
Another interesting feature of **dplyr** is its ability to chain operations together. This overcomes one of the aesthetic issues with R code: you can end\-up with very long commands with many functions nested inside each other to answer relatively simple questions. Combined with the `group_by()` function, pipes can help condense thousands of lines of data into something human readable. Here’s how you could use the chains to summarize average Gini indexes per decade, for example:
```
wb_ineq %>%
select(Year, gini) %>%
mutate(decade = floor(as.numeric(Year) / 10) * 10) %>%
group_by(decade) %>%
summarise(mean(gini, na.rm = TRUE))
#> # A tibble: 6 x 2
#> decade `mean(gini, na.rm = TRUE)`
#> <dbl> <dbl>
#> 1 1970 40.1
#> 2 1980 37.8
#> 3 1990 42.0
#> 4 2000 40.5
#> # … with 2 more rows
```
Often the best way to learn is to try and break something, so try running the above commands with different **dplyr** verbs.
By way of explanation, this is what happened:
1. Only the columns `Year` and `gini` were selected, using `select()`.
2. A new variable, `decade` was created, to show only the decade figures (e.g. 1989 becomes 1980\).
3. This new variable was used to group rows in the data frame with the same decade.
4. The mean value per decade was calculated, illustrating how average income inequality was greatest in 1990 but has since decreased slightly.
Let’s ask another question to see how the **dplyr** chaining workflow can be used to answer questions interactively: What are the 5 most unequal years for countries containing the letter g? Here’s how chains can help organise the analysis needed to answer this question step\-by\-step:
```
wb_ineq %>%
filter(grepl("g", Country)) %>%
group_by(Year) %>%
summarise(gini = mean(gini, na.rm = TRUE)) %>%
arrange(desc(gini)) %>%
top_n(n = 5)
#> Selecting by gini
#> # A tibble: 5 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1980 46.8
#> 2 1993 46.0
#> 3 2013 44.6
#> 4 1981 43.6
#> # … with 1 more row
```
The above function consists of 6 stages, each of which corresponds to a new line and **dplyr** function:
1. Filter\-out the countries we’re interested in (any selection criteria could be used in place of `grepl("g", Country)`).
2. Group the output by year.
3. Summarise, for each year, the mean Gini index.
4. Arrange the results by average Gini index
5. Select only the top 5 most unequal years.
To see why this method is preferable to the nested function approach, take a look at the latter. Even after indenting properly it looks terrible and is almost impossible to understand!
```
top_n(
arrange(
summarise(
group_by(
filter(wb_ineq, grepl("g", Country)),
Year),
gini = mean(gini, na.rm = TRUE)),
desc(gini)),
n = 5)
```
This section has provided only a taster of what is possible **dplyr** and why it makes sense from code writing and computational efficiency perspectives. For a more detailed account of data processing with R using this approach we recommend *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)).
#### Exercises
1. Try running each of the chaining examples above line\-by\-line, so the first two entries for the first example would look like this:
```
wb_ineq
#> # A tibble: 6,925 x 9
#> Country code Year `Year Code` top10 bot10 gini b40_cons gdp_percap
#> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Afghanistan AFG 1974 YR1974 NA NA NA NA NA
#> 2 Afghanistan AFG 1975 YR1975 NA NA NA NA NA
#> 3 Afghanistan AFG 1976 YR1976 NA NA NA NA NA
#> 4 Afghanistan AFG 1977 YR1977 NA NA NA NA NA
#> # … with 6,921 more rows
```
followed by:
```
wb_ineq %>%
select(Year, gini)
#> # A tibble: 6,925 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1974 NA
#> 2 1975 NA
#> 3 1976 NA
#> 4 1977 NA
#> # … with 6,921 more rows
```
Explain in your own words what changes each time.
2. Use chained **dplyr** functions to answer the following question: In which year did countries without an ‘a’ in their name have the lowest level of inequality?
#### Exercises
1. Try running each of the chaining examples above line\-by\-line, so the first two entries for the first example would look like this:
```
wb_ineq
#> # A tibble: 6,925 x 9
#> Country code Year `Year Code` top10 bot10 gini b40_cons gdp_percap
#> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Afghanistan AFG 1974 YR1974 NA NA NA NA NA
#> 2 Afghanistan AFG 1975 YR1975 NA NA NA NA NA
#> 3 Afghanistan AFG 1976 YR1976 NA NA NA NA NA
#> 4 Afghanistan AFG 1977 YR1977 NA NA NA NA NA
#> # … with 6,921 more rows
```
followed by:
```
wb_ineq %>%
select(Year, gini)
#> # A tibble: 6,925 x 2
#> Year gini
#> <chr> <dbl>
#> 1 1974 NA
#> 2 1975 NA
#> 3 1976 NA
#> 4 1977 NA
#> # … with 6,921 more rows
```
Explain in your own words what changes each time.
2. Use chained **dplyr** functions to answer the following question: In which year did countries without an ‘a’ in their name have the lowest level of inequality?
### 6\.4\.5 Data aggregation
Data aggregation involves creating summaries of data based on a grouping variable, in a process that has been referred to as ‘split\-apply\-combine’. The end result usually has the same number of rows as there are groups. Because aggregation is a way of condensing datasets it can be a very useful technique for making sense of large datasets. The following code finds the number of unique countries (country being the grouping variable) from the `ghg_ems` dataset stored in the **efficient** package.
```
# data available online, from github.com/csgillespie/efficient_pkg
data(ghg_ems, package = "efficient")
names(ghg_ems)
#> [1] "Country" "Year" "Electricity" "Manufacturing"
#> [5] "Transportation" "Other" "Fugitive"
nrow(ghg_ems)
#> [1] 7896
length(unique(ghg_ems$Country))
#> [1] 188
```
Note that while there are almost \\(8000\\) rows, there are fewer than 200 countries: factors would have been a more space efficient way of storing the countries data.
To aggregate the dataset using **dplyr** package, you divide the task in two: to *group* the dataset first and then to summarise, as illustrated below.[17](#fn17)
```
library("dplyr")
group_by(ghg_ems, Country) %>%
summarise(mean_eco2 = mean(Electricity, na.rm = TRUE))
#> # A tibble: 188 x 2
#> Country mean_eco2
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 0.641
#> 3 Algeria 23.0
#> 4 Angola 0.791
#> # … with 184 more rows
```
The example above relates to a wider programming issue: how much work should one function do? The work could have been done with a single `aggregate()` call. However, the [Unix philosophy](http://www.catb.org/esr/writings/taoup/html/ch01s06.html) states that programs should “do one thing well”, which is how **dplyr**’s functions were designed. Shorter functions are easier to understand and debug. But having too many functions can also make your call stack confusing.
To reinforce the point, this operation is also performed below on the `wb_ineq` dataset:
```
countries = group_by(wb_ineq, Country)
summarise(countries, mean_gini = mean(gini, na.rm = TRUE))
#> # A tibble: 176 x 2
#> Country mean_gini
#> <chr> <dbl>
#> 1 Afghanistan NaN
#> 2 Albania 30.4
#> 3 Algeria 37.8
#> 4 Angola 50.6
#> # … with 172 more rows
```
Note that `summarise` is highly versatile, and can be used to return a customised range of summary statistics:
```
summarise(countries,
# number of rows per country
obs = n(),
med_t10 = median(top10, na.rm = TRUE),
# standard deviation
sdev = sd(gini, na.rm = TRUE),
# number with gini > 30
n30 = sum(gini > 30, na.rm = TRUE),
sdn30 = sd(gini[gini > 30], na.rm = TRUE),
# range
dif = max(gini, na.rm = TRUE) - min(gini, na.rm = TRUE)
)
#> Warning in max(gini, na.rm = TRUE): no non-missing arguments to max; returning -Inf
#> Warning in min(gini, na.rm = TRUE): no non-missing arguments to min; returning Inf
#> (the two warnings above are repeated for each country with no non-missing gini values)
#> # A tibble: 176 x 7
#> Country obs med_t10 sdev n30 sdn30 dif
#> <chr> <int> <dbl> <dbl> <int> <dbl> <dbl>
#> 1 Afghanistan 40 NA NA 0 NA -Inf
#> 2 Albania 40 24.4 1.25 3 0.364 2.78
#> 3 Algeria 40 29.8 3.44 2 3.44 4.86
#> 4 Angola 40 38.6 11.3 2 11.3 16.0
#> # … with 172 more rows
```
To showcase the power of `summarise` used on a `grouped_df`, the above code reports a wide range of customised summary statistics *per country*:
* the number of rows in each country group
* the median proportion of income earned by the top 10%
* the standard deviation of Gini indices
* the number of years in which the Gini index was greater than 30
* the standard deviation of Gini index values over 30
* the range of Gini index values reported for each country.
#### Exercises
1. Refer back to the greenhouse gas emissions example at the outset of section [6\.4](data-carpentry.html#dplyr), in which we found the top 3 countries in terms of emissions growth in the transport sector. a) Explain in words what is going on in each line. b) Try to find the top 3 countries in terms of emissions in 2012 \- how is the list different?
2. Explore **dplyr**’s documentation, starting with the introductory vignette, accessed by entering [`vignette("introduction")`](https://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html).
3. Test additional **dplyr** ‘verbs’ on the `wb_ineq` dataset. (More vignette names can be discovered by typing `vignette(package = "dplyr")`.)
### 6\.4\.6 Non standard evaluation
The final thing to say about **dplyr** relates not to the data but to the syntax of the functions. Note that many of the arguments in the code examples in this section are provided as bare names: they are variable names, not surrounded by quote marks (e.g. `Country` rather than `"Country"`). This is called non\-standard evaluation (NSE) (see `vignette("nse")`). NSE was used deliberately, with the aim of making the functions more efficient for interactive use. NSE reduces typing and allows autocompletion in RStudio.
This is fine when using R interactively. But when you’d like to use R non\-interactively, code is generally more robust using standard evaluation: it minimises the chance of creating obscure scope\-related bugs. Using standard evaluation also avoids having to declare global variables if you include the code in a package. To overcome this limitation, the concept of ‘tidy evaluation’ was developed and implemented in the package **rlang** (part of the tidyverse), which provides functions to control when symbols are evaluated and when they are treated as text strings. Without going into detail, the code below demonstrates how tidy evaluation works (see the [`tidy-evaluation`](https://cran.r-project.org/web/packages/rlang/vignettes/tidy-evaluation.html) vignette and [`Programming-with-dplyr`](https://cran.r-project.org/web/packages/dplyr/vignettes/programming.html) for further information):
```
library(rlang)
# 1: Default NSE function
group_by(cars, speed = cut(speed, c(0, 10, 100))) %>%
summarise(mean_dist = mean(dist))
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
# 2: Evaluation from character string
group_by(cars, speed = !!parse_quosure("cut(speed, c(0, 10, 100))")) %>%
summarise(mean_dist = !!parse_quosure("mean(dist)"))
#> Warning: `parse_quosure()` is deprecated as of rlang 0.2.0.
#> Please use `parse_quo()` instead.
#> This warning is displayed once per session.
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
# 3: Using !! to evaluate 'quosures' when appropriate
q1 = quo(cut(speed, c(0, 10, 100)))
q2 = quo(mean(dist))
group_by(cars, speed = !!q1) %>%
summarise(mean_dist = !!q2)
#> # A tibble: 2 x 2
#> speed mean_dist
#> <fct> <dbl>
#> 1 (0,10] 15.8
#> 2 (10,100] 49.0
```
6\.5 Combining datasets
-----------------------
The usefulness of a dataset can sometimes be greatly enhanced by combining it with other data. If we could merge the global `ghg_ems` dataset with geographic data, for example, we could visualise the spatial distribution of climate pollution. For the purposes of this section we join `ghg_ems` to the `world` data provided by **ggmap** to illustrate the concepts and methods of data *joining* (also referred to as merging).
```
library("ggmap")
world = map_data("world")
names(world)
#> [1] "long" "lat" "group" "order" "region" "subregion"
```
Visually compare this new `world` dataset with `ghg_ems` (e.g. via `View(world); View(ghg_ems)`). It is clear that the column `region` in the former contains the same information as `Country` in the latter. This will be the *joining variable*; renaming it in `world` will make the join more efficient.
```
world = rename(world, Country = region)
ghg_ems$All = rowSums(ghg_ems[3:7])
```
Ensure that both joining variables have the same class (combining `character` and `factor` columns can cause havoc).
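Once the joining variable is shared and the country names are reconciled (the topic of the rest of this section), the join itself is a one\-liner. A minimal sketch, assuming a left join on `Country` is what is wanted:
```
library("dplyr")
world_ems = left_join(world, ghg_ems, by = "Country")
```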
How large is the overlap between `ghg_ems$Country` and `world$Country`? We can find out using the `%in%` operator, which tests, for each element of one vector, whether it has a match in another vector. Specifically, we will find out how many *unique* country names from `ghg_ems` are present in the `world` dataset:
```
unique_countries_ghg_ems = unique(ghg_ems$Country)
unique_countries_world = unique(world$Country)
matched = unique_countries_ghg_ems %in% unique_countries_world
table(matched)
#> matched
#> FALSE TRUE
#> 20 168
```
This comparison exercise has been fruitful: most of the countries in the `ghg_ems` dataset exist in the `world` dataset. But what about the 20 country names that do not match? We can identify these as follows:
```
(unmatched_countries_ghg_ems = unique_countries_ghg_ems[!matched])
#> [1] "Antigua & Barbuda" "Bahamas, The"
#> [3] "Bosnia & Herzegovina" "Congo, Dem. Rep."
#> [5] "Congo, Rep." "Cote d'Ivoire"
#> [7] "European Union (15)" "European Union (28)"
#> [9] "Gambia, The" "Korea, Dem. Rep. (North)"
#> [11] "Korea, Rep. (South)" "Macedonia, FYR"
#> [13] "Russian Federation" "Saint Kitts & Nevis"
#> [15] "Saint Vincent & Grenadines" "Sao Tome & Principe"
#> [17] "Trinidad & Tobago" "United Kingdom"
#> [19] "United States" "World"
```
It is clear from the output that some of the non\-matches (e.g. the European Union) are not countries at all. However, others, such as ‘Gambia, The’ and the United States clearly should have matches. *Fuzzy matching* can help find which countries *do* match, as illustrated with the first non\-matching country below:
```
(unmatched_country = unmatched_countries_ghg_ems[1])
#> [1] "Antigua & Barbuda"
unmatched_world_selection = agrep(pattern = unmatched_country,
unique_countries_world,
max.distance = 10)
unmatched_world_countries = unique_countries_world[unmatched_world_selection]
```
What just happened? We verified that the first non\-matching country in the `ghg_ems` dataset was not in the `world` country names. So we used the more powerful `agrep()` to search for fuzzy matches (with the `max.distance` argument set to `10`). The results show that the country `Antigua & Barbuda` from the `ghg_ems` data matches *two* countries in the `world` dataset. We can update the names in the dataset we are joining to accordingly:
```
world$Country[world$Country %in% unmatched_world_countries] =
unmatched_countries_ghg_ems[1]
```
The above code reduces the number of country names in the `world` dataset by replacing *both* “Antigua” and “Barbuda” with “Antigua \& Barbuda”. This would not work the other way around: how would one know whether to change “Antigua \& Barbuda” to “Antigua” or to “Barbuda”?
Thus fuzzy matching is still a laborious process that must be complemented by human judgement. It takes a human to know for sure that `United States` is represented as `USA` in the `world` dataset, without risking false matches via `agrep`.
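Such a manual correction might look as follows (a sketch; the replacement value `"USA"` follows the `world` dataset’s naming convention):
```
ghg_ems$Country[ghg_ems$Country == "United States"] = "USA"
```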
6\.6 Working with databases
---------------------------
Instead of loading all the data into RAM, as R does, databases query data from the hard\-disk. This can allow a subset of a very large dataset to be defined and read into R quickly, without having to load it first.
R can connect to databases in a number of ways, which are briefly touched on below. Databases are a large subject area undergoing rapid evolution. Rather than aiming at comprehensive coverage, we will provide pointers to developments that enable efficient access to a wide range of database types. An up\-to\-date history of R’s interfaces to databases can be found in the README of the [**DBI** package](https://cran.r-project.org/web/packages/DBI/readme/README.html), which provides a common interface and set of classes for driver packages (such as **RSQLite**).
**RODBC** is a veteran package for querying external databases from within R, using the Open Database Connectivity (ODBC) API. The functionality of **RODBC** is described in the package’s vignette (see `vignette("RODBC")`) and nowadays its main use is to provide an R interface to
SQL Server databases which lack a **DBI** interface.
The **DBI** package is a unified framework for accessing databases that allows other drivers to be added as modular packages. Thus new packages that build on **DBI** (**RMySQL**, **RPostgreSQL**, and **RSQLite**) can be seen partly as replacements for **RODBC** (see `vignette("backend")` for more on how **DBI** drivers work). Because the **DBI** syntax applies to a wide range of database types we use it here with a worked example.
Note that `src_sqlite()` was deprecated in **dplyr** 1.0.0: the recommended approach is now to use `tbl()` directly with a database connection.
Imagine you have access to a database that contains the `ghg_ems` data set.
```
# Connect to a database driver
library("RSQLite")
con = dbConnect(SQLite(), dbname = ghg_db) # Also username & password arguments
dbListTables(con)
rs = dbSendQuery(con, "SELECT * FROM `ghg_ems` WHERE (`Country` != 'World')")
df_head = dbFetch(rs, n = 6) # extract first 6 rows
```
The above code chunk shows how the function `dbConnect` connects to an external database, in this case a SQLite database. The `username` and `password` arguments are used to establish the connection. Next we query which tables are available with `dbListTables`, query the database (without yet extracting the results to R) with `dbSendQuery` and, finally, load the results into R with `dbFetch`.
Be sure never to release your password by entering it directly into the command. Instead, we recommend saving sensitive information such as database passwords and API keys in `.Renviron`, described in Chapter 2\. Assuming you had saved your password as the environment variable `PSWRD`, you could enter `pwd = Sys.getenv("PSWRD")` to minimise the risk of exposing your password through accidentally releasing the code or your session history.
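For example (a sketch; `PSWRD` is the hypothetical variable name used above):
```
# In ~/.Renviron (restart R for the change to take effect):
# PSWRD=my-secret-password

# In your R code:
pwd = Sys.getenv("PSWRD")
```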
Recently there has been a shift to the ‘noSQL’ approach for storing large datasets.
This is illustrated by the emergence and uptake of software such as MongoDB and Apache Cassandra, which have R interfaces via the packages [mongolite](https://cran.r-project.org/web/packages/mongolite/index.html) and [RJDBC](https://cran.r-project.org/web/packages/RJDBC/index.html) respectively (the latter can connect to Apache Cassandra data stores and any source compliant with the Java Database Connectivity (JDBC) API).
MonetDB is a recent alternative to relational and noSQL approaches which offers substantial efficiency advantages for handling large datasets (Kersten et al. [2011](#ref-kersten2011researcher)).
A tutorial on the [MonetDB website](https://www.monetdb.org/Documentation/UserGuide/MonetDB-R) provides an excellent introduction to handling databases from within R.
There are many wider considerations in relation to databases that we will not cover here: who will manage and maintain the database? How will it be backed up locally (local copies should be stored to reduce reliance on the network)? What is the appropriate database for your project? These issues can have major implications for efficiency, especially on large, data\-intensive projects. However, we will not cover them here because it is a fast\-moving field. Instead, we direct the interested reader towards further resources on the subject, including:
* The website for **[sparklyr](http://spark.rstudio.com/)**, a recent package for efficiently interfacing with the Apache Spark stack.
* [db\-engines.com/en/](http://db-engines.com/en/): a website comparing the relative merits of different databases.
* The `databases` vignette from the **dplyr** package.
* [Getting started with MongoDB in R](https://cran.r-project.org/web/packages/mongolite/vignettes/intro.html), an introductory vignette on non\-relational databases and map reduce from the **mongolite** package.
### 6\.6\.1 Databases and **dplyr**
To access a database in R via **dplyr**, one must use one of the `src_` functions to create a source. Continuing with the SQLite example above, one would create a `tbl` object, that can be queried by **dplyr** as follows:
```
library("dplyr")
ghg_db = src_sqlite(ghg_db)
ghg_tbl = tbl(ghg_db, "ghg_ems")
```
The `ghg_tbl` object can then be queried in a similar way to a standard data frame. For example, suppose we wish to
filter by `Country`. Then we use the `filter` function as before:
```
rm_world = ghg_tbl %>%
filter(Country != "World")
```
In the above code, **dplyr** has actually generated the necessary SQL command, which can be examined using `explain(rm_world)`.
When working with databases, **dplyr** uses lazy evaluation: the data is only fetched at the last moment when it’s needed. The SQL command associated with `rm_world` hasn’t yet been executed; this is why `tail(rm_world)` doesn’t work. By using lazy evaluation, **dplyr** is more efficient at handling large data structures since it avoids unnecessary copying.
When you want your SQL command to be executed, use `collect(rm_world)`.
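For example (a sketch continuing the `rm_world` query defined above):
```
explain(rm_world)               # show the generated SQL without executing it
rm_world_df = collect(rm_world) # execute the query and return the result as a tibble
```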
The final stage when working with databases in R is to disconnect, e.g.:
```
dbDisconnect(conn = con)
```
#### Exercises
Follow the worked example below to create and query a database on land prices in the UK using **dplyr** as a front end to an SQLite database.
The first stage is to read\-in the data:
```
# See help("land_df", package="efficient") for details
data(land_df, package = "efficient")
```
The next stage is to create an SQLite database to hold the data:
```
# install.packages("RSQLite") # Requires RSQLite package
my_db = src_sqlite("land.sqlite3", create = TRUE)
land_sqlite = copy_to(my_db, land_df, indexes = list("postcode", "price"))
```
What class is the new object `land_sqlite`?
Why did we use the `indexes` argument?
From the above code we can see that we have created a `tbl`. This can be accessed using **dplyr** in the same way as any data frame can. Now we can query the data. You can use SQL code to query the database directly or use standard **dplyr** verbs on the table.
```
# Method 1: using sql
tbl(my_db, sql('SELECT "price", "postcode", "old/new" FROM land_df'))
#> Source: query [?? x 3]
#> Database: sqlite 3.8.6 [land.sqlite3]
#>
#> price postcode `old/new`
#> <int> <chr> <chr>
#> 1 84000 CW9 5EU N
#> 2 123500 TR13 8JH N
#> 3 217950 PL33 9DL N
#> 4 147000 EX39 5XT N
#> # ... with more rows
```
How would you perform the same query using `select()`? Try it to see if you get the same result (hint: use backticks for the `old/new` variable name).
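For reference, a possible `select()` version of the query is sketched below (`land_sqlite` is the tbl created above):
```
# Method 2: using dplyr verbs
land_sqlite %>%
  select(price, postcode, `old/new`)
```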
6\.7 Data processing with data.table
------------------------------------
**data.table** is a mature package for fast data processing that presents an alternative to **dplyr**. There is some controversy about which is more appropriate for different
tasks.[18](#fn18)
Which is more efficient to some extent depends on personal preferences and what you are used to.
Both are powerful and efficient packages that take time to learn, so it is best to learn one and stick with it, rather than have the duality of using two for similar purposes. There are situations in which one works better than another: **dplyr** provides a more consistent and flexible interface (e.g. with its interface to databases, demonstrated in the previous section) so for most purposes we recommend learning **dplyr** first if you are new to both packages. **dplyr** can also be used to work with the `data.table` class used by the **data.table** package so you can get the best of both worlds.
**data.table** is faster than **dplyr** for some operations and offers some functionality unavailable in other packages, moreover it has a mature and advanced user community. **data.table** supports [rolling joins](https://www.r-bloggers.com/understanding-data-table-rolling-joins/) (which allow rows in one table to be selected based on proximity between shared variables (typically time) and [non\-equi joins](http://www.w3resource.com/sql/joins/perform-a-non-equi-join.php) (where join criteria can be inequalities rather than equal to).
This section provides a few examples to illustrate how **data.table** differs and (at the risk of inflaming the debate further) some benchmarks to explore which is more efficient. As emphasised throughout the book, efficient code writing is often more important than efficient execution on many everyday tasks so to some extent it’s a matter of preference.
The foundational object class of **data.table** is the `data.table`. Like **dplyr**’s `tbl_df`, **data.table**’s `data.table` objects behave in the same way as the base `data.frame` class. However, the **data.table** paradigm has some unique features that make it highly computationally efficient for many common tasks in data analysis. Building on the subsetting methods using `[` and `filter()` mentioned previously, we’ll see **data.table**’s unique approach to subsetting. Like base R, **data.table** uses square brackets but (unlike base R but like **dplyr**) uses non\-standard evaluation, so you need not refer to the object name inside the brackets:
```
library("data.table")
# data(wb_ineq) # from the efficient package
wb_ineq_dt = data.table(wb_ineq) # convert to data.table class
aus3a = wb_ineq_dt[Country == "Australia"]
```
Note that the square brackets do not need a comma to refer to rows with `data.table` objects: in base R you would write `wb_ineq[wb_ineq$Country == "Australia", ]`.
To boost performance, one can set ‘keys’, analogous to ‘primary keys’ in databases. These are ‘[supercharged rownames](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html)’ which order the table based on one or more variables. This allows a *binary search* algorithm to subset the rows of interest, which is much, much faster than the *vector scan* approach used in base R (see [`vignette("datatable-keys-fast-subset")`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html)). **data.table** uses the key values for subsetting by default so the variable does not need to be mentioned again. Instead, using keys, the search criteria are provided as a list (invoked below with the concise `.()` syntax, which is synonymous with `list()`).
```
setkey(wb_ineq_dt, Country)
aus3b = wb_ineq_dt[.("Australia")]
```
The result is the same, so why add the extra stage of setting the key? The reason is that this one\-off sorting operation can lead to substantial performance gains in situations where repeatedly subsetting rows on large datasets consumes a large proportion of computational time in your workflow. This is illustrated in Figure [6\.1](data-carpentry.html#fig:6-2), which compares 4 methods of subsetting incrementally larger versions of the `wb_ineq` dataset.
Figure 6\.1: Benchmark illustrating the performance gains to be expected for different dataset sizes.
Figure [6\.1](data-carpentry.html#fig:6-2) demonstrates that **data.table** is *much faster* than base R and **dplyr** at subsetting. As with using external packages to read in data (see Section [5\.3](input-output.html#fread)), the relative benefits of **data.table** improve with dataset size, approaching a \~70 fold improvement on base R and a \~50 fold improvement on **dplyr** as the dataset size reaches half a gigabyte. Interestingly, even the ‘non key’ implementation of **data.table**’s subset method is faster than the alternatives: this is because **data.table** creates a key internally by default before subsetting. The process of creating the key accounts for the \~10 fold speed\-up in cases where the key has been pre\-generated.
This section has introduced **data.table** as a complementary approach to base and **dplyr** methods for data processing. It offers performance gains due to its implementation in C and use of *keys* for subsetting tables. **data.table** offers much more, however, including: highly efficient data reshaping; dataset merging (also known as joining, as with `left_join` in **dplyr**); and grouping. For further information on **data.table**, we recommend reading the package’s [`datatable-intro`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-intro.html), [`datatable-reshape`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reshape.html) and [`datatable-reference-semantics`](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reference-semantics.html) vignettes.
7 Efficient optimisation
========================
[Donald Knuth](https://en.wikiquote.org/wiki/Donald_Knuth) is a legendary American computer scientist who developed a number of the key algorithms that we use today (see for example `?Random`). On the subject of optimisation he gives this advice:
> The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimisation is the root of all evil (or at least most of it) in programming.
Knuth’s point is that it is easy to undertake code optimisation inefficiently. When developing code, the causes of inefficiencies may shift so that what originally caused slowness at the beginning of your work may not be relevant at a later stage. This means that time spent optimising code early in the developmental stage could be wasted. Even worse, there is a trade\-off between code speed and code readability; we’ve already made this trade\-off once by using readable (but slow) R rather than verbose C code!
For this reason this chapter is covered towards the latter half of the book. The previous chapters deliberately focussed on concepts, packages and functions to increase efficiency. These are (relatively) easy ways of saving time that, once implemented, will work for future projects. Code optimisation, by contrast, is an advanced topic that should only be tackled once ‘low hanging fruit’ for efficiency gains have been taken.
In this chapter we assume that you already have well\-developed code that is mature conceptually and has been tried and tested. Now you want to optimize this code, but not prematurely. The chapter is organised as follows. First we begin with general hints and tips about optimising base R code. Code profiling can identify key bottlenecks in the code in need of optimisation, and this is covered in the next section. Section [7\.5](performance.html#performance-parallel) discusses how parallel code can overcome efficiency bottlenecks for some problems. The final section explains how `Rcpp` can be used to efficiently incorporate C\+\+ code into an R analysis.
### Prerequisites
In this chapter, some of the examples require a working C\+\+ compiler. The installation method depends on your operating system:
* Linux: A compiler should already be installed. Otherwise, install `r-base` and a compiler will be installed as a dependency.
* Macs: Install `Xcode`.
* Windows: Install [Rtools](http://cran.r-project.org/bin/windows/). Make sure you select the version that corresponds to your version of R.
The packages used in this chapter are
```
library("microbenchmark")
library("ggplot2movies")
library("profvis")
library("Rcpp")
```
7\.1 Top 5 tips for efficient performance
-----------------------------------------
1. Before you start to optimise your code, ensure you know where the bottleneck lies; use
a code profiler.
2. If the data in your data frame is all of the same type, consider converting it
to a matrix for a speed boost.
3. Use specialised row and column functions whenever possible.
4. The **parallel** package is ideal for Monte\-Carlo simulations.
5. For optimal performance, consider re\-writing key parts of your code in C\+\+.
7\.2 Code profiling
-------------------
Often you will have working code, but simply want it to run faster. In some cases it’s obvious where the bottleneck lies. Sometimes you will guess, relying on intuition. A drawback of this is that you could be wrong, and waste time optimising the wrong piece of code. To make slow code run faster, it is first important to determine where the slow code lives. This is the purpose of code profiling.
The `Rprof()` function is a built\-in tool for profiling the execution of R expressions. At regular time intervals, the profiler stops the R interpreter, records the current function call stack, and saves the information to a file. The results from `Rprof()` are stochastic. Each time we run a function in R, the conditions have changed. Hence, each time you profile your code, the result will be slightly different.
Unfortunately `Rprof()` is not user friendly. For this reason we recommend using the **profvis** package for profiling your R code.
**profvis** provides an interactive graphical interface for visualising code profiling data from `Rprof()`.
### 7\.2\.1 Getting started with **profvis**
After installing **profvis**, e.g. with `install.packages("profvis")`, it can be used to profile R code. As a simple example, we will use the `movies` data set, which contains information on around 60,000 movies. First, we’ll select movies that are classed as comedies, then plot the year the movie was made against the movie rating, and draw a local polynomial regression line to pick out the trend. The main function from the **profvis** package is `profvis()`, which profiles the code and creates an interactive HTML page of the results. The first argument of `profvis()` is the R expression of interest. This can be many lines long:
```
library("profvis")
profvis({
data(movies, package = "ggplot2movies") # Load data
movies = movies[movies$Comedy == 1,]
plot(movies$year, movies$rating)
model = loess(rating ~ year, data = movies) # loess regression line
j = order(movies$year)
lines(movies$year[j], model$fitted[j]) # Add line to the plot
})
```
The above code provides an interactive HTML page (figure [7\.1](performance.html#fig:7-1)). On the left side is the code and on the right is a flame graph (horizontal direction is time in milliseconds and the vertical direction is the call stack).
Figure 7\.1: Output from profvis
The left hand panel gives the amount of time spent on each line of code. It shows that the majority of time is spent calculating the `loess()` smoothing line. The bottom line of the right panel also highlights that most of the execution time is spent on the `loess()` function. Travelling up the call stack, we see that `loess()` calls `simpleLoess()`, which in turn calls the `.C()` function.
The conclusion from this graph is that if optimisation were required, it would make sense to focus on the `loess()` and possibly the `order()` function calls.
### 7\.2\.2 Example: Monopoly Simulation
Monopoly is a board game that originated in the United States over \\(100\\) years ago. The objective of the game is to go round the board and purchase squares (properties). If other players land on your properties, they have to pay a tax. The player with the most money at the end of the game wins. To make things more interesting, there are Chance and Community Chest squares. If you land on one of these squares, you draw a card, which may send you to other parts of the board. The other special square is Jail. One way of entering Jail is to roll three successive doubles.
The **efficient** package contains a Monte\-Carlo function for simulating a simplified game of monopoly. By keeping track of where a person lands when going round the board, we obtain an estimate of the probability of landing on a certain square. The entire code is around 100 lines long. In order for **profvis** to fully profile the code, the **efficient** package needs to be installed from source
```
devtools::install_github("csgillespie/efficient",
args = "--with-keep.source")
```
The function can then be profiled via the following code, which results in figure [7\.2](performance.html#fig:7-2).
```
library("efficient")
profvis(simulate_monopoly(10000))
```
Figure 7\.2: Code profiling for simulating the game of Monopoly.
The output from **profvis** shows that the vast majority of time (around 65%) is spent in the `move_square()` function.
In Monopoly, moving around the board is complicated by the fact that rolling a double (a pair of 1s, 2s, …, 6s) is special:
* Roll two dice (`total1`): `total_score = total1`;
* If you get a double, roll again (`total2`) and `total_score = total1 + total2`;
* If you get a double, roll again (`total3`) and `total_score = total1 + total2 + total3`;
* If roll three is a double, Go To Jail, otherwise move `total_score`.
The function `move_square()` captures this logic. Now we know where the code is slow, how can we speed up the computation? In the next section, we will discuss standard techniques that can be used. We will then revisit this example.
7\.3 Efficient base R
---------------------
In R there is often more than one way to solve a problem. In this section we highlight standard tricks or alternative methods that may improve performance.
### The `if()` vs `ifelse()` functions
`ifelse()` is a vectorised version of the standard control\-flow function `if(test) if_yes else if_no` that works as follows:
```
ifelse(test, if_yes, if_no)
```
In the above imaginary example, the return value is filled with elements from the `if_yes` and `if_no` arguments that are determined by whether the element of `test` is `TRUE` or `FALSE`. For example, suppose we have a vector of exam marks. `ifelse()` could be used to classify them as pass or fail:
```
marks = c(25, 55, 75)
ifelse(marks >= 40, "pass", "fail")
#> [1] "fail" "pass" "pass"
```
If the length of the `test` condition is equal to \\(1\\), i.e. `length(test) == 1`, then the standard conditional statement
```
mark = 25
if (mark >= 40) {
  "pass"
} else {
  "fail"
}
```
is around five to ten times faster than `ifelse(mark >= 40, "pass", "fail")`.
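A quick check of this claim could look as follows (a sketch using **microbenchmark**; exact timings will vary by machine):
```
library("microbenchmark")
mark = 25
microbenchmark(
  ifelse(mark >= 40, "pass", "fail"),
  if (mark >= 40) "pass" else "fail"
)
```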
An additional quirk of `ifelse()` is that although it is more *programmer efficient*, as it is more concise and understandable than multi\-line alternatives, it is often **less** *computationally efficient* than a more verbose alternative. This is illustrated with the following benchmark, in which the second option runs around 20 times faster, despite the results being identical:
```
marks = runif(n = 10e6, min = 30, max = 99)
system.time({
result1 = ifelse(marks >= 40, "pass", "fail")
})
#> user system elapsed
#> 2.459 0.177 2.635
system.time({
result2 = rep("fail", length(marks))
result2[marks >= 40] = "pass"
})
#> user system elapsed
#> 0.138 0.072 0.209
identical(result1, result2)
#> [1] TRUE
```
There is talk on the [R\-devel email](http://r.789695.n4.nabble.com/ifelse-woes-can-we-agree-on-a-ifelse2-td4723584.html) list of speeding up `ifelse()` in base R. A simple solution is to use the `if_else()` function from **dplyr**, although, as discussed in the [same thread](http://r.789695.n4.nabble.com/ifelse-woes-can-we-agree-on-a-ifelse2-td4723584.html), it cannot replace `ifelse()` in all situations. For our exam result test example, `if_else()` works fine and is much faster than base R’s implementation (although it is still around 3 times slower than the hard\-coded solution):
```
system.time({
result3 = dplyr::if_else(marks >= 40, "pass", "fail")
})
#> user system elapsed
#> 0.453 0.180 0.633
identical(result1, result3)
#> [1] TRUE
```
### Sorting and ordering
Sorting a vector is relatively quick; sorting a vector of length \\(10^7\\) takes around \\(0\.01\\) seconds. If you only sort a vector once at the top of a script, then don’t worry too much about this. However if you are sorting inside a loop, or in a shiny application, then it can be worthwhile thinking about how to optimise this operation.
There are currently three sorting algorithms, `c("shell", "quick", "radix")`, that can be specified in the `sort()` function, with `radix` being a new addition in R 3\.3\. The `radix` method (the non\-default option) is typically the most computationally efficient choice (it is around 20% faster when sorting a large vector of doubles).
Another useful trick is to partially order the results. For example, if you only want to display the top ten results, then use the `partial` argument, i.e. `sort(x, partial = 1:10)`. For very large vectors, this can give a three fold speed increase.
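Both options are shown below (a sketch; `method` and `partial` are documented arguments of `sort()`):
```
x = runif(1e7)
sort(x, method = "radix") # often ~20% faster for large vectors of doubles
sort(x, partial = 1:10)   # only positions 1 to 10 are guaranteed to be sorted
```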
### Reversing elements
The `rev()` function provides a reversed version of its argument. If you wish to sort in decreasing order, `sort(x, decreasing = TRUE)` is marginally (around 10%) faster than `rev(sort(x))`.
### Which indices are `TRUE`
To determine which indices of a vector or array are `TRUE`, we would typically use the `which()` function. If we want to find the index of just the minimum or maximum value, i.e. `which(x == min(x))`, then using the efficient `which.min()`/`which.max()` variants can be orders of magnitude faster (see figure [7\.3](performance.html#fig:7-3))
Figure 7\.3: Comparison of `which.min()` with `which()`.
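To illustrate the difference (a sketch; both expressions return the same index):
```
x = runif(1e6)
which(x == min(x)) # computes min(x), then scans the whole vector again
which.min(x)       # finds the index in a single pass
```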
### Converting factors to numerics
A factor is just a vector of integers with associated levels. Occasionally we want to convert a factor into its numerical equivalent. The most efficient way of doing this (especially for long factors) is:
```
as.numeric(levels(f))[f]
```
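A small worked example (the factor `f` here is a hypothetical stand\-in):
```
f = factor(c("10", "20", "20", "30"))
as.numeric(levels(f))[f]    # converts each level once: 10 20 20 30
as.numeric(as.character(f)) # converts every element: same result, slower
```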
### Logical AND and OR
The logical AND (`&`) and OR (`|`) operators are vectorised functions and are typically used during multi\-criteria subsetting operations. The code below, for example, returns `TRUE` for all elements of `x` less than \\(0\.4\\) or greater than \\(0\.6\\).
```
x < 0.4 | x > 0.6
#> [1] TRUE FALSE TRUE
```
When R executes the above comparison, it will **always** calculate `x > 0.6` regardless of the value of `x < 0.4`. In contrast, the non\-vectorised version, `&&`, only executes the second component if needed. This is efficient and leads to neater code, e.g.
```
# We only calculate the mean if data doesn't contain NAs
if(all(!is.na(x)) && mean(x) > 0) {
  # Do something
}
```
compared to
```
if (all(!is.na(x))) {
  if (mean(x) > 0) {
    # do something
  }
}
```
However care must be taken not to use `&&` or `||` on vectors since it only evaluates the first element of the vector, giving the incorrect answer. This is illustrated below:
```
x < 0.4 || x > 0.6
#> [1] TRUE
```
### Row and column operations
In data analysis we often want to apply a function to each column or row of a data set. For example, we might want to calculate the column or row sums. The `apply()` function makes this type of operation straightforward.
```
# Second argument: 1 -> rows. 2 -> columns
apply(data_set, 1, function_name)
```
There are optimised functions for calculating row and columns sums/means, `rowSums()`, `colSums()`, `rowMeans()` and `colMeans()` that should be used whenever possible. The package **matrixStats** contains many optimised row/col functions.
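The difference is easy to demonstrate (a sketch; exact timings will vary):
```
library("microbenchmark")
m = matrix(runif(1e6), ncol = 100)
microbenchmark(
  apply(m, 1, sum), # general-purpose, slower
  rowSums(m)        # optimised implementation, much faster
)
```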
### `is.na()` and `anyNA()`
To test whether a vector (or other object) contains missing values we use the `is.na()` function. Often we are interested in whether a vector contains *any* missing values. In this case, `anyNA(x)` is more efficient than `any(is.na(x))`.
### Matrices
A matrix is similar to a data frame: it is a two dimensional object and sub\-setting and other functions work in the same way. However all matrix elements must have the same type. Matrices tend to be used during statistical calculations. The `lm()` function, for example, internally converts the data to a matrix before calculating the results; any characters are thus recoded as numeric dummy variables.
Matrices are generally faster than data frames. For example, the datasets `ex_mat` and `ex_df` from the **efficient** package each have \\(1000\\) rows and \\(100\\) columns and contain the same random numbers. However selecting rows from the data frame is around \\(150\\) times slower than a matrix, as illustrated below:
```
data(ex_mat, ex_df, package="efficient")
microbenchmark(times=100, unit="ms", ex_mat[1,], ex_df[1,])
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> ex_mat[1, ] 0.0027 0.0034 0.0503 0.00424 0.00605 4.54 100
#> ex_df[1, ] 0.4855 0.4974 0.5549 0.50535 0.51790 5.25 100
```
Use the `data.matrix()` function to efficiently convert a data frame into a matrix.
### The integer data type
Numbers in R are usually stored in [double\-precision floating\-point format](https://goo.gl/ZA5R8a), which is described in detail in Braun and Murdoch ([2007](#ref-Braun2007)) and Goldberg ([1991](#ref-Goldberg1991)). The term ‘double’ refers to the fact that on \\(32\\) bit systems (for which the format was developed) two memory locations are used to store a single number. Each double\-precision number is accurate to around \\(17\\) decimal places.
When comparing floating point numbers we should be particularly careful, since `y = sqrt(2) * sqrt(2)` is not exactly \\(2\\); instead it is **almost** \\(2\\). Using `sprintf("%.17f", y)` will give you the true value of `y` (to 17 decimal places).
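To illustrate (a sketch; `all.equal()` compares with a small tolerance):
```
y = sqrt(2) * sqrt(2)
y == 2              # FALSE, due to floating point error
sprintf("%.17f", y) # "2.00000000000000044"
all.equal(y, 2)     # TRUE
```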
Integers are another numeric data type. Integers primarily exist to be passed to C or Fortran code. You do not need to create integers for most applications. However, they are occasionally used to optimise sub\-setting operations. When we subset a data frame or matrix, we are interacting with C code so we might be tempted to use integers with the purpose of speeding up our code. For example, if we look at the arguments for the `head` function
```
args(head.matrix)
#> function (x, n = 6L, ...)
#> NULL
```
Using the `:` operator automatically creates a vector of integers.
we see that the default argument for `n` is `6L` rather than simply `6` (the `L` is short for Literal and is used to create an
integer). This gives a tiny speed boost (around 0\.1 microseconds!)
```
x = runif(10)
microbenchmark(head(x, 6.0), head(x, 6L), times=1000000)
# Unit: microseconds
# expr min lq mean median uq max neval cld
# head(x, 6) 7.067 8.309 9.058 8.686 9.098 105266 1e+06 a
# head(x, 6L) 6.947 8.219 8.933 8.594 9.007 106307 1e+06 a
```
Since this function is ubiquitous, this low level optimisation is useful. In general, if you are worried about shaving microseconds off your R code run time, you should probably consider switching to another language.
Integers are more space efficient. The code below compares the size of an integer vector to a standard numeric vector:
```
pryr::object_size(1:10000)
#> Registered S3 method overwritten by 'pryr':
#> method from
#> print.bytes Rcpp
#> 40 kB
pryr::object_size(y = seq(1, 10000, by=1.0))
#> 80 kB
```
The results show that the integer version is roughly half the size. However, most mathematical operations will convert the integer vector into a standard numerical vector, as illustrated below:
```
is.integer(1L + 1)
#> [1] FALSE
```
Further storage savings can be obtained using the **bit** package.
### Sparse matrices
Another data structure that can be stored efficiently is a sparse matrix. This is simply a matrix where most of the elements are zero. Conversely, if most elements are non\-zero, the matrix is considered dense. The proportion of non\-zero elements is called the sparsity. Large sparse matrices often crop up when performing numerical calculations. Typically, our data isn’t sparse but the resulting data structures we create may be sparse. There are a number of techniques/methods used to store sparse matrices. Methods for creating sparse matrices can be found in the **Matrix** package[19](#fn19).
As an example, suppose we have a large matrix where the diagonal elements are non\-zero:
```
library("Matrix")
N = 10000
sp = sparseMatrix(1:N, 1:N, x = 1)
m = diag(1, N, N)
```
Both objects contain the same information, but the data is stored differently; since we have the same value multiple times in the matrix, we only need to store the value once and link it to multiple matrix locations. The matrix object stores each individual element, while the sparse matrix object only stores the locations and values of the non\-zero elements. This is much more memory efficient, as illustrated below:
```
pryr::object_size(sp)
#> 162 kB
pryr::object_size(m)
#> 800 MB
```
#### Exercises
1. Create a vector `x`. Benchmark `any(is.na(x))` against `anyNA()`. Do the results vary with the size of the vector?
2. Examine the following function definitions to give you an idea of how integers are used.
    * `tail.matrix()`
    * `lm()`.
3. Construct a matrix of integers and a matrix of numerics. Using `pryr::object_size()`, compare the objects.
4. How does the function `seq.int()`, which was used in the `tail.matrix()` function, differ to the standard `seq()` function?
A related memory saving idea is to replace `logical` vectors with vectors from the **bit** package which take up just over a 16th of the space (but you can’t use `NA`s).
7\.4 Example: Optimising the `move_square()` function
-----------------------------------------------------
Figure [7\.2](performance.html#fig:7-2) shows that our main bottleneck in simulating the game of Monopoly is the `move_square()` function. Within this function, we spend around 50% of the time creating a data frame, 20% of the time calculating row sums, and the remainder on comparison operations. This piece of code can be optimised fairly easily (while still retaining the same overall structure) by incorporating the following improvements[20](#fn20):
* Instead of using `seq(1, 6)` to generate the 6 possible values of rolling a dice, use `1:6`. Also, instead of a data frame, use a matrix and perform a single call to the `sample()` function
```
matrix(sample(1:6, 6, replace = TRUE), ncol = 2)
```
Overall, this revised line is around 25 times faster; most of the speed boost came from switching to a matrix.
* Using `rowSums()` instead of `apply()`. The `apply()` function call is already faster since we’ve switched from a data frame to a matrix (around 3 times). Using `rowSums()` with a matrix, gives a 10 fold speed boost.
* Use `&&` in the `if` condition; this is around twice as fast compared to `&`.
Impressively, the refactored code runs 20 times faster than the original code (compare figures [7\.2](performance.html#fig:7-2) and [7\.4](performance.html#fig:7-4)), with the main speed boost coming from using a matrix instead of a data frame.
Figure 7\.4: Code profiling of the optimised code.
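Putting these improvements together, the dice\-rolling logic might be sketched as follows (an illustration of the logic described above, not the **efficient** package’s actual `move_square()` source; returning `NA` to signal ‘go to Jail’ is an arbitrary convention here):
```
roll_dice = function() {
  # Three rolls of two dice, generated in a single sample() call
  rolls = matrix(sample(1:6, 6, replace = TRUE), ncol = 2)
  is_double = rolls[, 1] == rolls[, 2]
  if (is_double[1] && is_double[2] && is_double[3]) {
    return(NA) # three successive doubles: go to Jail
  }
  if (is_double[1] && is_double[2]) return(sum(rolls)) # all three rolls count
  if (is_double[1]) return(sum(rolls[1:2, ]))          # first two rolls count
  sum(rolls[1, ])                                      # first roll only
}
```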
#### Exercise
The `move_square()` function above uses a vectorised solution. Whenever we move, we always roll six dice, then examine the outcome and determine the number of doubles. However, this is potentially wasteful, since the probability of getting one double is \\(1/6\\) and two doubles is \\(1/36\\). Another method is to only roll additional dice if and when they are needed. Implement and time this solution.
7\.5 Parallel computing
-----------------------
This section provides a brief foray into the world of parallel computing. It only looks at methods for parallel computing on ‘shared memory systems’. This simply means computers in which multiple central processor unit (CPU) cores can access the same block of memory, i.e. most laptops and desktops sold worldwide. This section provides a flavour of what is possible; for a fuller account of parallel processing in R, see McCallum and Weston ([2011](#ref-mccallum2011)).
The foundational package for parallel computing in R is **parallel**. In recent R versions (since R 2\.14\.0\) this comes pre\-installed with base R. The **parallel** package must still be loaded before use, however, and you must determine the number of available cores manually, as illustrated below:
```
library("parallel")
no_of_cores = detectCores()
```
The value returned by `detectCores()` turns out to be operating system and chip maker dependent \- see `help("detectCores")` for full details. For most standard machines, `detectCores()` returns the number of simultaneous threads.
### 7\.5\.1 Parallel versions of apply functions
The most commonly used parallel applications are parallelised replacements of `lapply()`, `sapply()` and `apply()`. The parallel implementations and their arguments are shown below.
```
parLapply(cl, x, FUN, ...)
parApply(cl = NULL, X, MARGIN, FUN, ...)
parSapply(cl = NULL, X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)
```
The key point is that there is very little difference in arguments between `parLapply()` and `lapply()`, so the barrier to using (this form) of parallel computing is low, assuming you are proficient with the apply family of functions. Each function above has an argument `cl`, which is created by a `makeCluster()` call. This function, amongst other things, specifies the number of processors to use.
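A minimal end\-to\-end example (a sketch; two worker processes are assumed to be available):
```
library("parallel")
cl = makeCluster(2)                 # start two worker processes
parSapply(cl, 1:4, function(i) i^2)
#> [1]  1  4  9 16
stopCluster(cl)                     # release the workers when done
```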
### 7\.5\.2 Example: Snakes and Ladders
Parallel computing is ideal for Monte\-Carlo simulations. Each core independently simulates a realisation from the model. At the end, we gather up the results. In the **efficient** package, there is a function that simulates a single game of Snakes and Ladders \- `snakes_ladders()`[21](#fn21)
The following code illustrates how to simulate `N` games using `sapply()`:
```
N = 10^4
sapply(1:N, snakes_ladders)
```
Rewriting this code to make use of the **parallel** package is straightforward.
Begin by making a cluster object:
```
library("parallel")
cl = makeCluster(4)
```
Then simply swap `sapply()` for `parSapply()`:
```
parSapply(cl, 1:N, snakes_ladders)
```
Not stopping the clusters can lead to memory leaks,[22](#fn22) so it is important to stop the created clusters as illustrated below:
```
stopCluster(cl)
```
On a multi\-processor computer with four (or more) cores, if we achieved perfect parallelisation this could lead to a four\-fold speed\-up (we set `makeCluster(4)`). However, it is rare that we would achieve this optimal speed\-up since there is always communication between threads.
### 7\.5\.3 Exit functions with care
Always call `stopCluster()` to free resources when you finish with the cluster object. However, if the parallel code is within a function, it’s possible that the function ends as the result of an error, in which case `stopCluster()` would never be called.
The `on.exit()` function handles this problem with the minimum of fuss; regardless of how the function ends, `on.exit()` is always called. In the context of parallel programming we will have something similar to:
```
simulate = function(cores) {
cl = makeCluster(cores)
on.exit(stopCluster(cl))
# Do something
}
```
Another common use of `on.exit()` is with the `par()` function. If you use `par()` to change graphical parameters within a function, `on.exit()` ensures these parameters are reset to their previous value when the function ends.
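For example (a sketch; `par(no.readonly = TRUE)` captures the current settings):
```
plot_red = function(x) {
  old_par = par(no.readonly = TRUE) # save current graphical parameters
  on.exit(par(old_par))             # restore them however the function exits
  par(col = "red")
  plot(x)
}
```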
### 7\.5\.4 Parallel code under Linux \& OS X
If you are using Linux or OS X, then another way of running code in parallel is to use the `mclapply()` and `mcmapply()` functions
```
# This will run on Windows, but will only use 1 core
mclapply(1:N, snakes_ladders)
```
These functions use forking, that is creating a new copy of a process running on the CPU. However Windows does not support this low\-level functionality in the way that Linux does. The main advantage of `mclapply()` is that you don’t have to start and stop cluster objects. The big disadvantage is that on Windows machines, you are limited to a single core.
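On Linux or OS X the number of cores is set via the `mc.cores` argument (a sketch; `snakes_ladders()` is the **efficient** package function used above):
```
library("parallel")
N = 10^4
res = mclapply(1:N, snakes_ladders, mc.cores = 4)
```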
7\.6 Rcpp
---------
Sometimes R is just slow. You’ve tried every trick you know, and your code is still crawling along. At this point you could consider rewriting key parts of your code in another, faster language. R has interfaces to other languages via packages, such as **Rcpp**, **rJava**, **rPython** and recently **V8**. These provide R interfaces to C\+\+, Java, Python and JavaScript respectively. **Rcpp** is the most popular of these (figure [7\.5](performance.html#fig:7-5)).
Figure 7\.5: Downloads per day from the RStudio CRAN mirror of packages that provide R interfaces to other languages.
C\+\+ is a modern, fast and very well\-supported language with libraries for performing many kinds of computational tasks. **Rcpp** makes incorporating C\+\+ code into your R workflow easy.
Although C and Fortran routines can be called directly via the `.Call()` function, this is not recommended, as using `.Call()` can be a painful experience. **Rcpp** provides a friendly API (Application Program Interface) that lets you write high\-performance code, bypassing R’s tricky C API. Typical bottlenecks that C\+\+ addresses are loops and recursive functions.
C\+\+ is a powerful programming language about which entire books have been written. This section therefore is focussed on getting started and providing a flavour of what is possible. It is structured as follows. After ensuring that your computer is set\-up for **Rcpp**, we proceed by creating a simple C\+\+ function, to show how C\+\+ compares with R (Section [7\.6\.1](performance.html#simple-c)). This is converted into an R function using `cppFunction()` in Section [7\.6\.2](performance.html#cppfunction). The remainder of the chapter explains C\+\+ data types (Section [7\.6\.3](performance.html#c-types)), illustrates how to source C\+\+ code directly (Section [7\.6\.4](performance.html#sourcecpp)), explains vectors (Section [7\.6\.5](performance.html#vectors-and-loops)) and **Rcpp** sugar (Section [7\.6\.6](performance.html#sugar)) and finally provides guidance on further resources on the subject (Section [7\.6\.7](performance.html#rcpp-resources)).
### 7\.6\.1 A simple C\+\+ function
To write and compile C\+\+ functions, you need a working C\+\+ compiler (see the Prerequisites section at the beginning of this chapter). The code in this chapter was generated using version 1\.0\.6 of **Rcpp**.
**Rcpp** is well documented, as illustrated by the number of vignettes on the package’s [CRAN](https://cran.r-project.org/web/packages/Rcpp/) page. In addition to its popularity, many other packages depend on **Rcpp**, which can be seen by looking at the `Reverse Imports` section.
To check that you have everything needed for this chapter, run the following piece of code from the course R package:
```
efficient::test_rcpp()
```
A C\+\+ function is similar to an R function: you pass a set of inputs to the function, some code is run, a single object is returned. However there are some key differences.
1. In the C\+\+ function each statement must be terminated with `;`. In R, we use `;` only when we have multiple statements on the same line.
2. We must declare object types in the C\+\+ version. In particular we need to declare the types of the function arguments, return value and any intermediate objects we create.
3. The function must have an explicit `return` statement. Similar to R, there can be multiple returns, but the function will terminate when it hits its first `return` statement.
4. You do not use assignment when creating a function.
5. Object assignment must use the `=` sign. The `<-` operator isn’t valid.
6. One line comments can be created using `//`. Multi\-line comments are created using `/*...*/`
Suppose we want to create a function that adds two numbers together. In R this would be a simple one line affair:
```
add_r = function(x, y) x + y
```
In C\+\+ it is a bit more long winded:
```
/* Return type double
* Two arguments, also doubles
*/
double add_cpp(double x, double y) {
double value = x + y;
return value;
}
```
If we were writing a C\+\+ program we would also need another function called `main()`. We would then compile the code to obtain an executable. The executable is platform dependent. The beauty of using **Rcpp** is that it makes it very easy to call C\+\+ functions from R and the user doesn’t have to worry about the platform, or compilers or the R/C\+\+ interface.
### 7\.6\.2 The `cppFunction()` command
If we pass the C\+\+ function created in the previous section as a text string argument to `cppFunction()`:
```
library("Rcpp")
cppFunction('
double add_cpp(double x, double y) {
double value = x + y;
return value;
}
')
```
**Rcpp** will magically compile the C\+\+ code and construct a function that bridges the gap between R and C\+\+. After running the above code, we now have access to the `add_cpp()` function
```
add_cpp
#> function (x, y)
#> .Call(<pointer: 0x7feb3c8ffb70>, x, y)
```
and can call the `add_cpp()` function in the usual way
```
add_cpp(1, 2)
#> [1] 3
```
We don’t have to worry about compilers. Also, if you include this function in a package, users don’t have to worry about any of the **Rcpp** magic. It just works.
### 7\.6\.3 C\+\+ data types
The most basic type of variable is an integer, `int`. The C\+\+ standard only guarantees that an `int` can store values in the range \\(\-32767\\) to \\(\+32767\\); on most modern systems an `int` is 32 bits wide and stores values up to around \\(\\pm 2\\) billion. To store floating point numbers, there are single precision numbers, `float` and double precision numbers, `double`. A `double` takes twice as much memory as a `float` (in general, we should always work with double precision numbers unless we have a compelling reason to switch to floats). For **single** characters, we use the `char` data type.
There is also something called an `unsigned int`, which can only store non\-negative values, and a `long int`, which is guaranteed to cover at least the range \\(\-(2^{31}\-1)\\) to \\(2^{31}\-1\\).
Table 7\.1: Overview of key C\+\+ object types.
| Type | Description |
| --- | --- |
| char | A single character. |
| int | An integer. |
| float | A single precision floating point number. |
| double | A double\-precision floating point number. |
| void | A valueless quantity. |
A pointer object is a variable that points to an area of memory that has been given a name. Pointers are a very powerful, but primitive facility contained in the C\+\+ language. They are very useful since rather than passing large objects around, we pass a pointer to the memory location; rather than pass the house, we just give the address. We won’t use pointers in this chapter, but mention them for completeness. Table [7\.1](performance.html#tab:cpptypes) gives an overview.
### 7\.6\.4 The `sourceCpp()` function
`cppFunction()` is great for getting small examples up and running. But it is better practice to put your C\+\+ code in a separate file (with file extension `cpp`) and use the function call `sourceCpp("path/to/file.cpp")` to compile it. However we need to include a few headers at the top of the file. The first line we add gives us access to the **Rcpp** functions. The file `Rcpp.h` contains a list of function and class definitions supplied by **Rcpp**. This file will be located where **Rcpp** is installed. The `include` line
```
#include <Rcpp.h>
```
causes the compiler to replace that line with the contents of the named source file. This means that we can access the functions defined by **Rcpp**. To access the **Rcpp** functions we would have to type `Rcpp::function_1`. To avoid typing `Rcpp::`, we use the namespace facility
```
using namespace Rcpp;
```
Now we can just type `function_1()`; this is the same concept that R uses for managing function name collisions when loading packages. Above each function we want to export/use in R, we add the tag
```
// [[Rcpp::export]]
```
Similar to packages and the `library()` function in R, we access additional functions via `#include`. A standard header to include is `#include <math.h>` which contains standard mathematics functions.
This would give the complete file
```
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double add_cpp(double x, double y) {
double value = x + y;
return value;
}
```
There are two main benefits of putting your C\+\+ functions in separate files. First, we have the benefit of syntax highlighting (RStudio has great support for C\+\+ editing). Second, we avoid the syntax errors that easily creep in when switching between R and C\+\+ in the same file. To save space we’ll omit the headers for the remainder of the chapter.
### 7\.6\.5 Vectors and loops
Let’s now consider a slightly more complicated example. Here we want to write our own function that calculates the mean. This is just an illustrative example: R’s version is much better and more robust to scale differences in our data. For comparison, let’s create a corresponding R function \- this is the same function we used in chapter [3](programming.html#programming). The function takes a single vector `x` as input, and returns the mean value, `m`:
```
mean_r = function(x) {
m = 0
n = length(x)
for(i in 1:n)
m = m + x[i] / n
m
}
```
This is a very bad R function; we should just use the base function `mean()` for real world applications. However the purpose of `mean_r()` is to provide a comparison for the C\+\+ version, which we will write in a similar way.
In this example, we will let **Rcpp** smooth the interface between C\+\+ and R by using the `NumericVector` data type. This **Rcpp** data type mirrors the R vector object type. Other common classes are: `IntegerVector`, `CharacterVector`, and `LogicalVector`.
In the C\+\+ version of the mean function, we specify the argument types: `x` (`NumericVector`) and the return value (`double`). The C\+\+ version of the `mean()` function is a few lines longer; almost always, the corresponding C\+\+ version will be longer, possibly much longer. In general R optimises for reduced development time; C\+\+ optimises for fast execution time. The corresponding C\+\+ function for calculating the mean is:
```
double mean_cpp(NumericVector x) {
int i;
int n = x.size();
double mean = 0;
for(i = 0; i < n; i++) {
mean = mean + x[i] / n;
}
return mean;
}
```
To use the C\+\+ function we need to source the file (remember to put the necessary headers in).
```
sourceCpp("src/mean_cpp.cpp")
```
Although the C\+\+ version is similar, there are a few crucial differences.
1. We use the `.size()` method to find the length of `x`.
2. The `for` loop has a more complicated syntax.
```
for (variable initialisation; condition; variable update ) {
// Code to execute
}
```
In this example, the loop initialises `i = 0` and will continue running until `i < n` is false.
The statement `i++` increases the value of `i` by `1`; essentially it’s just a shortcut for `i = i + 1`.
3. Similar to `i++`, C\+\+ provides other operators to modify variables in place. For example we could rewrite part of the loop as
```
mean += x[i] / n;
```
The above code adds `x[i] / n` to the value of `mean`. Other similar operators are `-=`, `*=`, `/=` and `i--`.
4. A C\+\+ vector starts at `0`, **not** `1`.
To compare the C\+\+ and R functions, we’ll generate some normal random numbers for the comparison:
```
x = rnorm(1e4)
```
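In the benchmark below, `com_mean_r()` is a byte\-compiled version of `mean_r()`. As a minimal sketch (the name `com_mean_r` is just our label), it can be created with the base **compiler** package:
```
library("compiler")
com_mean_r = cmpfun(mean_r) # byte-compile mean_r() for a fairer comparison
```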
Then call the `microbenchmark()` function (results plotted in figure [7\.6](performance.html#fig:7-6)).
```
# com_mean_r is the compiled version of mean_r
z = microbenchmark(
mean(x), mean_r(x), com_mean_r(x), mean_cpp(x),
times = 1000
)
```
In this simple example, the Rcpp variant is around \\(100\\) times faster than the corresponding pure R version. This sort of speed\-up is not uncommon when switching to an Rcpp solution. Notice that the Rcpp version and standard base function `mean()` run at roughly the same speed; after all, the base R function is written in C. However, `mean()` uses a more sophisticated algorithm when calculating the mean to ensure accuracy.
Figure 7\.6: Comparison of mean functions.
#### Exercises
Consider the following piece of code:
```
double test1() {
double a = 1.0 / 81;
double b = 0;
for (int i = 0; i < 729; ++i)
b = b + a;
return b;
}
```
1. Save the function `test1()` in a separate file. Make sure it works.
2. Write a similar function in R and compare the speed of the C\+\+ and R versions.
3. Create a function called `test2()` where the `double` variables have been replaced by `float`. Do you still get the correct answer?
4. Change `b = b + a` to `b += a` to make your code more C\+\+ like.
5. (Hard) What’s the difference between `i++` and `++i`?
#### Matrices
Each vector type has a corresponding matrix equivalent: `NumericMatrix`, `IntegerMatrix`, `CharacterMatrix` and `LogicalMatrix`. We use these types in a similar way to how we used `NumericVector` objects. The main differences are:
* When we initialise, we need to specify the number of rows and columns
```
// 10 rows, 5 columns
NumericMatrix mat(10, 5);
// Length 10
NumericVector v(10);
```
* We subset using `()`, i.e. `mat(5, 4)`.
* The first element in a matrix is `mat(0, 0)` \- remember indexes start with `0` not `1`.
* To determine the number of rows and columns, we use the `.nrow()` and `.ncol()` methods.
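Putting these pieces together, here is a minimal sketch (the function name `mat_total()` is our own) that sums every element of a matrix:
```
library("Rcpp")
cppFunction('
double mat_total(NumericMatrix mat) {
  double total = 0;
  for (int i = 0; i < mat.nrow(); i++) {
    for (int j = 0; j < mat.ncol(); j++) {
      total += mat(i, j); // () subsetting, zero-based indexes
    }
  }
  return total;
}
')
mat_total(matrix(as.numeric(1:10), nrow = 2))
#> [1] 55
```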
### 7\.6\.6 C\+\+ with sugar on top
**Rcpp** sugar brings a higher\-level of abstraction to C\+\+ code written using the **Rcpp** API. What this means in practice is that we can write C\+\+ code in the style of R. For example, suppose we wanted to find the squared difference of two vectors; a squared residual in regression. In R we would use
```
sq_diff_r = function(x, y) (x - y)^2
```
Rewriting the function in standard C\+\+ would give
```
NumericVector res_c(NumericVector x, NumericVector y) {
int i;
int n = x.size();
NumericVector residuals(n);
for(i = 0; i < n; i++) {
residuals[i] = pow(x[i] - y[i], 2);
}
return residuals;
}
```
With **Rcpp** sugar we can rewrite this code to be more succinct and have more of an R feel:
```
NumericVector res_sugar(NumericVector x, NumericVector y) {
return pow(x - y, 2);
}
```
In the above C\+\+ code, the `pow()` function and `x-y` are valid due to **Rcpp** sugar. Other functions that are available include the d/q/p/r statistical functions, such as `rnorm()` and `pnorm()`. The sweetened versions aren’t usually faster than the C\+\+ version, but typically there’s very little difference between the two. However with the sugared variety, the code is shorter and is constantly being improved.
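To experiment with the sugared version without creating a separate file, we can again pass it to `cppFunction()` (a minimal sketch):
```
library("Rcpp")
cppFunction('
NumericVector res_sugar(NumericVector x, NumericVector y) {
  return pow(x - y, 2); // sugar: vectorised subtraction and pow()
}
')
res_sugar(c(1, 2, 3), c(2, 2, 2))
#> [1] 1 0 1
```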
#### Exercises
1. Construct an R version (using a `for` loop rather than the vectorised solution), `res_r()` and compare the three function variants.
2. In the above example, `res_sugar()` is faster than `res_c()`. Do you know why?
### 7\.6\.7 Rcpp resources
The aim of this section was to provide an introduction to **Rcpp**. One of the selling features of **Rcpp** is that there is a great deal of documentation available.
* The **Rcpp** [website](http://www.rcpp.org/);
* The original Journal of Statistical Software paper describing **Rcpp** and the follow\-up book (Eddelbuettel and François [2011](#ref-Eddelbuettel2011); Eddelbuettel [2013](#ref-Eddelbuettel2013));
* H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)) provides a very readable chapter on **Rcpp** that goes into a bit more detail than this section;
* The **Rcpp** section on the [StackOverflow](https://stackoverflow.com/questions/tagged/rcpp) website. Questions are often answered by the **Rcpp** authors.
### Prerequisites
In this chapter, some of the examples require a working C\+\+ compiler. The installation method depends on your operating system:
* Linux: A compiler should already be installed. Otherwise, install `r-base` and a compiler will be installed as a dependency.
* Macs: Install `Xcode`.
* Windows: Install [Rtools](http://cran.r-project.org/bin/windows/). Make sure you select the version that corresponds to your version of R.
The packages used in this chapter are
```
library("microbenchmark")
library("ggplot2movies")
library("profvis")
library("Rcpp")
```
7\.1 Top 5 tips for efficient performance
-----------------------------------------
1. Before you start to optimise your code, ensure you know where the bottleneck lies; use
a code profiler.
2. If the data in your data frame is all of the same type, consider converting it
to a matrix for a speed boost.
3. Use specialised row and column functions whenever possible.
4. The **parallel** package is ideal for Monte\-Carlo simulations.
5. For optimal performance, consider re\-writing key parts of your code in C\+\+.
7\.2 Code profiling
-------------------
Often you will have working code, but simply want it to run faster. In some cases it’s obvious where the bottleneck lies. Sometimes you will guess, relying on intuition. A drawback of this is that you could be wrong, and waste time optimising the wrong piece of code. To make slow code run faster, it is first important to determine where the slow code lives. This is the purpose of code profiling.
The `Rprof()` function is a built\-in tool for profiling the execution of R expressions. At regular time intervals, the profiler stops the R interpreter, records the current function call stack, and saves the information to a file. The results from `Rprof()` are stochastic. Each time we run a function in R, the conditions have changed. Hence, each time you profile your code, the result will be slightly different.
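For reference, a bare\-bones `Rprof()` session looks something like the following sketch (the file name `profile.out` is arbitrary):
```
Rprof("profile.out") # start the sampling profiler
x = matrix(rnorm(1e7), ncol = 100)
r = apply(x, 2, mean) # the code we want to profile
Rprof(NULL) # stop profiling
summaryRprof("profile.out")$by.self # time spent in each function
```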
Unfortunately `Rprof()` is not user friendly. For this reason we recommend using the **profvis** package for profiling your R code.
**profvis** provides an interactive graphical interface for visualising code profiling data from `Rprof()`.
### 7\.2\.1 Getting started with **profvis**
After installing **profvis**, e.g. with `install.packages("profvis")`, it can be used to profile R code. As a simple example, we will use the `movies` data set, which contains information on around 60,000 movies. First, we’ll select movies that are classed as comedies, then plot the year the movie was made against the movie rating, and draw a local polynomial regression line to pick out the trend. The main function from the **profvis** package is `profvis()`, which profiles the code and creates an interactive HTML page of the results. The first argument of `profvis()` is the R expression of interest. This can be many lines long:
```
library("profvis")
profvis({
data(movies, package = "ggplot2movies") # Load data
movies = movies[movies$Comedy == 1,]
plot(movies$year, movies$rating)
model = loess(rating ~ year, data = movies) # loess regression line
j = order(movies$year)
lines(movies$year[j], model$fitted[j]) # Add line to the plot
})
```
The above code provides an interactive HTML page (figure [7\.1](performance.html#fig:7-1)). On the left side is the code and on the right is a flame graph (horizontal direction is time in milliseconds and the vertical direction is the call stack).
Figure 7\.1: Output from profvis
The left hand panel gives the amount of time spent on each line of code. It shows that the majority of time is spent calculating the `loess()` smoothing line. The bottom line of the right panel also highlights that most of the execution time is spent on the `loess()` function. Travelling up the call stack, we see that `loess()` calls `simpleLoess()`, which in turn calls the `.C()` function.
The conclusion from this graph is that if optimisation were required, it would make sense to focus on the `loess()` and possibly the `order()` function calls.
### 7\.2\.2 Example: Monopoly Simulation
Monopoly is a board game that originated in the United States over \\(100\\) years ago. The objective of the game is to go round the board and purchase squares (properties). If other players land on your properties they have to pay a tax. The player with the most money at the end of the game wins. To make things more interesting, there are Chance and Community Chest squares. If you land on one of these squares, you draw a card, which may send you to other parts of the board. The other special square is Jail. One way of entering Jail is to roll three successive doubles.
The **efficient** package contains a Monte\-Carlo function for simulating a simplified game of monopoly. By keeping track of where a person lands when going round the board, we obtain an estimate of the probability of landing on a certain square. The entire code is around 100 lines long. In order for **profvis** to fully profile the code, the **efficient** package needs to be installed from source
```
devtools::install_github("csgillespie/efficient",
args = "--with-keep.source")
```
The function can then be profiled via the following code, which results in figure [7\.2](performance.html#fig:7-2).
```
library("efficient")
profvis(simulate_monopoly(10000))
```
Figure 7\.2: Code profiling for simulating the game of Monopoly.
The output from **profvis** shows that the vast majority of time (around 65%) is spent in the `move_square()` function.
In Monopoly moving around the board is complicated by the fact that rolling a double (a pair of 1’s, 2’s, …, 6’s) is special:
* Roll two dice (`total1`): `total_score = total1`;
* If you get a double, roll again (`total2`) and `total_score = total1 + total2`;
* If you get a double, roll again (`total3`) and `total_score = total1 + total2 + total3`;
* If roll three is a double, Go To Jail, otherwise move `total_score`.
The function `move_square()` captures this logic. Now we know where the code is slow, how can we speed up the computation? In the next section, we will discuss standard techniques that can be used. We will then revisit this example.
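As a rough sketch of this logic (not the actual implementation in the **efficient** package), the dice\-rolling could be written as:
```
move_square_sketch = function() {
  d1 = sample(1:6, 2, replace = TRUE) # roll two dice
  total_score = sum(d1)
  if (d1[1] == d1[2]) { # a double: roll again
    d2 = sample(1:6, 2, replace = TRUE)
    total_score = total_score + sum(d2)
    if (d2[1] == d2[2]) { # another double: roll again
      d3 = sample(1:6, 2, replace = TRUE)
      if (d3[1] == d3[2]) return("Jail") # three doubles: Go To Jail
      total_score = total_score + sum(d3)
    }
  }
  total_score
}
```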
7\.3 Efficient base R
---------------------
In R there is often more than one way to solve a problem. In this section we highlight standard tricks or alternative methods that may improve performance.
### The `if()` vs `ifelse()` functions
`ifelse()` is a vectorised version of the standard control\-flow function `if(test) if_yes else if_no` that works as follows:
```
ifelse(test, if_yes, if_no)
```
In the above imaginary example, the return value is filled with elements from the `if_yes` and `if_no` arguments that are determined by whether the element of `test` is `TRUE` or `FALSE`. For example, suppose we have a vector of exam marks. `ifelse()` could be used to classify them as pass or fail:
```
marks = c(25, 55, 75)
ifelse(marks >= 40, "pass", "fail")
#> [1] "fail" "pass" "pass"
```
If the length of `test` condition is equal to \\(1\\), i.e. `length(test) == 1`, then the standard conditional statement
```
mark = 25
if(mark >= 40) {
"pass"
} else {
"fail"
}
```
is around five to ten times faster than `ifelse(mark >= 40, "pass", "fail")`.
An additional quirk of `ifelse()` is that although it is more *programmer efficient*, as it is more concise and understandable than multi\-line alternatives, it is often **less** *computationally efficient* than a more verbose alternative. This is illustrated with the following benchmark, in which the second option runs around 20 times faster, despite the results being identical:
```
marks = runif(n = 10e6, min = 30, max = 99)
system.time({
result1 = ifelse(marks >= 40, "pass", "fail")
})
#> user system elapsed
#> 2.459 0.177 2.635
system.time({
result2 = rep("fail", length(marks))
result2[marks >= 40] = "pass"
})
#> user system elapsed
#> 0.138 0.072 0.209
identical(result1, result2)
#> [1] TRUE
```
There is talk on the [R\-devel email](http://r.789695.n4.nabble.com/ifelse-woes-can-we-agree-on-a-ifelse2-td4723584.html) list of speeding up `ifelse()` in base R. A simple solution is to use the `if_else()` function from **dplyr**, although, as discussed in the [same thread](http://r.789695.n4.nabble.com/ifelse-woes-can-we-agree-on-a-ifelse2-td4723584.html), it cannot replace `ifelse()` in all situations. For our exam result test example, `if_else()` works fine and is much faster than base R’s implementation (although it is still around 3 times slower than the hard\-coded solution):
```
system.time({
result3 = dplyr::if_else(marks >= 40, "pass", "fail")
})
#> user system elapsed
#> 0.453 0.180 0.633
identical(result1, result3)
#> [1] TRUE
```
### Sorting and ordering
Sorting a vector is relatively quick; sorting a vector of length \\(10^7\\) takes around \\(0\.01\\) seconds. If you only sort a vector once at the top of a script, then don’t worry too much about this. However if you are sorting inside a loop, or in a shiny application, then it can be worthwhile thinking about how to optimise this operation.
There are currently three sorting algorithms, `c("shell", "quick", "radix")`, that can be specified in the `sort()` function, with `radix` being a new addition in R 3\.3\. The `radix` method (the non\-default option) is typically the most computationally efficient; it is around 20% faster when sorting a large vector of doubles.
Another useful trick is to partially order the results. For example, if you only want to display the top ten results, then use the `partial` argument, i.e. `sort(x, partial = 1:10)`. For very large vectors, this can give a three fold speed increase.
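A quick sketch of the difference:
```
x = runif(1e7)
microbenchmark(times = 10,
  full = sort(x),
  partial = sort(x, partial = 1:10)
)
```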
### Reversing elements
The `rev()` function provides a reversed version of its argument. If you wish to sort in decreasing order, `sort(x, decreasing = TRUE)` is marginally (around 10%) faster than `rev(sort(x))`.
### Which indices are `TRUE`
To determine which indices of a vector or array are `TRUE`, we would typically use the `which()` function. If we want to find the index of just the minimum or maximum value, i.e. `which(x == min(x))`, then using the efficient `which.min()`/`which.max()` variants can be orders of magnitude faster (see figure [7\.3](performance.html#fig:7-3))
Figure 7\.3: Comparison of `which.min()` with `which()`.
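A minimal comparison:
```
x = runif(1e6)
microbenchmark(
  which.min(x),
  which(x == min(x))
)
```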
### Converting factors to numerics
A factor is just a vector of integers with associated levels. Occasionally we want to convert a factor into its numerical equivalent. The most efficient way of doing this (especially for long factors) is:
```
as.numeric(levels(f))[f]
```
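Note that the seemingly obvious `as.numeric(f)` returns the underlying integer codes, not the level values:
```
f = factor(c("10", "20", "30"))
as.numeric(f) # the integer codes - not what we want
#> [1] 1 2 3
as.numeric(levels(f))[f] # the level values
#> [1] 10 20 30
```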
### Logical AND and OR
The logical AND (`&`) and OR (`|`) operators are vectorised functions and are typically used during multi\-criteria subsetting operations. The code below, for example, returns `TRUE` for all elements of `x` less than \\(0\.4\\) or greater than \\(0\.6\\).
```
x = c(0.1, 0.5, 0.9) # example values, chosen to match the output below
x < 0.4 | x > 0.6
#> [1] TRUE FALSE TRUE
```
When R executes the above comparison, it will **always** calculate `x > 0.6` regardless of the value of `x < 0.4`. In contrast, the non\-vectorised version, `&&`, only executes the second component if needed. This is efficient and leads to neater code, e.g.
```
# We only calculate the mean if data doesn't contain NAs
if(all(!is.na(x)) && mean(x) > 0) {
# Do something
}
```
compared to
```
if(all(!is.na(x))) {
if(mean(x) > 0) {
# do something
}
}
```
However care must be taken not to use `&&` or `||` on vectors since they only evaluate the first element of the vector, giving the incorrect answer. This is illustrated below:
```
x < 0.4 || x > 0.6
#> [1] TRUE
```
### Row and column operations
In data analysis we often want to apply a function to each column or row of a data set. For example, we might want to calculate the column or row sums. The `apply()` function makes this type of operation straightforward.
```
# Second argument: 1 -> rows. 2 -> columns
apply(data_set, 1, function_name)
```
There are optimised functions for calculating row and column sums/means, `rowSums()`, `colSums()`, `rowMeans()` and `colMeans()`, that should be used whenever possible. The package **matrixStats** contains many optimised row/col functions.
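A short sketch comparing the two approaches:
```
m = matrix(runif(1e6), ncol = 100)
microbenchmark(
  apply(m, 1, sum),
  rowSums(m)
)
```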
### `is.na()` and `anyNA()`
To test whether a vector (or other object) contains missing values we use the `is.na()` function. Often we are interested in whether a vector contains *any* missing values. In this case, `anyNA(x)` is more efficient than `any(is.na(x))`.
### Matrices
A matrix is similar to a data frame: it is a two dimensional object and sub\-setting and other functions work in the same way. However all matrix elements must have the same type. Matrices tend to be used during statistical calculations. The `lm()` function, for example, internally converts the data to a matrix before calculating the results; any characters are thus recoded as numeric dummy variables.
Matrices are generally faster than data frames. For example, the datasets `ex_mat` and `ex_df` from the **efficient** package each have \\(1000\\) rows and \\(100\\) columns and contain the same random numbers. However selecting rows from the data frame is around \\(150\\) times slower than a matrix, as illustrated below:
```
data(ex_mat, ex_df, package="efficient")
microbenchmark(times=100, unit="ms", ex_mat[1,], ex_df[1,])
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> ex_mat[1, ] 0.0027 0.0034 0.0503 0.00424 0.00605 4.54 100
#> ex_df[1, ] 0.4855 0.4974 0.5549 0.50535 0.51790 5.25 100
```
Use the `data.matrix()` function to efficiently convert a data frame into a matrix.
### The integer data type
Numbers in R are usually stored in [double\-precision floating\-point format](https://goo.gl/ZA5R8a), which is described in detail in Braun and Murdoch ([2007](#ref-Braun2007)) and Goldberg ([1991](#ref-Goldberg1991)). The term ‘double’ refers to the fact that on \\(32\\) bit systems (for which the format was developed) two memory locations are used to store a single number. Each double\-precision number is accurate to around \\(17\\) decimal places.
When comparing floating point numbers we should be particularly careful, since `y = sqrt(2) * sqrt(2)` is not exactly \\(2\\), instead it’s **almost** \\(2\\). Using `sprintf("%.17f", y)` will give you the true value of `y` (to 17 decimal places).
Integers are another numeric data type. Integers primarily exist to be passed to C or Fortran code. You do not need to create integers for most applications. However, they are occasionally used to optimise sub\-setting operations. When we subset a data frame or matrix, we are interacting with C code so we might be tempted to use integers with the purpose of speeding up our code. For example, if we look at the arguments for the `head` function
```
args(head.matrix)
#> function (x, n = 6L, ...)
#> NULL
```
we see that the default argument for `n` is `6L` rather than simply `6` (the `L` is short for Literal and is used to create an integer). As an aside, the `:` operator automatically creates a vector of integers. Using the integer literal gives a tiny speed boost (around 0\.1 microseconds!)
```
x = runif(10)
microbenchmark(head(x, 6.0), head(x, 6L), times=1000000)
# Unit: microseconds
# expr min lq mean median uq max neval cld
# head(x, 6) 7.067 8.309 9.058 8.686 9.098 105266 1e+06 a
# head(x, 6L) 6.947 8.219 8.933 8.594 9.007 106307 1e+06 a
```
Since this function is ubiquitous, this low level optimisation is useful. In general, if you are worried about shaving microseconds off your R code run time, you should probably consider switching to another language.
Integers are more space efficient. The code below compares the size of an integer vector to a standard numeric vector:
```
pryr::object_size(1:10000)
#> Registered S3 method overwritten by 'pryr':
#> method from
#> print.bytes Rcpp
#> 40 kB
pryr::object_size(y = seq(1, 10000, by=1.0))
#> 80 kB
```
The results show that the integer version is roughly half the size. However, most mathematical operations will convert the integer vector into a standard numerical vector, as illustrated below:
```
is.integer(1L + 1)
#> [1] FALSE
```
Further storage savings can be obtained using the **bit** package.
### Sparse matrices
Another data structure that can be stored efficiently is a sparse matrix. This is simply a matrix where most of the elements are zero. Conversely, if most elements are non\-zero, the matrix is considered dense. The proportion of zero elements is called the sparsity. Large sparse matrices often crop up when performing numerical calculations. Typically, our data isn’t sparse but the resulting data structures we create may be sparse. There are a number of techniques/methods used to store sparse matrices. Methods for creating sparse matrices can be found in the **Matrix** package[19](#fn19).
As an example, suppose we have a large matrix where the diagonal elements are non\-zero:
```
library("Matrix")
N = 10000
sp = sparseMatrix(1:N, 1:N, x = 1)
m = diag(1, N, N)
```
Both objects contain the same information, but the data is stored differently; since we have the same value multiple times in the matrix, we only need to store the value once and link it to multiple matrix locations. The matrix object stores each individual element, while the sparse matrix object only stores the location of the non\-zero elements. This is much more memory efficient, as illustrated below:
```
pryr::object_size(sp)
#> 162 kB
pryr::object_size(m)
#> 800 MB
```
#### Exercises
1. Create a vector `x`. Benchmark `any(is.na(x))` against `anyNA()`. Do the results vary with the size of the vector?
2. Examine the following function definitions to give you an idea of how integers are used.
    * `tail.matrix()`
    * `lm()`
3. Construct a matrix of integers and a matrix of numerics. Using `pryr::object_size()`, compare the objects.
4. How does the function `seq.int()`, which was used in the `tail.matrix()` function, differ from the standard `seq()` function?
A related memory saving idea is to replace `logical` vectors with vectors from the **bit** package which take up just over a 16th of the space (but you can’t use `NA`s).
7\.4 Example: Optimising the `move_square()` function
-----------------------------------------------------
Figure [7\.2](performance.html#fig:7-2) shows that our main bottleneck in simulating the game of Monopoly is the `move_square()` function. Within this function, we spend around 50% of the time creating a data frame, 20% of the time calculating row sums, and the remainder on comparison operations. This piece of code can be optimised fairly easily (while still retaining the same overall structure) by incorporating the following improvements[20](#fn20):
* Instead of using `seq(1, 6)` to generate the 6 possible values of rolling a dice, use `1:6`. Also, instead of a data frame, use a matrix and perform a single call to the `sample()` function
```
matrix(sample(1:6, 6, replace = TRUE), ncol = 2)
```
Overall, this revised line is around 25 times faster; most of the speed boost came from switching to a matrix.
* Using `rowSums()` instead of `apply()`. The `apply()` function call is already faster since we’ve switched from a data frame to a matrix (around 3 times). Using `rowSums()` with a matrix, gives a 10 fold speed boost.
* Use `&&` in the `if` condition; this is around twice as fast compared to `&`.
Impressively, the refactored code runs around 20 times faster than the original code (compare figures [7\.2](performance.html#fig:7-2) and [7\.4](performance.html#fig:7-4)), with the main speed boost coming from using a matrix instead of a data frame.
Figure 7\.4: Code profiling of the optimised code.
#### Exercise
The `move_square()` function above uses a vectorised solution. Whenever we move, we always roll six dice, then examine the outcome and determine the number of doubles. However, this is potentially wasteful, since the probability of getting one double is \\(1/6\\) and two doubles is \\(1/36\\). Another method is to only roll additional dice if and when they are needed. Implement and time this solution.
7\.5 Parallel computing
-----------------------
This section provides a brief foray into the world of parallel computing. It only looks at methods for parallel computing on ‘shared memory systems’. This simply means computers in which multiple central processor unit (CPU) cores can access the same block of memory, i.e. most laptops and desktops sold worldwide. This section provides a flavour of what is possible; for a fuller account of parallel processing in R, see McCallum and Weston ([2011](#ref-mccallum2011)).
The foundational package for parallel computing in R is **parallel**. In recent R versions (since R 2\.14\.0\) this comes pre\-installed with base R. The **parallel** package must still be loaded before use, however, and you must determine the number of available cores manually, as illustrated below:
```
library("parallel")
no_of_cores = detectCores()
```
The value returned by `detectCores()` turns out to be operating system and chip maker dependent \- see `help("detectCores")` for full details. For most standard machines, `detectCores()` returns the number of simultaneous threads.
### 7\.5\.1 Parallel versions of apply functions
The most commonly used parallel applications are parallelised replacements of `lapply()`, `sapply()` and `apply()`. The parallel implementations and their arguments are shown below.
```
parLapply(cl = NULL, X, fun, ...)
parApply(cl = NULL, X, MARGIN, FUN, ...)
parSapply(cl = NULL, X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)
```
The key point is that there is very little difference in arguments between `parLapply()` and `lapply()`, so the barrier to using this form of parallel computing is low, assuming you are proficient with the apply family of functions. Each function above has an argument `cl`, which is created by a `makeCluster()` call. This function, amongst other things, specifies the number of processors to use.
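As a minimal illustration of the pattern (assuming a machine with at least four cores):
```
library("parallel")
cl = makeCluster(4) # create a four-node cluster
parLapply(cl, 1:4, function(i) i^2) # returns list(1, 4, 9, 16)
stopCluster(cl) # free the resources
```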
### 7\.5\.2 Example: Snakes and Ladders
Parallel computing is ideal for Monte\-Carlo simulations. Each core independently simulates a realisation from the model. At the end, we gather up the results. In the **efficient** package, there is a function that simulates a single game of Snakes and Ladders \- `snakes_ladders()`[21](#fn21)
The following code illustrates how to simulate `N` games using `sapply()`:
```
N = 10^4
sapply(1:N, snakes_ladders)
```
Rewriting this code to make use of the **parallel** package is straightforward.
Begin by making a cluster object:
```
library("parallel")
cl = makeCluster(4)
```
Then simply swap `sapply()` for `parSapply()`:
```
parSapply(cl, 1:N, snakes_ladders)
```
Not stopping the clusters can lead to memory leaks,[22](#fn22) so it is important to stop the created clusters as illustrated below:
```
stopCluster(cl)
```
On a multi\-processor computer with four (or more) cores, if we achieved perfect parallelisation this could lead to a four\-fold speed\-up (we set `makeCluster(4)`). However, it is rare that we would achieve this optimal speed\-up since there is always communication between threads.
### 7\.5\.3 Exit functions with care
Always call `stopCluster()` to free resources when you finish with the cluster object. However, if the parallel code is within a function, it’s possible that the function ends as the result of an error, in which case `stopCluster()` would never be called.
The `on.exit()` function handles this problem with the minimum of fuss; regardless of how the function ends, `on.exit()` is always called. In the context of parallel programming we will have something similar to:
```
simulate = function(cores) {
cl = makeCluster(cores)
on.exit(stopCluster(cl))
# Do something
}
```
Another common use of `on.exit()` is with the `par()` function. If you use `par()` to change graphical parameters within a function, `on.exit()` ensures these parameters are reset to their previous value when the function ends.
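A sketch of that idiom (the function name `plot_red()` is our own):
```
plot_red = function(x, y) {
  old_par = par(col = "red") # par() returns the previous settings
  on.exit(par(old_par)) # restore them however the function exits
  plot(x, y)
}
```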
### 7\.5\.4 Parallel code under Linux \& OS X
If you are using Linux or OS X, then another way of running code in parallel is to use the `mclapply()` and `mcmapply()` functions:
```
# This will run on Windows, but will only use 1 core
mclapply(1:N, snakes_ladders)
```
These functions use forking, that is, creating a new copy of a process running on the CPU. However Windows does not support this low\-level functionality in the way that Linux does. The main advantage of `mclapply()` is that you don’t have to start and stop cluster objects. The big disadvantage is that on Windows machines, you are limited to a single core.
7\.6 Rcpp
---------
Sometimes R is just slow. You’ve tried every trick you know, and your code is still crawling along. At this point you could consider rewriting key parts of your code in another, faster language. R has interfaces to other languages via packages, such as **Rcpp**, **rJava**, **rPython** and recently **V8**. These provide R interfaces to C\+\+, Java, Python and JavaScript respectively. **Rcpp** is the most popular of these (figure [7\.5](performance.html#fig:7-5)).
Figure 7\.5: Downloads per day from the RStudio CRAN mirror of packages that provide R interfaces to other languages.
C\+\+ is a modern, fast and very well\-supported language with libraries for performing many kinds of computational tasks. **Rcpp** makes incorporating C\+\+ code into your R workflow easy.
Although C/Fortran routines can be called via the `.Call()` function, this is not recommended: using `.Call()` can be a painful experience. **Rcpp** provides a friendly API (Application Program Interface) that lets you write high\-performance code, bypassing R’s tricky C API. Typical bottlenecks that C\+\+ addresses are loops and recursive functions.
C\+\+ is a powerful programming language about which entire books have been written. This section therefore is focussed on getting started and providing a flavour of what is possible. It is structured as follows. After ensuring that your computer is set\-up for **Rcpp**, we proceed by creating a simple C\+\+ function, to show how C\+\+ compares with R (Section [7\.6\.1](performance.html#simple-c)). This is converted into an R function using `cppFunction()` in Section [7\.6\.2](performance.html#cppfunction). The remainder of the chapter explains C\+\+ data types (Section [7\.6\.3](performance.html#c-types)), illustrates how to source C\+\+ code directly (Section [7\.6\.4](performance.html#sourcecpp)), explains vectors (Section [7\.6\.5](performance.html#vectors-and-loops)) and **Rcpp** sugar (Section [7\.6\.6](performance.html#sugar)) and finally provides guidance on further resources on the subject (Section [7\.6\.7](performance.html#rcpp-resources)).
### 7\.6\.1 A simple C\+\+ function
To write and compile C\+\+ functions, you need a working C\+\+ compiler (see the Prerequisite section at the beginning of this chapter). The code in this chapter was generated using version 1\.0\.6 of **Rcpp**.
**Rcpp** is well documented, as illustrated by the number of vignettes on the package’s [CRAN](https://cran.r-project.org/web/packages/Rcpp/) page. In addition to its popularity, many other packages depend on **Rcpp**, which can be seen by looking at the `Reverse Imports` section.
To check that you have everything needed for this chapter, run the following piece of code from the course R package:
```
efficient::test_rcpp()
```
A C\+\+ function is similar to an R function: you pass a set of inputs to the function, some code is run, a single object is returned. However there are some key differences.
1. In the C\+\+ function each statement must be terminated with `;`. In R, we use `;` only when we have multiple statements on the same line.
2. We must declare object types in the C\+\+ version. In particular we need to declare the types of the function arguments, return value and any intermediate objects we create.
3. The function must have an explicit `return` statement. Similar to R, there can be multiple returns, but the function will terminate when it hits its first `return` statement.
4. You do not use assignment when creating a function.
5. Object assignment must use the `=` sign. The `<-` operator isn’t valid.
6. One line comments can be created using `//`. Multi\-line comments are created using `/*...*/`
Suppose we want to create a function that adds two numbers together. In R this would be a simple one line affair:
```
add_r = function(x, y) x + y
```
In C\+\+ it is a bit more long winded:
```
/* Return type double
* Two arguments, also doubles
*/
double add_cpp(double x, double y) {
double value = x + y;
return value;
}
```
If we were writing a C\+\+ program we would also need another function called `main()`. We would then compile the code to obtain an executable. The executable is platform dependent. The beauty of using **Rcpp** is that it makes it very easy to call C\+\+ functions from R and the user doesn’t have to worry about the platform, or compilers or the R/C\+\+ interface.
### 7\.6\.2 The `cppFunction()` command
If we pass the C\+\+ function created in the previous section as a text string argument to `cppFunction()`:
```
library("Rcpp")
cppFunction('
double add_cpp(double x, double y) {
double value = x + y;
return value;
}
')
```
**Rcpp** will magically compile the C\+\+ code and construct a function that bridges the gap between R and C\+\+. After running the above code, we now have access to the `add_cpp()` function
```
add_cpp
#> function (x, y)
#> .Call(<pointer: 0x7feb3c8ffb70>, x, y)
```
and can call the `add_cpp()` function in the usual way
```
add_cpp(1, 2)
#> [1] 3
```
We don’t have to worry about compilers. Also, if you include this function in a package, users don’t have to worry about any of the **Rcpp** magic. It just works.
### 7\.6\.3 C\+\+ data types
The most basic type of variable is an integer, `int`. The C\+\+ standard only guarantees that an `int` can store values in the range \\(\-32768\\) to \\(\+32767\\); on most modern platforms an `int` is \\(32\\) bits wide and covers roughly \\(\\pm 2\\) billion. To store floating point numbers, there are single precision numbers, `float`, and double precision numbers, `double`. A `double` takes twice as much memory as a `float` (in general, we should always work with double precision numbers unless we have a compelling reason to switch to floats). For **single** characters, we use the `char` data type.
There is also `unsigned int`, which stores only non\-negative values (\\(0\\) to \\(65,535\\) for a \\(16\\)\-bit integer), and `long int`, a signed type guaranteed to cover at least \\(\-2^{31}\\) to \\(2^{31}\-1\\).
Table 7\.1: Overview of key C\+\+ object types.
| Type | Description |
| --- | --- |
| char | A single character. |
| int | An integer. |
| float | A single precision floating point number. |
| double | A double\-precision floating point number. |
| void | A valueless quantity. |
A pointer object is a variable that points to an area of memory that has been given a name. Pointers are a very powerful, but primitive facility contained in the C\+\+ language. They are very useful since rather than passing large objects around, we pass a pointer to the memory location; rather than pass the house, we just give the address. We won’t use pointers in this chapter, but mention them for completeness. Table [7\.1](performance.html#tab:cpptypes) gives an overview.
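To make the house/address analogy concrete, here is a minimal standalone C\+\+ sketch (plain C\+\+, not **Rcpp**):
```
#include <iostream>

int main() {
  double house = 42.0;
  double* address = &house; // the pointer stores the memory address
  std::cout << *address << "\n"; // dereferencing gives the value: 42
  return 0;
}
```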
### 7\.6\.4 The `sourceCpp()` function
The `cppFunction()` call is great for getting small examples up and running. But it is better practice to put your C\+\+ code in a separate file (with file extension `cpp`) and use `sourceCpp("path/to/file.cpp")` to compile it. However, we need to include a few headers at the top of the file. The first line we add gives us access to the **Rcpp** functions. The file `Rcpp.h` contains a list of function and class definitions supplied by **Rcpp**. This file will be located where **Rcpp** is installed. The `include` line
```
#include <Rcpp.h>
```
causes the compiler to replace that line with the contents of the named source file. This means that we can access the functions defined by **Rcpp**. To access the **Rcpp** functions we would have to type `Rcpp::function_1`. To avoid typing `Rcpp::`, we use the namespace facility
```
using namespace Rcpp;
```
Now we can just type `function_1()`; this is the same concept that R uses for managing function name collisions when loading packages. Above each function we want to export/use in R, we add the tag
```
// [[Rcpp::export]]
```
Similar to packages and the `library()` function in R, we access additional functions via `#include`. A standard header to include is `#include <math.h>` which contains standard mathematics functions.
This would give the complete file
```
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double add_cpp(double x, double y) {
double value = x + y;
return value;
}
```
There are two main benefits to putting your C\+\+ functions in separate files. First, we have the benefit of syntax highlighting (RStudio has great support for C\+\+ editing). Second, we avoid the syntax errors that easily creep in when switching between R and C\+\+ in the same file. To save space we’ll omit the headers for the remainder of the chapter.
### 7\.6\.5 Vectors and loops
Let’s now consider a slightly more complicated example. Here we want to write our own function that calculates the mean. This is just an illustrative example: R’s version is much better and more robust to scale differences in our data. For comparison, let’s create a corresponding R function \- this is the same function we used in chapter [3](programming.html#programming). The function takes a single vector `x` as input, and returns the mean value, `m`:
```
mean_r = function(x) {
m = 0
n = length(x)
for(i in 1:n)
m = m + x[i] / n
m
}
```
This is a very bad R function; we should just use the base function `mean()` for real world applications. However the purpose of `mean_r()` is to provide a comparison for the C\+\+ version, which we will write in a similar way.
In this example, we will let **Rcpp** smooth the interface between C\+\+ and R by using the `NumericVector` data type. This **Rcpp** data type mirrors the R vector object type. Other common classes are: `IntegerVector`, `CharacterVector`, and `LogicalVector`.
In the C\+\+ version of the mean function, we specify the argument types: `x` (`NumericVector`) and the return value (`double`). The C\+\+ version of the `mean()` function is a few lines longer; almost always, the corresponding C\+\+ version will be longer, possibly much longer. In general, R optimises for reduced development time; C\+\+ optimises for fast execution time. The corresponding C\+\+ function for calculating the mean is:
```
double mean_cpp(NumericVector x) {
int i;
int n = x.size();
double mean = 0;
for(i = 0; i < n; i++) {
mean = mean + x[i] / n;
}
return mean;
}
```
To use the C\+\+ function we need to source the file (remember to put the necessary headers in).
```
sourceCpp("src/mean_cpp.cpp")
```
Although the C\+\+ version is similar, there are a few crucial differences.
1. We use the `.size()` method to find the length of `x`.
2. The `for` loop has a more complicated syntax.
```
for (variable initialisation; condition; variable update ) {
// Code to execute
}
```
In this example, the loop initialises `i = 0` and will continue running until `i < n` is false.
The statement `i++` increases the value of `i` by `1`; essentially it’s just a shortcut for `i = i + 1`.
3. Similar to `i++`, C\+\+ provides other operators to modify variables in place. For example we could rewrite part of the loop as
```
mean += x[i] / n;
```
The above code adds `x[i] / n` to the value of `mean`. Other similar operators are `-=`, `*=`, `/=` and `i--`.
4. A C\+\+ vector starts at `0`, **not** `1` (see the sketch below).
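The following sketch illustrates the zero\-based indexing (the function name is our own):
```
// [[Rcpp::export]]
NumericVector first_and_last(NumericVector x) {
  int n = x.size();
  // valid indices run from 0 to n - 1, not 1 to n
  return NumericVector::create(x[0], x[n - 1]);
}
```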
To compare the C\+\+ and R functions, we’ll generate some normal random numbers for the comparison:
```
x = rnorm(1e4)
```
Then call the `microbenchmark()` function (results plotted in figure [7\.6](performance.html#fig:7-6)).
```
# com_mean_r is the compiled version of mean_r
z = microbenchmark(
mean(x), mean_r(x), com_mean_r(x), mean_cpp(x),
times = 1000
)
```
In this simple example, the Rcpp variant is around \\(100\\) times faster than the corresponding pure R version. This sort of speed\-up is not uncommon when switching to an Rcpp solution. Notice that the Rcpp version and standard base function `mean()` run at roughly the same speed; after all, the base R function is written in C. However, `mean()` uses a more sophisticated algorithm when calculating the mean to ensure accuracy.
Figure 7\.6: Comparison of mean functions.
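One common higher\-accuracy approach is a two\-pass algorithm: compute a first estimate of the mean, then correct it using the residuals. The C\+\+ sketch below illustrates the idea; it is not necessarily R’s exact implementation.
```
double mean_cpp2(NumericVector x) {
  int n = x.size();
  double est = 0, correction = 0;
  for(int i = 0; i < n; i++)
    est += x[i] / n; // first-pass estimate
  for(int i = 0; i < n; i++)
    correction += (x[i] - est) / n; // second pass corrects rounding error
  return est + correction;
}
```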
#### Exercises
Consider the following piece of code:
```
double test1() {
double a = 1.0 / 81;
double b = 0;
for (int i = 0; i < 729; ++i)
b = b + a;
return b;
}
```
1. Save the function `test1()` in a separate file. Make sure it works.
2. Write a similar function in R and compare the speed of the C\+\+ and R versions.
3. Create a function called `test2()` where the `double` variables have been replaced by `float`. Do you still get the correct answer?
4. Change `b = b + a` to `b += a` to make your code more C\+\+ like.
5. (Hard) What’s the difference between `i++` and `++i`?
#### Matrices
Each vector type has a corresponding matrix equivalent: `NumericMatrix`, `IntegerMatrix`, `CharacterMatrix` and `LogicalMatrix`. We use these types in a similar way to how we used `NumericVector` objects. The main differences are:
* When we initialise, we need to specify the number of rows and columns
```
// 10 rows, 5 columns
NumericMatrix mat(10, 5);
// Length 10
NumericVector v(10);
```
* We subset using `()`, i.e. `mat(5, 4)`.
* The first element in a matrix is `mat(0, 0)` \- remember indexes start with `0` not `1`.
* To determine the number of rows and columns, we use the `.nrow()` and `.ncol()` methods (both appear in the sketch below).
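A minimal sketch combining these points (the function name is our own) computes the row sums of a matrix:
```
// [[Rcpp::export]]
NumericVector row_sums_cpp(NumericMatrix mat) {
  int nrow = mat.nrow(), ncol = mat.ncol();
  NumericVector out(nrow);
  for(int i = 0; i < nrow; i++) {
    for(int j = 0; j < ncol; j++) {
      out[i] += mat(i, j); // subset with (); indices start at 0
    }
  }
  return out;
}
```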
### 7\.6\.6 C\+\+ with sugar on top
**Rcpp** sugar brings a higher\-level of abstraction to C\+\+ code written using the **Rcpp** API. What this means in practice is that we can write C\+\+ code in the style of R. For example, suppose we wanted to find the squared difference of two vectors; a squared residual in regression. In R we would use
```
sq_diff_r = function(x, y) (x - y)^2
```
Rewriting the function in standard C\+\+ would give
```
NumericVector res_c(NumericVector x, NumericVector y) {
int i;
int n = x.size();
NumericVector residuals(n);
for(i = 0; i < n; i++) {
residuals[i] = pow(x[i] - y[i], 2);
}
return residuals;
}
```
With **Rcpp** sugar we can rewrite this code to be more succinct and have more of an R feel:
```
NumericVector res_sugar(NumericVector x, NumericVector y) {
return pow(x - y, 2);
}
```
In the above C\+\+ code, the `pow()` function and `x-y` are valid due to **Rcpp** sugar. Other functions that are available include the d/q/p/r statistical functions, such as `rnorm()` and `pnorm()`. The sweetened versions aren’t usually faster than the C\+\+ version, but typically there’s very little difference between the two. However with the sugared variety, the code is shorter and is constantly being improved.
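For instance, a short sketch (our own function name) using the sugar version of `dnorm()`, which operates element\-wise on a whole vector, just as in R:
```
// [[Rcpp::export]]
NumericVector std_normal_density(NumericVector x) {
  return dnorm(x, 0.0, 1.0); // vectorised standard normal density
}
```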
#### Exercises
1. Construct an R version (using a `for` loop rather than the vectorised solution), `res_r()` and compare the three function variants.
2. In the above example, `res_sugar()` is faster than `res_c()`. Do you know why?
### 7\.6\.7 Rcpp resources
The aim of this section was to provide an introduction to **Rcpp**. One of the selling features of **Rcpp** is that there is a great deal of documentation available.
* The **Rcpp** [website](http://www.rcpp.org/);
* The original Journal of Statistical Software paper describing **Rcpp** and the follow\-up book (Eddelbuettel and François [2011](#ref-Eddelbuettel2011); Eddelbuettel [2013](#ref-Eddelbuettel2013));
* H. Wickham ([2014](#ref-Wickham2014)[a](#ref-Wickham2014)) provides a very readable chapter on **Rcpp** that goes into a bit more detail than this section;
* The **Rcpp** section on the [StackOverflow](https://stackoverflow.com/questions/tagged/rcpp) website. Questions are often answered by the **Rcpp** authors.
8 Efficient hardware
====================
This chapter is odd for a book on R programming. It contains very little code, and yet the chapter has the potential to speed up your algorithms by orders of magnitude. This chapter considers the impact that your computer has on your time.
Your hardware is crucial. It will not only determine how *fast* you can solve your problem, but also whether you can even tackle the problem of interest, because R loads everything into RAM. Of course, having a more powerful computer costs money. The goal of this chapter is to help you decide whether the benefits of upgrading your hardware are worth the extra cost.
We’ll begin this chapter with a background section on computer storage and memory and how it is measured. Then we consider individual computer components, before concluding with renting machines in the cloud.
### Prerequisites
This chapter will focus on assessing your hardware and the benefit of upgrading. We will use the **benchmarkme** package to quantify the effect of changing your CPU.
```
library("benchmarkme")
```
8\.1 Top 5 tips for efficient hardware
--------------------------------------
1. Use the package **benchmarkme** to assess your CPU’s number crunching ability: is it worth upgrading your hardware?
2. If possible, add more RAM.
3. Double check that you have installed a \\(64\\)\-bit version of R.
4. Cloud computing is a cost effective way of obtaining more compute power.
5. A solid state drive typically won’t have much impact on the speed of your R code, but will increase your overall productivity, since I/O is much faster.
8\.2 Background: what is a byte?
--------------------------------
A computer cannot store “numbers” or “letters”. The only thing a computer can store and work with is bits. A bit is binary: it is either a \\(0\\) or a \\(1\\). In fact, from a physics perspective, a bit is just a blip of electricity that either is or isn’t there.
In the past the ASCII character set dominated computing. This set defines \\(128\\) characters including \\(0\\) to \\(9\\), upper and lower case alpha\-numeric and a few control characters such as a new line. To store these characters required \\(7\\) bits
since \\(2^7 \= 128\\), but \\(8\\) bits were typically used for performance [reasons](http://stackoverflow.com/q/14690159/203420). Table [8\.1](hardware.html#tab:ascii) gives the binary representation of the first few characters.
Table 8\.1: The bit representation of a few ASCII characters.
| Bit representation | Character |
| --- | --- |
| \\(01000001\\) | A |
| \\(01000010\\) | B |
| \\(01000011\\) | C |
| \\(01000100\\) | D |
| \\(01000101\\) | E |
| \\(01010010\\) | R |
The limitation of only having \\(256\\) characters (the most that eight bits can represent) led to the development of Unicode, a standard framework aimed at creating a single character set for every reasonable writing system. Unicode characters typically require sixteen bits of storage, although variable\-width encodings such as UTF\-8 are now the most common.
Eight bits make one byte, enough for one ASCII character. So two ASCII characters would use two bytes, or \\(16\\) bits. A pure text document containing \\(100\\) characters would use \\(100\\) bytes (\\(800\\) bits). Note that mark\-up, such as font information or meta\-data, can impose a substantial memory overhead: an empty `.docx` file requires about \\(3,700\\) bytes of storage.
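As a quick sanity check in R (the variable name is arbitrary):
```
x = strrep("a", 100) # a 100-character ASCII string
nchar(x, type = "bytes")
#> [1] 100
```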
When computer scientists first started to think about computer memory, they noticed that \\(2^{10} \= 1024 \\simeq 10^3\\) and \\(2^{20} \=1,048,576\\simeq 10^6\\), so they adopted the shorthand of kilo\- and mega\-bytes. Of course, *everyone* knew that it was just a shorthand, and it was really a binary power. When computers became more widespread, foolish people like you and me just assumed that kilo actually meant \\(10^3\\) bytes.
Fortunately the IEEE Standards Board intervened and created conventional, internationally adopted definitions of the International System of Units (SI) prefixes. So a kilobyte (kB) is \\(10^3 \= 1000\\) bytes and a megabyte (MB) is \\(10^6\\) bytes or \\(10^3\\) kilobytes (see table 8\.2\). A petabyte is approximately \\(100\\) million drawers filled with text. Astonishingly Google processes around \\(20\\) petabytes of data every day.
| Factor | Name | Symbol | Origin | Derivation |
| --- | --- | --- | --- | --- |
| \\(2^{10}\\) | kibi | Ki | Kilobinary: | \\((2^{10})^1\\) |
| \\(2^{20}\\) | mebi | Mi | Megabinary: | \\((2^{10})^2\\) |
| \\(2^{30}\\) | gibi | Gi | Gigabinary: | \\((2^{10})^3\\) |
| \\(2^{40}\\) | tebi | Ti | Terabinary: | \\((2^{10})^4\\) |
| \\(2^{50}\\) | pebi | Pi | Petabinary: | \\((2^{10})^5\\) |
Table 8\.2: Data conversion table. Credit: <http://physics.nist.gov/cuu/Units/binary.html>
Even though there is now an agreed standard for discussing memory, that doesn’t mean that everyone follows it.
Microsoft Windows, for example, uses 1MB to mean \\(2^{20}\\)B. Even more confusingly, the capacity of a \\(1\.44\\)MB floppy disk is a mixture: \\(1\\text{MB} \= 10^3 \\times 2^{10}\\)B. Typically RAM is specified in binary units, but hard drive manufacturers follow the SI standard!
8\.3 Random access memory: RAM
------------------------------
Random access memory (RAM) is a type of computer memory that can be accessed randomly: any byte of memory can be accessed without touching the preceding bytes. RAM is found in computers, phones, tablets and even printers. The amount of RAM R has access to is incredibly important. Since R loads objects into RAM, the amount of RAM you have available can limit the size of data set you can analyse.
Even if the original data set is relatively small, your analysis can generate large objects. For example, suppose we want to perform standard cluster analysis. The built\-in data set `USArrests` is a data frame with \\(50\\) rows and \\(4\\) columns. Each row corresponds to a state in the USA:
```
head(USArrests, 3)
#> Murder Assault UrbanPop Rape
#> Alabama 13.2 236 58 21.2
#> Alaska 10.0 263 48 44.5
#> Arizona 8.1 294 80 31.0
```
If we want to group states that have similar crime statistics, a standard first step is to calculate the distance or similarity matrix
```
d = dist(USArrests)
```
We can inspect the object size of the original data set and the distance object using the **pryr** package:
```
pryr::object_size(USArrests)
#> 5.74 kB
pryr::object_size(d)
#> 14.8 kB
```
The distance object `d` is a vector containing the distances in the upper triangular region, yet it is almost three times larger than the original data set. Conceptually `d` represents a symmetric \\(n \\times n\\) matrix, where \\(n\\) is the number of rows in `USArrests`, so as \\(n\\) increases, the size of `d` grows at rate \\(O(n^2)\\). If our original data set contained \\(10,000\\) records, the associated distance matrix would contain almost \\(10^8\\) values. Of course, since the matrix is symmetric, this corresponds to around \\(50\\) million unique values.
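The back\-of\-the\-envelope arithmetic is easy to check in R: a `dist` object stores \\(n(n\-1)/2\\) doubles at \\(8\\) bytes each.
```
n = 10000
n * (n - 1) / 2 * 8 / 1e9 # approximate size in gigabytes
#> [1] 0.39996
```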
A rough rule of thumb is that your RAM should be three times the size of your data set.
Another benefit of increasing the amount of onboard RAM is that the ‘garbage collector’, a process that runs periodically to free up system memory occupied by R, is called less often. It is straightforward to determine how much RAM you have using the **benchmarkme** package:
```
benchmarkme::get_ram()
#> 16.3 GB
```
Figure 8\.1: Three DIMM slots on a computer motherboard used for increasing the amount of available RAM. Credit: Wikimedia.org
It is sometimes possible to increase your computer’s RAM. On a computer motherboard there are typically \\(2\\) to \\(4\\) RAM or memory slots. If you have free slots, then you can add more memory. RAM comes in the form of dual in\-line memory modules (DIMMs) that can be slotted into the motherboard spaces (see figure [8\.1](hardware.html#fig:8-1) for example).
However it is common that all slots are already taken. This means that to upgrade your computer’s memory, some or all of the DIMMs will have to be removed. To go from \\(8\\)GB to \\(16\\)GB, for example, you may have to discard the two \\(4\\)GB RAM cards and replace them with two \\(8\\)GB cards. Increasing your laptop/desktop from \\(4\\)GB to \\(16\\)GB or \\(32\\)GB is cheap and should definitely be considered. As R Core member Uwe Ligges states,
```
fortunes::fortune(192)
#>
#> RAM is cheap and thinking hurts.
#> -- Uwe Ligges (about memory requirements in R)
#> R-help (June 2007)
```
It is a testament to the design of R that it is still relevant and its popularity is growing. Ross Ihaka, one of the originators of the R programming language, made a throw\-away comment in 2003:
```
fortunes::fortune(21)
#>
#> I seem to recall that we were targetting 512k Macintoshes. In our dreams we
#> might have seen 16Mb Sun.
#> -- Ross Ihaka (in reply to the question whether R&R thought when they
#> started out that they would see R using 16G memory on a dual Opteron
#> computer)
#> R-help (November 2003)
```
Considering that a standard smart phone now contains \\(1\\)GB of RAM, the fact that R was designed for “basic” computers, but can scale across clusters is impressive.
R’s origins on computers with limited resources helps explain its efficiency at dealing with large datasets.
#### Exercises
The following two exercises aim to help you determine if it is worthwhile upgrading your RAM.
1. R loads everything into memory, i.e. your computer’s RAM. How much RAM does your computer have?
2. Using your preferred search engine, how much does it cost to double the amount of available RAM on your system?
8\.4 Hard drives: HDD vs SSD
----------------------------
You are using R because you want to analyse data.
The data is typically stored on your hard drive; but not all hard drives are equal.
Unless you have a fairly expensive laptop your computer probably has a standard hard disk drive (HDD).
HDDs were first introduced by IBM in 1956\. Data is stored using magnetism on a rotating platter, as shown in Figure [8\.2](hardware.html#fig:8-2). The faster the platter spins, the faster the HDD can perform. Many laptop drives spin at either \\(5400\\)RPM (Revolutions per Minute) or \\(7200\\)RPM. The major advantage of HDDs is that they are cheap, making a \\(1\\)TB laptop standard.
In the authors’ experience, having an SSD drive doesn’t make **much** difference to R. However, the reduction in boot time and general tasks makes an SSD drive a wonderful purchase.
Figure 8\.2: A standard 2\.5" hard drive, found in most laptops. Credit: [https://en.wikipedia.org/wiki/Hard\\\_disk\\\_drive](https://en.wikipedia.org/wiki/Hard\_disk\_drive)
Solid state drives (SSDs) can be thought of as large, but more sophisticated versions of USB sticks. They have no moving parts and information is stored in microchips. Since there are no moving parts, reading/writing is much quicker. SSDs have other benefits: they are quieter, allow faster boot time (no ‘spin up’ time) and require less power (more battery life).
The read/write speed for a standard HDD is usually in the region of \\(50\-120\\)MB/s (usually closer to \\(50\\)MB/s). For SSDs, speeds are typically over \\(200\\)MB/s, and top\-of\-the\-range models can approach \\(500\\)MB/s. If you’re wondering, read/write speeds for RAM are around \\(2\-20\\)GB/s. So even the best SSDs are at least one order of magnitude slower than RAM, but still much faster than standard HDDs.
If you are unsure what type of hard drive you have, then time how long your computer takes to reach the log\-in screen. If it is less than five seconds, you probably have an SSD. There are links on the book’s website detailing more precise methods for each OS.
8\.5 Operating systems: 32\-bit or 64\-bit
------------------------------------------
R comes in two versions: \\(32\\)\-bit and \\(64\\)\-bit. Your operating system also comes in two versions, \\(32\\)\-bit and \\(64\\)\-bit. Ideally you want \\(64\\)\-bit versions of both R and the operating system. Using a \\(32\\)\-bit version of either has severe limitations on the amount of RAM R can access. So when we suggest that you should just buy more RAM, this assumes that you are using a \\(64\\)\-bit operating system, with a \\(64\\)\-bit version of R.
If you are using an OS version from the last five years, it is unlikely to be a \\(32\\)\-bit OS.
A \\(32\\)\-bit machine can access at most only \\(4\\)GB of RAM. Although some CPUs offer solutions to this limitation, if you are running a \\(32\\)\-bit operating system, then R is limited to around \\(3\\)GB RAM. If you are running a \\(64\\)\-bit operating system, but only a \\(32\\)\-bit version of R, then you have access to slightly more memory (but not much). Modern systems should run a \\(64\\)\-bit operating system, with a \\(64\\)\-bit version of R. Your memory limit is now measured as \\(8\\) terabytes for Windows machines and \\(128\\)TB for Unix\-based OSs. An easy method for determining if you are running a \\(64\\)\-bit version of R is to run
```
.Machine$sizeof.pointer
```
which will return \\(8\\) if you are running a \\(64\\)\-bit version of R.
To find precise details consult the R help pages `help("Memory-limits")` and `help("Memory")`.
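A convenience wrapper (our own helper, not part of base R):
```
is_64bit = function() .Machine$sizeof.pointer == 8
is_64bit()
#> [1] TRUE
```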
#### Exercises
These exercises aim to condense the previous section into the key points.
1. Are you using \\(32\\)\-bit or \\(64\\)\-bit version of R?
2. If you are using Windows, what are the results of running the command `memory.limit()`?
8\.6 Central processing unit (CPU)
----------------------------------
The central processing unit (CPU), or the processor, is the brains of a computer. The CPU is responsible for performing numerical calculations. The faster the processor, the faster R will run. The clock speed (or clock rate, measured in hertz) is the frequency with which the CPU executes instructions. The faster the clock speed, the more instructions a CPU can execute in a second. CPU clock speed for a single CPU has been fairly static in the last couple of years, hovering around 3\.4GHz (see figure [8\.3](hardware.html#fig:8-3)).
Figure 8\.3: CPU clock speed. The data for this figure was collected from web\-forum and wikipedia. It is intended to indicate general trends in CPU speed.
Unfortunately we can’t simply use clock speeds to compare CPUs, since the internal architecture of a CPU plays a crucial role in determining the CPU performance. The R package **benchmarkme** provides functions for benchmarking your system and contains data from previous benchmarks. Figure [8\.4](hardware.html#fig:8-4) shows the relative performance for over \\(150\\) CPUs.
Figure 8\.4: CPU benchmarks from the R package, **benchmarkme**. Each point represents an individual CPU result.
Running the benchmarks and comparing your CPU to others is straightforward using the **benchmarkme** package.
After loading the package, we can benchmark your CPU
```
res = benchmark_std()
```
and compare the results to other users
```
plot(res)
# Upload your benchmarks for future users
upload_results(res)
```
You get the model specifications of the top CPUs using `get_datatable(res)`.
8\.7 Cloud computing
--------------------
Cloud computing uses networks of remote servers, instead of a local computer, to store and analyse data. It is now becoming increasingly popular to rent cloud computing resources.
### 8\.7\.1 Amazon EC2
Amazon Elastic Compute Cloud (EC2\) is one of a number of cloud computing services. EC2 makes it (relatively) easy to run R instances in the cloud. Users can configure the operating system, CPU, hard drive type, the amount of RAM and where the project is physically located.
If you want to run a server in the Amazon EC2 cloud, you have to select the system you are going to boot up. There are a vast array of pre\-packaged system images. Some of these images are just basic operating systems, such as Debian or Ubuntu, which require further configuration. There is also an [Amazon machine image](http://www.louisaslett.com/RStudio_AMI/) that specifically targets R and RStudio.
#### Exercise
To assess whether you should consider cloud computing, how much does it cost to rent a machine comparable to your laptop in the cloud?
### Prerequisites
This chapter will focus on assessing your hardware and the benefit of upgrading. We will use the **benchmarkme** package to quantify the effect of changing your CPU.
```
library("benchmarkme")
```
8\.1 Top 5 tips for efficient hardware
--------------------------------------
1. Use the package **benchmarkme** to assess your CPUs number crunching ability is it worth upgrading your hardware?
2. If possible, add more RAM.
3. Double check that you have installed a \\(64\\)\-bit version of R.
4. Cloud computing is a cost effective way of obtaining more compute power.
5. A solid state drive typically won’t have much impact on the speed of your R code, but will increase your overall productivity since I/0 is much faster.
8\.2 Background: what is a byte?
--------------------------------
A computer cannot store “numbers” or “letters”. The only thing a computer can store and work with is bits. A bit is binary, it is either a \\(0\\) or a \\(1\\). In fact from a physics perspective, a bit is just a blip of electricity that either is or isn’t there.
In the past the ASCII character set dominated computing. This set defines \\(128\\) characters including \\(0\\) to \\(9\\), upper and lower case alpha\-numeric and a few control characters such as a new line. To store these characters required \\(7\\) bits
since \\(2^7 \= 128\\), but \\(8\\) bits were typically used for performance [reasons](http://stackoverflow.com/q/14690159/203420). Table [8\.1](hardware.html#tab:ascii) gives the binary representation of the first few characters.
```
#> Warning: `frame_data()` was deprecated in tibble 2.0.0.
#> Please use `tribble()` instead.
```
Table 8\.1: The bit representation of a few ASCII characters.
| Bit representation | Character |
| --- | --- |
| \\(01000001\\) | A |
| \\(01000010\\) | B |
| \\(01000011\\) | C |
| \\(01000100\\) | D |
| \\(01000101\\) | E |
| \\(01010010\\) | R |
The limitation of only having \\(256\\) characters led to the development of Unicode, a standard framework aimed at creating a single character set for every reasonable writing system. Typically, Unicode characters require sixteen bits of storage.
Eight bits is one byte, or ASCII character. So two ASCII characters would use two bytes or \\(16\\) bits. A pure text document containing \\(100\\) characters would use \\(100\\) bytes (\\(800\\) bits). Note that mark\-up, such as font information or meta\-data, can impose a substantial memory overhead: an empty `.docx` file requires about \\(3,700\\) bytes of storage.
When computer scientists first started to think about computer memory, they noticed that \\(2^{10} \= 1024 \\simeq 10^3\\) and \\(2^{20} \=1,048,576\\simeq 10^6\\), so they adopted the short hand of kilo\- and mega\-bytes. Of course, *everyone* knew that it was just a short hand, and it was really a binary power. When computers became more wide spread, foolish people like you and me just assumed that kilo actually meant \\(10^3\\) bytes.
Fortunately the IEEE Standards Board intervened and created conventional, internationally adopted definitions of the International System of Units (SI) prefixes. So a kilobyte (kB) is \\(10^3 \= 1000\\) bytes and a megabyte (MB) is \\(10^6\\) bytes or \\(10^3\\) kilobytes (see table 8\.2\). A petabyte is approximately \\(100\\) million drawers filled with text. Astonishingly Google processes around \\(20\\) petabytes of data every day.
| Factor | Name | Symbol | Origin | Derivation |
| --- | --- | --- | --- | --- |
| \\(2^{10}\\) | kibi | Ki | Kilobinary: | \\((2^{10})^1\\) |
| \\(2^{20}\\) | mebi | Mi | Megabinary: | \\((2^{10})^2\\) |
| \\(2^{30}\\) | gibi | Gi | Gigabinary: | \\((2^{10})^3\\) |
| \\(2^{40}\\) | tebi | Ti | Terabinary: | \\((2^{10})^4\\) |
| \\(2^{50}\\) | pebi | Pi | Petabinary: | \\((2^{10})^5\\) |
Table 8\.2: Data conversion table. Credit: <http://physics.nist.gov/cuu/Units/binary.html>
Even though there is now an agreed standard for discussing memory, that doesn’t mean that everyone follows it.
Microsoft Windows, for example, uses 1MB to mean \\(2^{20}\\)B. Even more confusing the capacity of a \\(1\.44\\)MB floppy disk is a mixture, \\(1\\text{MB} \= 10^3 \\times 2^{10}\\)B. Typically RAM is specified in kibibytes, but hard drive manufacturers follow the SI standard!
8\.3 Random access memory: RAM
------------------------------
Random access memory (RAM) is a type of computer memory that can be accessed randomly: any byte of memory can be accessed without touching the preceding bytes. RAM is found in computers, phones, tablets and even printers. The amount of RAM R has access to is incredibly important. Since R loads objects into RAM, the amount of RAM you have available can limit the size of data set you can analyse.
Even if the original data set is relatively small, your analysis can generate large objects. For example, suppose we want to perform standard cluster analysis. The built\-in data set `USArrests`, is a data frame with \\(50\\) rows and \\(4\\) columns. Each row corresponds to a state in the USA
```
head(USArrests, 3)
#> Murder Assault UrbanPop Rape
#> Alabama 13.2 236 58 21.2
#> Alaska 10.0 263 48 44.5
#> Arizona 8.1 294 80 31.0
```
If we want to group states that have similar crime statistics, a standard first step is to calculate the distance or similarity matrix
```
d = dist(USArrests)
```
When we inspect the object size of the original data set and the distance object using the **pryr** package
```
pryr::object_size(USArrests)
#> 5.74 kB
pryr::object_size(d)
#> 14.8 kB
```
The distance object `d` is actually a vector that contains the distances in the upper triangular region.
we have managed to create an object that is three times larger than the original data set. In fact the object `d` is a symmetric \\(n \\times n\\) matrix, where \\(n\\) is the number of rows in `USArrests`. Clearly, as `n` increases the size of `d` increases at rate \\(O(n^2\)\\). So if our original data set contained \\(10,000\\) records, the associated distance matrix would contain almost \\(10^8\\) values. Of course since the matrix is symmetric, this corresponds to around \\(50\\) million unique values.
A rough rule of thumb is that your RAM should be three times the size of your data set.
Another benefit of increasing the amount of onboard RAM is that the ‘garbage collector’, a process that runs periodically to free\-up system memory occupied by R, is called less often. It is straightforward to determine how much RAM you have using the **benchmarkme** package
```
benchmarkme::get_ram()
#> 16.3 GB
```
Figure 8\.1: Three DIMM slots on a computer motherboard used for increasing the amount of available RAM. Credit: Wikimedia.org
It is sometimes possible to increase your computer’s RAM. On a computer motherboard there are typically \\(2\\) to \\(4\\) RAM or memory slots. If you have free slots, then you can add more memory. RAM comes in the form of dual in\-line memory modules (DIMMs) that can be slotted into the motherboard spaces (see figure [8\.1](hardware.html#fig:8-1) for example).
However it is common that all slots are already taken. This means that to upgrade your computer’s memory, some or all of the DIMMs will have to be removed. To go from \\(8\\)GB to \\(16\\)GB, for example, you may have to discard the two \\(4\\)GB RAM cards and replace them with two \\(8\\)GB cards. Increasing your laptop/desktop from \\(4\\)GB to \\(16\\)GB or \\(32\\)GB is cheap and should definitely be considered. As R Core member Uwe Ligges states,
```
fortunes::fortune(192)
#>
#> RAM is cheap and thinking hurts.
#> -- Uwe Ligges (about memory requirements in R)
#> R-help (June 2007)
```
It is a testament to the design of R that it is still relevant and its popularity is growing. Ross Ihaka, one of the originators of the R programming language, made a throw\-away comment in 2003:
```
fortunes::fortune(21)
#>
#> I seem to recall that we were targetting 512k Macintoshes. In our dreams we
#> might have seen 16Mb Sun.
#> -- Ross Ihaka (in reply to the question whether R&R thought when they
#> started out that they would see R using 16G memory on a dual Opteron
#> computer)
#> R-help (November 2003)
```
Considering that a standard smartphone now contains \\(1\\)GB of RAM, the fact that R was designed for “basic” computers, but can scale across clusters, is impressive.
R’s origins on computers with limited resources helps explain its efficiency at dealing with large datasets.
#### Exercises
The following two exercises aim to help you determine if it is worthwhile upgrading your RAM.
1. R loads everything into memory, i.e. your computer’s RAM. How much RAM does your computer have?
2. Using your preferred search engine, how much does it cost to double the amount of available RAM on your system?
8\.4 Hard drives: HDD vs SSD
----------------------------
You are using R because you want to analyse data.
The data is typically stored on your hard drive; but not all hard drives are equal.
Unless you have a fairly expensive laptop your computer probably has a standard hard disk drive (HDD).
HDDs were first introduced by IBM in 1956\. Data is stored using magnetism on a rotating platter, as shown in Figure [8\.2](hardware.html#fig:8-2). The faster the platter spins, the faster the HDD can perform. Many laptop drives spin at either \\(5400\\)RPM (Revolutions per Minute) or \\(7200\\)RPM. The major advantage of HDDs is that they are cheap, making a \\(1\\)TB laptop standard.
Figure 8\.2: A standard 2\.5" hard drive, found in most laptops. Credit: [https://en.wikipedia.org/wiki/Hard\\\_disk\\\_drive](https://en.wikipedia.org/wiki/Hard\_disk\_drive)
Solid state drives (SSDs) can be thought of as large, but more sophisticated, versions of USB sticks. They have no moving parts and information is stored in microchips. Since there are no moving parts, reading/writing is much quicker. SSDs have other benefits: they are quieter, allow faster boot times (no ‘spin up’ time) and require less power (more battery life).
In the authors’ experience, having an SSD doesn’t make **much** difference to R. However, the reduction in boot time and the speed\-up in general tasks make an SSD a wonderful purchase.
The read/write speed for a standard HDD is usually in the region of \\(50\-120\\)MB/s (usually closer to \\(50\\)MB/s). For SSDs, speeds are typically over \\(200\\)MB/s. For top\-of\-the\-range models this can approach \\(500\\)MB/s. If you’re wondering, read/write speeds for RAM are around \\(2\-20\\)GB/s. So even the best SSDs are roughly an order of magnitude slower than RAM, but still considerably faster than standard HDDs.
If you are unsure what type of hard drive you have, then time how long your computer takes to reach the log\-in screen. If it is less than five seconds, you probably have an SSD. There are links on the book’s website detailing more precise methods for each OS.
8\.5 Operating systems: 32\-bit or 64\-bit
------------------------------------------
R comes in two versions: \\(32\\)\-bit and \\(64\\)\-bit. Your operating system also comes in two versions, \\(32\\)\-bit and \\(64\\)\-bit. Ideally you want \\(64\\)\-bit versions of both R and the operating system. Using a \\(32\\)\-bit version of either has severe limitations on the amount of RAM R can access. So when we suggest that you should just buy more RAM, this assumes that you are using a \\(64\\)\-bit operating system, with a \\(64\\)\-bit version of R.
If you are using an OS version from the last five years, it is unlikely to be a \\(32\\)\-bit OS.
A \\(32\\)\-bit machine can access at most only \\(4\\)GB of RAM. Although some CPUs offer solutions to this limitation, if you are running a \\(32\\)\-bit operating system, then R is limited to around \\(3\\)GB RAM. If you are running a \\(64\\)\-bit operating system, but only a \\(32\\)\-bit version of R, then you have access to slightly more memory (but not much). Modern systems should run a \\(64\\)\-bit operating system, with a \\(64\\)\-bit version of R. Your memory limit is now measured as \\(8\\) terabytes for Windows machines and \\(128\\)TB for Unix\-based OSs. An easy method for determining if you are running a \\(64\\)\-bit version of R is to run
```
.Machine$sizeof.pointer
```
which will return \\(8\\) if you are running a \\(64\\)\-bit version of R.
To find precise details consult the R help pages `help("Memory-limits")` and `help("Memory")`.
#### Exercises
These exercises aim to condense the previous section into the key points.
1. Are you using \\(32\\)\-bit or \\(64\\)\-bit version of R?
2. If you are using Windows, what are the results of running the command `memory.limit()`?
8\.6 Central processing unit (CPU)
----------------------------------
The central processing unit (CPU), or the processor, is the brains of a computer. The CPU is responsible for performing numerical calculations. The faster the processor, the faster R will run. The clock speed (or clock rate, measured in hertz) is the frequency with which the CPU executes instructions. The faster the clock speed, the more instructions a CPU can execute in a second. CPU clock speed for a single CPU has been fairly static in the last couple of years, hovering around 3\.4GHz (see figure [8\.3](hardware.html#fig:8-3)).
Figure 8\.3: CPU clock speed. The data for this figure was collected from web forums and Wikipedia. It is intended to indicate general trends in CPU speed.
Unfortunately we can’t simply use clock speeds to compare CPUs, since the internal architecture of a CPU plays a crucial role in determining the CPU performance. The R package **benchmarkme** provides functions for benchmarking your system and contains data from previous benchmarks. Figure [8\.4](hardware.html#fig:8-4) shows the relative performance for over \\(150\\) CPUs.
Figure 8\.4: CPU benchmarks from the R package, **benchmarkme**. Each point represents an individual CPU result.
Running the benchmarks and comparing your CPU to others is straightforward using the **benchmarkme** package.
After loading the package, we can benchmark your CPU
```
library("benchmarkme")
res = benchmark_std()
```
and compare the results to other users
```
plot(res)
# Upload your benchmarks for future users
upload_results(res)
```
You can get the model specifications of the top CPUs using `get_datatable(res)`.
8\.7 Cloud computing
--------------------
Cloud computing uses networks of remote servers, instead of a local computer, to store and analyse data. It is now becoming increasingly popular to rent cloud computing resources.
### 8\.7\.1 Amazon EC2
Amazon Elastic Compute Cloud (EC2\) is one of a number of providers of this service. EC2 makes it (relatively) easy to run R instances in the cloud. Users can configure the operating system, CPU, hard drive type, the amount of RAM and where your project is physically located.
If you want to run a server in the Amazon EC2 cloud, you have to select the system you are going to boot up. There are a vast array of pre\-packaged system images. Some of these images are just basic operating systems, such as Debian or Ubuntu, which require further configuration. There is also an [Amazon machine image](http://www.louisaslett.com/RStudio_AMI/) that specifically targets R and RStudio.
#### Exercise
To assess whether you should consider cloud computing, how much does it cost to rent a machine comparable to your laptop in the cloud?
9 Efficient collaboration
=========================
Large projects inevitably involve many people. This poses risks but also opportunities for improving computational efficiency and productivity, especially if project collaborators are reading and committing code. This chapter provides guidance on how to minimise the risks and maximise the benefits of collaborative R programming.
Collaborative working has a number of benefits. A team with a diverse skill set is usually stronger than a team with a very narrow focus. It makes sense to specialize: clearly defining roles such as statistician, front\-end developer, system administrator and project manager will make your team stronger. Even if you are working alone, dividing the work into discrete branches in this way can be useful, as discussed in Chapter [4](workflow.html#workflow).
Collaborative programming provides an opportunity for people to review each other’s code. This can be encouraged by using a uniform style with many comments, as described in Section [9\.2](collaboration.html#coding-style). Like using a clear style in human language, following a style guide has the additional advantage of making your code more understandable to others.
When working on complex programming projects with multiple inter\-dependencies version control is essential. Even on small projects tracking the progress of your project’s code\-base has many advantages and makes collaboration much easier. Fortunately it is now easier than ever before to integrate version control into your project, using RStudio’s interface to the version control software `git` and online code sharing websites such as GitHub. This is the subject of Section [9\.3](collaboration.html#version-control).
The final section, [9\.4](collaboration.html#code-review), addresses the question of working in a team and performing
code reviews.
### Prerequisites
This chapter deals with coding standards and techniques. The only packages required for this
chapter are **lubridate** and **dplyr**. These packages are used to illustrate good practice.
9\.1 Top 5 tips for efficient collaboration
-------------------------------------------
1. Have a consistent coding style.
2. Think carefully about your comments and keep them up to date.
3. Use version control whenever possible.
4. Use informative commit messages.
5. Don’t be afraid to elicit feedback from colleagues.
9\.2 Coding style
-----------------
To be a successful programmer you need to use a consistent programming style.
There is no single ‘correct’ style, but using multiple styles in the same project is wrong (Bååth [2012](#ref-ba_aa_ath_state_2012)). To some extent good style is subjective and down to personal taste. There are, however, general principles that
most programmers agree on, such as:
* Use modular code;
* Comment your code;
* Don’t Repeat Yourself (DRY);
* Be concise, clear and consistent.
Good coding style will make you more efficient even if you are the only person who reads it.
When your code is read by multiple readers or you are developing code with co\-workers, having a consistent style is even more important. There are a number of R style guides online that are broadly similar, including one by
[Google](https://google-styleguide.googlecode.com/svn/trunk/Rguide.xml), [Hadley Wickham](http://adv-r.had.co.nz/Style.html) and [Richie Cotton](https://4dpiecharts.com/r-code-style-guide/).
The style followed in this book is based on a combination of Hadley Wickham’s guide and our own preferences (we follow Yihui Xie in preferring `=` to `<-` for assignment, for example).
In line with the principle of automation (automate any task that can save time by being automated), the easiest way to improve your code is to ask your computer to do it, using RStudio.
### 9\.2\.1 Reformatting code with RStudio
RStudio can automatically clean up poorly indented and formatted code. To do this, select the lines that need to be formatted (e.g. via `Ctrl+A` to select the entire script) then automatically indent it with `Ctrl+I`. The shortcut `Ctrl+Shift+A` will reformat the code, adding spaces for maximum readability. An example is provided below.
```
# Poorly indented/formatted code
if(!exists("x")){
x=c(3,5)
y=x[2]}
```
This code chunk works but is not pleasant to read. RStudio automatically indents the code after the `if` statement as follows.
```
# Automatically indented code (Ctrl+I in RStudio)
if(!exists("x")){
  x=c(3,5)
  y=x[2]}
```
This is a start, but it’s still not easy to read. This can be fixed in RStudio as illustrated below (these options can be seen in the Code menu, accessed with `Alt+C` on Windows/Linux computers).
```
# Automatically reformat the code (Ctrl+Shift+A in RStudio)
if(!exists("x")) {
x = c(3, 5)
y = x[2]
}
```
Note that some aspects of style are subjective: we would not leave a space after the `if` and `)`.
### 9\.2\.2 File names
File names should use the `.R` extension and should be lower case (e.g. `load.R`). Avoid spaces. Use a dash or underscore to separate words.
```
# Good names
normalise.R
load.R
# Bad names
Normalise.r
load data.R
```
Section 1\.1 of [Writing R Extensions](https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Package-structure) provides more detailed guidance on file names, such as avoiding non\-English alphabetic characters, since they cannot be guaranteed to work across locales. While these guidelines are strict, following them makes your scripts more portable.
### 9\.2\.3 Loading packages
Library function calls should be at the top of your script. When loading an essential package, use `library` instead of `require` since a missing package will then raise an error. If a package isn’t essential, use `require` and appropriately capture the warning raised. Package names should be surrounded with speech marks.
```
# Good
library("dplyr")
# Non-standard evaluation
library(dplyr)
```
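As a sketch of the non\-essential case (the package name here is just an illustration):
```
# Fall back gracefully when a suggested package is missing
if(!require("ggplot2", quietly = TRUE)) {
  message("ggplot2 not available; skipping the plots")
}
```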
Avoid listing every package you may need, instead just include the packages you actually use. If you find that you are loading many packages, consider putting all packages in a file called `packages.R` and using `source` appropriately.
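A minimal sketch of that pattern (the file name and package list are illustrative):
```
# packages.R: load every package the project needs
pkgs = c("dplyr", "lubridate")
for (pkg in pkgs) {
  library(pkg, character.only = TRUE)
}
```
Each analysis script can then begin with `source("packages.R")`.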
### 9\.2\.4 Commenting
Comments can greatly improve the efficiency of collaborative projects by helping everyone to understand what each line of code is doing. However comments should be used carefully: plastering your script with comments does not necessarily make it more efficient, and too many comments can be inefficient. Updating heavily commented code can be a pain, for example: not only will you have to change all the R code, you’ll also have to rewrite or delete all the comments!
Ensure that your comments are meaningful. Avoid using verbose English to explain standard R code. The comment below, for example, adds no useful information because it is obvious by reading the code that `x` is being set to 1:
```
# Setting x equal to 1
x = 1
```
Instead, comments should provide context. Imagine `x` was being used as a counter (in which case it should probably have a more meaningful name, like `counter`, but we’ll continue to use `x` for illustrative purposes). In that case the comment could explain your intention for its future use:
```
# Initialize counter
x = 1
```
The example above illustrates that comments are more useful if they provide context and explain the programmer’s intention (McConnell [2004](#ref-Mcconnell2004)). Each comment line should begin with a single hash (`#`), followed by a space. Comments can be toggled (turned on and off) with `Ctrl+Shift+C` in RStudio. The double hash (`##`) can be reserved for R output. If you follow your comment with four dashes (`# ----`) RStudio will enable code folding until the next instance of this, as sketched below.
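A short sketch of the folding behaviour (the section names are arbitrary):
```
# Load data ----
x = c(1, 5, 9)

# Summarise data ----
mean(x)
```
RStudio folds everything between one `# ... ----` comment and the next.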
### 9\.2\.5 Object names
> “When I use a word,” Humpty Dumpty said, in a rather scornful tone,
> “it means just what I choose it to mean \- neither more nor less.”
>
>
> * Lewis Carroll \- Through the Looking Glass, Chapter 6\.
It is important for objects and functions to be named consistently and sensibly. To take a silly example, imagine if all objects in your projects were called `x`, `xx`, `xxx` etc. The code would run fine. However, it would be hard for other people, and a future you, to figure out what was going on, especially when you got to the object `xxxxxxxxxx`!
For this reason, giving a clear and consistent name to your objects, especially if they are going to be used many times in your script, can boost project efficiency (if an object is only used once, its name is less important, a case where `x` could be acceptable). Following discussion in Bååth ([2012](#ref-ba_aa_ath_state_2012)) and elsewhere, we suggest an `underscore_separated` style for function and object names[23](#fn23). Unless you are creating an S3 object, avoid using a `.` in the name (this will help avoid confusing Python programmers!). Names should be concise yet meaningful.
In functions the required arguments should always be first, followed by optional arguments. The special `...` argument should be last. If your argument has a boolean value, use `TRUE`/`FALSE` instead of `T`/`F` for clarity.
It’s tempting to use `T`/`F` as shortcuts. But it is easy to accidentally redefine these variables, e.g. `F = 10`. R raises an error if you try to redefine `TRUE`/`FALSE`.
While it’s possible to write arguments that depend on other arguments, try to avoid this idiom, as it makes the default behaviour harder to understand. Typically it’s easier to give an argument a default value of `NULL` and check its value using `is.null()` than to use `missing()`, as in the sketch below.
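A minimal sketch, using a hypothetical function:
```
# Default of NULL, checked with is.null(), instead of missing()
add_noise = function(x, sd = NULL) {
  if(is.null(sd)) {
    sd = 1 # a sensible default, set in one visible place
  }
  x + rnorm(length(x), sd = sd)
}
```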
Where possible, avoid using names of existing functions.
### 9\.2\.6 Example package
The `lubridate` package is a good example of a package that has a consistent naming system, to make it easy for users to guess its features and behaviour. Dates are encoded in a variety of ways, but the `lubridate` package has a neat set of functions consisting of the three letters, **y**ear, **m**onth and **d**ay. For example,
```
library("lubridate")
ymd("2012-01-02")
dmy("02-01-2012")
mdy("01-02-2012")
```
### 9\.2\.7 Assignment
The two most common ways of assigning values to objects in R are with `<-` and `=`. In most (but not all) contexts, they can be used interchangeably. Regardless of which operator you prefer, consistency is key, particularly when working in a group. In this book we use the `=` operator for assignment, as it’s faster to type and more consistent with other languages.
The one place where a difference occurs is during function calls. Consider the following piece of code used for timing random number generation
```
system.time(expr1 <- rnorm(10e5))
system.time(expr2 = rnorm(10e5)) # error
```
The first line will run correctly **and** create a variable called `expr1`. The second line will raise an error. When we use `=` in a function call, it changes from an *assignment* operator to an *argument passing* operator. For further information about assignment, see `?assignOps`.
### 9\.2\.8 Spacing
Consistent spacing is an easy way of making your code more readable. Even a simple command such as `x = x + 1` takes a bit more time to understand when the spacing is removed, i.e. `x=x+1`. You should add a space around the operators `+`, `-`, `/` and `*`. Include a space around the assignment operators, `<-` and `=`. Additionally, add a space around any comparison operators such as `==` and `<`. The latter rule helps avoid bugs
```
# Bug. x now equals 1
x[x<-1]
# Correct. Selecting values less than -1
x[x < -1]
```
The exceptions to the space rule are `:`, `::` and `:::`, as well as `$` and `@` symbols for selecting sub\-parts of objects. As with English, add a space after a comma, e.g.
```
z[z$colA > 1990, ]
```
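A few examples of the exception cases described above:
```
x = 1:10 # no spaces around :
stats::sd(x) # nor around ::
mtcars$mpg # nor around $
```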
### 9\.2\.9 Indentation
Use two spaces to indent code. Never mix tabs and spaces. RStudio can automatically convert the tab character to spaces (see `Tools -> Global options -> Code`).
### 9\.2\.10 Curly braces
Consider the following code:
```
# Bad style, fails
if(x < 5)
{
y}
else {
x}
```
Typing this straight into R will result in an error. An opening curly brace, `{` should not go on its own line and should always be followed by a line break. A closing curly brace should always go on its own line (unless it’s followed by an `else`, in which case the `else` should go on its own line). The code inside curly braces should be indented (and RStudio will enforce this rule), as shown below.
```
# Good style
if(x < 5){
x
} else {
y
}
```
#### Exercises
Look at the difference between your style and RStudio’s based on a representative R script that you have written (see Section [9\.2](collaboration.html#coding-style)). What are the similarities? What are the differences? Are you consistent? Write these down and think about how you can use the results to improve your coding style.
9\.3 Version control
--------------------
When a project gets large, complicated or mission\-critical it is important to keep track of how it evolves. In the same way that Dropbox saves a ‘backup’ of your files, version control systems keep a backup of your code. The only difference is that version control systems back\-up your code *forever*.
The version control system we recommend is Git, a command\-line application created by Linus Torvalds, who also invented Linux.[24](#fn24) The easiest way to integrate your R projects with Git, if you’re not accustomed to using a shell (e.g. the Unix command line), is with RStudio’s Git tab, in the top right\-hand window (see figure [9\.1](collaboration.html#fig:9-1)). This shows a number of files have been modified (as illustrated with the blue M symbol) and that some are new (as illustrated with the yellow ? symbol). Checking the tick\-box will enable these files to be *committed*.
### 9\.3\.1 Commits
Commits are the basic units of version control. Keep your commits ‘atomic’: each one should only do one thing. Document your work with clear and concise commit messages; use the present tense, e.g. ‘Add analysis functions’.
Committing code only updates the files on your ‘local’ branch. To update the files stored on a remote server (e.g. on GitHub), you must ‘push’ the commit. This can be done using `git push` from a shell or using the green up arrow in RStudio, illustrated in figure [9\.1](collaboration.html#fig:9-1). The blue down arrow will ‘pull’ the latest version of the repository from the remote.[25](#fn25)
Figure 9\.1: The Git tab in RStudio
### 9\.3\.2 Git integration in RStudio
How can you enable this functionality on your installation of RStudio? RStudio can act as a GUI for Git only if Git has been installed *and* RStudio can find it. You need a working installation of Git (e.g. installed through `apt-get install git` on Ubuntu/Debian or via [GitHub Desktop](https://help.github.com/desktop/guides/getting-started/installing-github-desktop/) for Mac and Windows). RStudio can be linked to your Git installation via Tools \> Global Options, in the Git/SVN tab. This tab also provides a [link](https://support.rstudio.com/hc/en-us/articles/200532077) to a help page on RStudio/Git.
Once Git has been linked to your RStudio installation, it can be used to track changes in a new project by selecting `Create a git repository` when creating a new project. The tab illustrated in figure [9\.1](collaboration.html#fig:9-1) will appear, allowing functionality for interacting with Git via RStudio.
RStudio provides a useful GUI for navigating past commits. This allows you to see the entire history of your project. To navigate and view the details of past commits click on the Diff button in the Git pane, as illustrated in figure [9\.2](collaboration.html#fig:9-2).
Figure 9\.2: The Git history navigation interface
### 9\.3\.3 GitHub
GitHub is an online platform that makes sharing your work and collaborative code easy. There are alternatives such as [GitLab](https://about.gitlab.com/). The focus here is on GitHub as it’s by far the most popular among R developers. Also, through the command `devtools::install_github()`, preview versions of a package can be installed and updated in an instant. This makes ‘GitHub packages’ a great way to access the latest functionality. And GitHub makes it easy to get your work ‘out there’ to the world for efficiently collaborating with others, without the restraints placed on CRAN packages.
To install the GitHub version of the **benchmarkme** package, for example one would enter
```
devtools::install_github("csgillespie/benchmarkme")
```
Note that `csgillespie` is the GitHub user and `benchmarkme` is the package name. Replacing `csgillespie` with `robinlovelace` in the above code would install Robin’s version of the package. This is useful for fast collaboration with many people, but you must remember that GitHub packages will not update automatically with the command `update.packages` (see [2\.3\.5](set-up.html#updating-r-packages)).
Warning: although GitHub is fantastic for collaboration, it can end up creating more problems than it solves if your collaborators are not git\-literate. In one project, Robin eventually abandoned using GitHub to collaborate after his collaborator found it impossible to work with. More time was being spent debugging git/GitHub than actually working. Our advice therefore is to **never impose git** and always ensure that other lines of communication (e.g. phone calls, emails) are open as different people prefer different ways of communicating.
### 9\.3\.4 Branches, forks, pulls and clones
Git is a large program which takes a long time to learn in depth. However, getting to grips with the basics of some of its more advanced functions can make you a more efficient collaborator. Using and merging branches, for example, allows you to test new features in a self\-contained environment before they are used in production (e.g. when shifting to an updated version of a package which is not backwards compatible). Instead of bogging you down with a comprehensive discussion of what is possible, this section cuts to the most important features for collaboration: branches, forks, pulls and clones. For a more detailed description of Git’s powerful functionality, we recommend Jenny Bryan’s [book](http://happygitwithr.com/), “Happy Git and GitHub for the useR”.
Branches are distinct versions of your repository. Git allows you to jump seamlessly between different versions of your entire project. To create a new branch called test, you need to enter the shell and use the Git command line:
```
git checkout -b test
```
This is the equivalent of entering two commands: `git branch test` to create the branch and then `git checkout test` to *checkout* that branch. Checkout means switch into that branch. Any changes will not affect your previous branch. In RStudio you can jump quickly between branches using the drop down menu in the top right of the Git pane. This is illustrated in figure [9\.1](collaboration.html#fig:9-1): see the `master` text followed by a down arrow. Clicking on this will allow you to select other branches.
Forks are like branches but they exist on other people’s computers. You can fork a repository on GitHub easily, as described on the site’s [help pages](https://help.github.com/articles/fork-a-repo/). If you want an exact copy of this repository (including the commit history) you can *clone* this fork to your computer using the command `git clone` or by using a Git GUI such as GitHub Desktop. This is preferable from a collaboration perspective compared to cloning the repository directly, because any changes can be pushed back online easily if you are working from your own fork. You cannot push to forks that you have not created. If you want your work to be incorporated into the original fork you can use a *pull request*. Note: if you don’t need the project’s entire commit history, you can simply download a zip file containing the latest version of the repository from GitHub (see at the top right of any GitHub repository).
A pull request (PR) is a mechanism on GitHub by which your code can be added to an existing project. One of the most useful features of a PR from a collaboration perspective is that it provides an opportunity for others to comment on your code, line by line, before it gets merged. This is all done online on GitHub, as discussed in [GitHub’s online help](https://help.github.com/articles/merging-a-pull-request/). Following feedback, you may want to refactor code, written by you or others.
9\.4 Code review
----------------
What is a code review?[26](#fn26) Simply put: when we have finished working on a piece of code, a colleague reviews our work and considers questions such as:
* Is the code correct and properly documented?
* Could the code be improved?
* Does the code conform to existing style guidelines?
* Are there any automated tests? If so, are they sufficient?
A good code review shares knowledge and best practice.
A lightweight code review can take a variety of forms. For example, it could be as simple as emailing round some code for comments, or an ‘over the shoulder’ review, where someone literally looks over your shoulder while you code. More formal techniques include pair programming, where two developers work side by side on the same project.
Regardless of the review method being employed, there are a number of points to remember. First, as with all forms of feedback, be constructive: rather than pointing out flaws, give suggested improvements. Closely related, give praise when appropriate. Second, if you are reviewing a piece of code, set a time frame or a maximum number of lines of code for the review. For example, you might spend one hour reviewing a piece of code, or review at most \\(400\\) lines. Third, a code review should be performed before the code is merged into a larger code base; fix mistakes as soon as possible.
Many R users don’t work in a team or group; instead they work by themselves. Practically, there isn’t anyone nearby to review their code. However there is still the option of an *unofficial* code review. For example, if you have hosted code on an online repository such as GitHub, users will naturally give feedback on your code (especially if you make it clear that you welcome feedback). Another good place is StackOverflow (covered in detail in chapter [10](learning.html#learning)). This site allows you to post answers to other users’ questions. When you post an answer, if your code is unclear, this will be flagged in comments below your answer.
### Prerequisites
This chapter deals with coding standards and techniques. The only packages required for this
chapter are **lubridate** and **dplyr**. These packages are used to illustrate good practice.
9\.1 Top 5 tips for efficient collaboration
-------------------------------------------
1. Have a consistent coding style.
2. Think carefully about your comments and keep them up to date.
3. Use version control whenever possible.
4. Use informative commit messages.
5. Don’t be afraid to elicit feedback from colleagues.
9\.2 Coding style
-----------------
To be a successful programmer you need to use a consistent programming style.
There is no single ‘correct’ style, but using multiple styles in the same project is wrong (Bååth [2012](#ref-ba_aa_ath_state_2012)). To some extent good style is subjective and down to personal taste. There are, however, general principles that
most programmers agree on, such as:
* Use modular code;
* Comment your code;
* Don’t Repeat Yourself (DRY);
* Be concise, clear and consistent.
Good coding style will make you more efficient even if you are the only person who reads it.
When your code is read by multiple readers or you are developing code with co\-workers, having a consistent style is even more important. There are a number of R style guides online that are broadly similar, including one by
[Google](https://google-styleguide.googlecode.com/svn/trunk/Rguide.xml), [Hadley Whickham](http://adv-r.had.co.nz/Style.html) and [Richie Cotton](https://4dpiecharts.com/r-code-style-guide/).
The style followed in this book is based on a combination of Hadley Wickham’s guide and our own preferences (we follow Yihui Xie in preferring `=` to `<-` for assignment, for example).
In\-line with the principle of automation (automate any task that can save time by automating), the easiest way to improve your code is to ask your computer to do it, using RStudio.
### 9\.2\.1 Reformatting code with RStudio
RStudio can automatically clean up poorly indented and formatted code. To do this, select the lines that need to be formatted (e.g. via `Ctrl+A` to select the entire script) then automatically indent it with `Ctrl+I`. The shortcut `Ctrl+Shift+A` will reformat the code, adding spaces for maximum readability. An example is provided below.
```
# Poorly indented/formatted code
if(!exists("x")){
x=c(3,5)
y=x[2]}
```
This code chunk works but is not pleasant to read. RStudio automatically indents the code after the `if` statement as follows.
```
# Automatically indented code (Ctrl+I in RStudio)
if(!exists("x")){
x=c(3,5)
y=x[2]}
```
This is a start, but it’s still not easy to read. This can be fixed in RStudio as illustrated below (these options can be seen in the Code menu, accessed with `Alt+C` on Windows/Linux computers).
```
# Automatically reformat the code (Ctrl+Shift+A in RStudio)
if(!exists("x")) {
x = c(3, 5)
y = x[2]
}
```
Note that some aspects of style are subjective: we would not leave a space after the `if` and `)`.
### 9\.2\.2 File names
File names should use the `.R` extension and should be lower case (e.g. `load.R`). Avoid spaces. Use a dash or underscore to separate words.
```
# Good names
normalise.R
load.R
# Bad names
Normalise.r
load data.R
```
Section 1\.1 of [Writing R Extensions](https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Package-structure) provides more detailed guidance on file names, such as avoiding non\-English alphabetic characters since they cannot be guaranteed to work across locales. While the guidelines are strict, the guidance aids in making your scripts more portable.
### 9\.2\.3 Loading packages
Library function calls should be at the top of your script. When loading an essential package, use `library` instead of `require` since a missing package will then raise an error. If a package isn’t essential, use `require` and appropriately capture the warning raised. Package names should be surrounded with speech marks.
```
# Good
library("dplyr")
# Non-standard evaluation
library(dplyr)
```
Avoid listing every package you may need, instead just include the packages you actually use. If you find that you are loading many packages, consider putting all packages in a file called `packages.R` and using `source` appropriately.
### 9\.2\.4 Commenting
Comments can greatly improve the efficiency of collaborative projects by helping everyone to understand what each line of code is doing. However comments should be used carefully: plastering your script with comments does not necessarily make it more efficient, and too many comments can be inefficient. Updating heavily commented code can be a pain, for example: not only will you have to change all the R code, you’ll also have to rewrite or delete all the comments!
Ensure that your comments are meaningful. Avoid using verbose English to explain standard R code. The comment below, for example, adds no useful information because it is obvious by reading the code that `x` is being set to 1:
```
# Setting x equal to 1
x = 1
```
Instead, comments should provide context. Imagine `x` was being used as a counter (in which case it should probably have a more meaningful name, like `counter`, but we’ll continue to use `x` for illustrative purposes). In that case the comment could explain your intention for its future use:
```
# Initialize counter
x = 1
```
The example above illustrates that comments are more useful if they provide context and explain the programmer’s intention (McConnell [2004](#ref-Mcconnell2004)). Each comment line should begin with a single hash (`#`), followed by a space. Comments can be toggled (turned on and off) in this way with `Ctl+Shift+C` in RStudio. The double hash (`##`) can be reserved for R output. If you follow your comment with four dashes (`# ----`) RStudio will enable code folding until the next instance of this.
### 9\.2\.5 Object names
> “When I use a word,” Humpty Dumpty said, in a rather scornful tone,
> “it means just what I choose it to mean \- neither more nor less.”
>
>
> * Lewis Carroll \- Through the Looking Glass, Chapter 6\.
It is important for objects and functions to be named consistently and sensibly. To take a silly example, imagine if all objects in your projects were called `x`, `xx`, `xxx` etc. The code would run fine. However, it would be hard for other people, and a future you, to figure out what was going on, especially when you got to the object `xxxxxxxxxx`!
For this reason, giving a clear and consistent name to your objects, especially if they are going to be used many times in your script, can boost project efficiency (if an object is only used once, its name is less important, a case where `x` could be acceptable). Following discussion in (Bååth [2012](#ref-ba_aa_ath_state_2012)) and elsewhere, suggest an `underscore_separated` style for function and object names[23](#fn23). Unless you are creating an S3 object, avoid using a `.` in the name (this will help avoid confusing Python programmers!). Names should be concise yet meaningful.
In functions the required arguments should always be first, followed by optional arguments. The special `...` argument should be last. If your argument has a boolean value, use `TRUE`/`FALSE` instead of `T`/`F` for clarity.
It’s tempting to use `T`/`F` as shortcuts. But it is easy to accidentally redefine these variables, e.g. `F = 10`. R raises an error if you try to redefine `TRUE`/`FALSE`.
While it’s possible to write arguments that depend on other arguments, try to avoid using this idiom
as it makes understanding the default behaviour harder to understand. Typically it’s easier to set an argument to have a default value of `NULL` and check its value using `is.null` than by using `missing`.
Where possible, avoid using names of existing functions.
### 9\.2\.6 Example package
The `lubridate` package is a good example of a package that has a consistent naming system, to make it easy for users to guess its features and behaviour. Dates are encoded in a variety of ways, but the `lubridate` package has a neat set of functions consisting of the three letters, **y**ear, **m**onth and **d**ay. For example,
```
library("lubridate")
ymd("2012-01-02")
dmy("02-01-2012")
mdy("01-02-2012")
```
### 9\.2\.7 Assignment
The two most common ways of assigning objects to values in R is with `<-` and `=`. In most (but not all) contexts, they can be used interchangeably. Regardless of which operator you prefer, consistency is key, particularly when working in a group. In this book we use the `=` operator for assignment, as it’s faster to type and more consistent with other languages.
The one place where a difference occurs is during function calls. Consider the following piece of code used for timing random number generation
```
system.time(expr1 <- rnorm(10e5))
system.time(expr2 = rnorm(10e5)) # error
```
The first lines will run correctly **and** create a variable called `expr1`. The second line will raise an error. When we use `=` in a function call, it changes from an *assignment* operator to an *argument passing* operator. For further information about assignment, see `?assignOps`.
### 9\.2\.8 Spacing
Consistent spacing is an easy way of making your code more readable. Even a simple command such as `x = x + 1` takes a bit more time to understand when the spacing is removed, i.e. `x=x+1`. You should add a space around the operators `+`, `-`, `\` and `*`. Include a space around the assignment operators, `<-` and `=`. Additionally, add a space around any comparison operators such as `==` and `<`. The latter rule helps avoid bugs
```
# Bug. x now equals 1
x[x<-1]
# Correct. Selecting values less than -1
x[x < -1]
```
The exceptions to the space rule are `:`, `::` and `:::`, as well as `$` and `@` symbols for selecting sub\-parts of objects. As with English, add a space after a comma, e.g.
```
z[z$colA > 1990, ]
```
### 9\.2\.9 Indentation
Use two spaces to indent code. Never mix tabs and spaces. RStudio can automatically convert the tab character to spaces (see `Tools -> Global options -> Code`).
### 9\.2\.10 Curly braces
Consider the following code:
```
# Bad style, fails
if(x < 5)
{
y}
else {
x}
```
Typing this straight into R will result in an error. An opening curly brace, `{` should not go on its own line and should always be followed by a line break. A closing curly brace should always go on its own line (unless it’s followed by an `else`, in which case the `else` should go on its own line). The code inside curly braces should be indented (and RStudio will enforce this rule), as shown below.
```
# Good style
if(x < 5){
x
} else {
y
}
```
#### Exercises
Look at the difference between your style and RStudio’s based on a representative R script that you have written (see Section [9\.2](collaboration.html#coding-style)). What are the similarities? What are the differences? Are you consistent? Write these down and think about how you can use the results to improve your coding style.
### 9\.2\.1 Reformatting code with RStudio
RStudio can automatically clean up poorly indented and formatted code. To do this, select the lines that need to be formatted (e.g. via `Ctrl+A` to select the entire script) then automatically indent it with `Ctrl+I`. The shortcut `Ctrl+Shift+A` will reformat the code, adding spaces for maximum readability. An example is provided below.
```
# Poorly indented/formatted code
if(!exists("x")){
x=c(3,5)
y=x[2]}
```
This code chunk works but is not pleasant to read. RStudio automatically indents the code after the `if` statement as follows.
```
# Automatically indented code (Ctrl+I in RStudio)
if(!exists("x")){
x=c(3,5)
y=x[2]}
```
This is a start, but it’s still not easy to read. This can be fixed in RStudio as illustrated below (these options can be seen in the Code menu, accessed with `Alt+C` on Windows/Linux computers).
```
# Automatically reformat the code (Ctrl+Shift+A in RStudio)
if(!exists("x")) {
x = c(3, 5)
y = x[2]
}
```
Note that some aspects of style are subjective: we would not leave a space after the `if` and `)`.
### 9\.2\.2 File names
File names should use the `.R` extension and should be lower case (e.g. `load.R`). Avoid spaces. Use a dash or underscore to separate words.
```
# Good names
normalise.R
load.R
# Bad names
Normalise.r
load data.R
```
Section 1\.1 of [Writing R Extensions](https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Package-structure) provides more detailed guidance on file names, such as avoiding non\-English alphabetic characters since they cannot be guaranteed to work across locales. While the guidelines are strict, the guidance aids in making your scripts more portable.
### 9\.2\.3 Loading packages
Library function calls should be at the top of your script. When loading an essential package, use `library` instead of `require` since a missing package will then raise an error. If a package isn’t essential, use `require` and appropriately capture the warning raised. Package names should be surrounded with speech marks.
```
# Good
library("dplyr")
# Non-standard evaluation
library(dplyr)
```
Avoid listing every package you may need, instead just include the packages you actually use. If you find that you are loading many packages, consider putting all packages in a file called `packages.R` and using `source` appropriately.
### 9\.2\.4 Commenting
Comments can greatly improve the efficiency of collaborative projects by helping everyone to understand what each line of code is doing. However comments should be used carefully: plastering your script with comments does not necessarily make it more efficient, and too many comments can be inefficient. Updating heavily commented code can be a pain, for example: not only will you have to change all the R code, you’ll also have to rewrite or delete all the comments!
Ensure that your comments are meaningful. Avoid using verbose English to explain standard R code. The comment below, for example, adds no useful information because it is obvious by reading the code that `x` is being set to 1:
```
# Setting x equal to 1
x = 1
```
Instead, comments should provide context. Imagine `x` was being used as a counter (in which case it should probably have a more meaningful name, like `counter`, but we’ll continue to use `x` for illustrative purposes). In that case the comment could explain your intention for its future use:
```
# Initialize counter
x = 1
```
The example above illustrates that comments are more useful if they provide context and explain the programmer’s intention (McConnell [2004](#ref-Mcconnell2004)). Each comment line should begin with a single hash (`#`), followed by a space. Comments can be toggled (turned on and off) in this way with `Ctl+Shift+C` in RStudio. The double hash (`##`) can be reserved for R output. If you follow your comment with four dashes (`# ----`) RStudio will enable code folding until the next instance of this.
### 9\.2\.5 Object names
> “When I use a word,” Humpty Dumpty said, in a rather scornful tone,
> “it means just what I choose it to mean \- neither more nor less.”
>
>
> * Lewis Carroll \- Through the Looking Glass, Chapter 6\.
It is important for objects and functions to be named consistently and sensibly. To take a silly example, imagine if all objects in your projects were called `x`, `xx`, `xxx` etc. The code would run fine. However, it would be hard for other people, and a future you, to figure out what was going on, especially when you got to the object `xxxxxxxxxx`!
For this reason, giving a clear and consistent name to your objects, especially if they are going to be used many times in your script, can boost project efficiency (if an object is only used once, its name is less important, a case where `x` could be acceptable). Following discussion in (Bååth [2012](#ref-ba_aa_ath_state_2012)) and elsewhere, suggest an `underscore_separated` style for function and object names[23](#fn23). Unless you are creating an S3 object, avoid using a `.` in the name (this will help avoid confusing Python programmers!). Names should be concise yet meaningful.
In functions the required arguments should always be first, followed by optional arguments. The special `...` argument should be last. If your argument has a boolean value, use `TRUE`/`FALSE` instead of `T`/`F` for clarity.
It’s tempting to use `T`/`F` as shortcuts. But it is easy to accidentally redefine these variables, e.g. `F = 10`. R raises an error if you try to redefine `TRUE`/`FALSE`.
While it’s possible to write arguments that depend on other arguments, try to avoid using this idiom
as it makes understanding the default behaviour harder to understand. Typically it’s easier to set an argument to have a default value of `NULL` and check its value using `is.null` than by using `missing`.
Where possible, avoid using names of existing functions.
### 9\.2\.6 Example package
The `lubridate` package is a good example of a package that has a consistent naming system, to make it easy for users to guess its features and behaviour. Dates are encoded in a variety of ways, but the `lubridate` package has a neat set of functions consisting of the three letters, **y**ear, **m**onth and **d**ay. For example,
```
library("lubridate")
ymd("2012-01-02")
dmy("02-01-2012")
mdy("01-02-2012")
```
### 9\.2\.7 Assignment
The two most common ways of assigning objects to values in R is with `<-` and `=`. In most (but not all) contexts, they can be used interchangeably. Regardless of which operator you prefer, consistency is key, particularly when working in a group. In this book we use the `=` operator for assignment, as it’s faster to type and more consistent with other languages.
The one place where a difference occurs is during function calls. Consider the following piece of code used for timing random number generation
```
system.time(expr1 <- rnorm(10e5))
system.time(expr2 = rnorm(10e5)) # error
```
The first lines will run correctly **and** create a variable called `expr1`. The second line will raise an error. When we use `=` in a function call, it changes from an *assignment* operator to an *argument passing* operator. For further information about assignment, see `?assignOps`.
### 9\.2\.8 Spacing
Consistent spacing is an easy way of making your code more readable. Even a simple command such as `x = x + 1` takes a bit more time to understand when the spacing is removed, i.e. `x=x+1`. You should add a space around the operators `+`, `-`, `\` and `*`. Include a space around the assignment operators, `<-` and `=`. Additionally, add a space around any comparison operators such as `==` and `<`. The latter rule helps avoid bugs
```
# Bug. x now equals 1
x[x<-1]
# Correct. Selecting values less than -1
x[x < -1]
```
The exceptions to the space rule are `:`, `::` and `:::`, as well as `$` and `@` symbols for selecting sub\-parts of objects. As with English, add a space after a comma, e.g.
```
z[z$colA > 1990, ]
```
### 9\.2\.9 Indentation
Use two spaces to indent code. Never mix tabs and spaces. RStudio can automatically convert the tab character to spaces (see `Tools -> Global options -> Code`).
### 9\.2\.10 Curly braces
Consider the following code:
```
# Bad style, fails
if(x < 5)
{
y}
else {
x}
```
Typing this straight into R will result in an error. An opening curly brace, `{` should not go on its own line and should always be followed by a line break. A closing curly brace should always go on its own line (unless it’s followed by an `else`, in which case the `else` should go on its own line). The code inside curly braces should be indented (and RStudio will enforce this rule), as shown below.
```
# Good style
if(x < 5){
x
} else {
y
}
```
#### Exercises
Look at the difference between your style and RStudio’s based on a representative R script that you have written (see Section [9\.2](collaboration.html#coding-style)). What are the similarities? What are the differences? Are you consistent? Write these down and think about how you can use the results to improve your coding style.
#### Exercises
Look at the difference between your style and RStudio’s based on a representative R script that you have written (see Section [9\.2](collaboration.html#coding-style)). What are the similarities? What are the differences? Are you consistent? Write these down and think about how you can use the results to improve your coding style.
9\.3 Version control
--------------------
When a project gets large, complicated or mission\-critical it is important to keep track of how it evolves. In the same way that Dropbox saves a ‘backup’ of your files, version control systems keep a backup of your code. The only difference is that version control systems back\-up your code *forever*.
The version control system we recommend is Git, a command\-line application created by Linus Torvalds, who also invented Linux.[24](#fn24) The easiest way to integrate your R projects with Git, if you’re not accustomed to using a shell (e.g. the Unix command line), is with RStudio’s Git tab, in the top right\-hand window (see figure [9\.1](collaboration.html#fig:9-1)). This shows a number of files have been modified (as illustrated with the blue M symbol) and that some are new (as illustrated with the yellow ? symbol). Checking the tick\-box will enable these files to be *committed*.
### 9\.3\.1 Commits
Commits are the basic units of version control. Keep your commits ‘atomic’: each one should only do one thing. Document your work with clear and concise commit messages, use the present tense, e.g.: ‘Add analysis functions’.
Committing code only updates the files on your ‘local’ branch. To update the files stored on a remote server (e.g. on GitHub), you must ‘push’ the commit. This can be done using `git push` from a shell or using the green up arrow in RStudio, illustrated in figure [9\.1](collaboration.html#fig:9-1). The blue down arrow will ‘pull’ the latest version of the repository from the remote.[25](#fn25)
Figure 9\.1: The Git tab in RStudio
### 9\.3\.2 Git integration in RStudio
How can you enable this functionality on your installation of RStudio? RStudio can be a GUI Git only if Git has been installed *and* RStudio can find it. You need a working installation of Git (e.g. installed through `apt-get install git` Ubuntu/Debian or via [GitHub Desktop](https://help.github.com/desktop/guides/getting-started/installing-github-desktop/) for Mac and Windows). RStudio can be linked to your Git installation via Tools \> Global Options, in the Git/SVN tab. This tab also provides a [link](https://support.rstudio.com/hc/en-us/articles/200532077) to a help page on RStudio/Git.
Once Git has been linked to your RStudio installation, it can be used to track changes in a new project by selecting `Create a git repository` when creating a new project. The tab illustrated in figure [9\.1](collaboration.html#fig:9-1) will appear, allowing functionality for interacting with Git via RStudio.
RStudio provides a useful GUI for navigating past commits. This allows you to see the entire history of your project. To navigate and view the details of past commits click on the Diff button in the Git pane, as illustrated in figure [9\.2](collaboration.html#fig:9-2).
Figure 9\.2: The Git history navigation interface
### 9\.3\.3 GitHub
GitHub is an online platform that makes sharing your work and collaborative code easy. There are alternatives such as [GitLab](https://about.gitlab.com/). The focus here is on GitHub as it’s by far the most popular among R developers. Also, through the command `devtools::install_github()`, preview versions of a package can be installed and updated in an instant. This makes ‘GitHub packages’ a great way to access the latest functionality. And GitHub makes it easy to get your work ‘out there’ to the world for efficiently collaborating with others, without the restraints placed on CRAN packages.
To install the GitHub version of the **benchmarkme** package, for example one would enter
```
devtools::install_github("csgillespie/benchmarkme")
```
Note that `csgillespie` is the GitHub user and `benchmarkme` is the package name. Replacing `csgillespie` with `robinlovelace` in the above code would install Robin’s version of the package. This is useful for fast collaboration with many people, but you must remember that GitHub packages will not update automatically with the command `update.packages` (see [2\.3\.5](set-up.html#updating-r-packages)).
Warning: although GitHub is fantastic for collaboration, it can end up creating more problems than it solves if your collaborators are not git\-literate. In one project, Robin eventually abandoned using GitHub to collaborate after his collaborator found it impossible to work with. More time was being spent debugging git/GitHub than actually working. Our advice therefore is to **never impose git** and always ensure that other lines of communication (e.g. phone calls, emails) are open as different people prefer different ways of communicating.
### 9\.3\.4 Branches, forks, pulls and clones
Git is a large program which takes a long time to learn in depth. However, getting to grips with the basics of some of its more advanced functions can make you a more efficient collaborator. Using and merging branches, for example, allows you to test new features in a self\-contained environment before it is used in production (e.g. when shifting to an updated version of a package which is not backwards compatible). Instead of bogging you down with a comprehensive discussion of what is possible, this section cuts to the most important features for collaboration: branches, forks, fetches and clones. For a more detailed description of Git’s powerful functionality, we recommend Jenny Byran’s [book](http://happygitwithr.com/), “Happy Git and GitHub for the useR”.
Branches are distinct versions of your repository. Git allows you to jump seamlessly between different versions of your entire project. To create a new branch called test, you need to enter the shell and use the Git command line:
```
git checkout -b test
```
This is the equivalent of entering two commands: `git branch test` to create the branch and then `git checkout test` to *checkout* that branch. ‘Checkout’ means switching to that branch: any changes will not affect your previous branch. In RStudio you can jump quickly between branches using the drop\-down menu in the top right of the Git pane. This is illustrated in figure [9\.1](collaboration.html#fig:9-1): see the `master` text followed by a down arrow. Clicking on this will allow you to select other branches.
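The same operations from the shell, plus commands for listing branches and returning to `master`, look like this:

```
git branch test      # create the branch without switching to it
git checkout test    # switch ('checkout') to the new branch
git branch           # list all branches; the current one is starred
git checkout master  # jump back to the master branch
```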
Forks are like branches, but they exist in other users’ GitHub accounts rather than inside your own repository. You can fork a repository on GitHub easily, as described on the site’s [help pages](https://help.github.com/articles/fork-a-repo/). If you want an exact copy of a repository (including the commit history) you can *clone* your fork to your computer using the command `git clone` or by using a Git GUI such as GitHub Desktop. Working from your own fork is preferable, from a collaboration perspective, to cloning the original repository directly, because any changes can easily be pushed back online: you cannot push to forks that you have not created. If you want your work to be incorporated into the original repository you can use a *pull request*. Note: if you don’t need the project’s entire commit history, you can simply download a zip file containing the latest version of the repository from GitHub (see the download option at the top right of any GitHub repository page).
A pull request (PR) is a mechanism on GitHub by which your code can be added to an existing project. One of the most useful features of a PR from a collaboration perspective is that it provides an opportunity for others to comment on your code, line by line, before it gets merged. This is all done online on GitHub, as discussed in [GitHub’s online help](https://help.github.com/articles/merging-a-pull-request/). Following feedback, you may want to refactor code, written by you or others.
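A minimal sketch of the whole fork\-to\-pull\-request workflow follows; the user name `your-username`, the repository `efficient` and the branch `new-feature` are placeholders:

```
# clone your own fork of the repository
git clone git@github.com:your-username/efficient
cd efficient
git checkout -b new-feature   # develop on a dedicated branch
# ...edit files, git add, git commit...
git push origin new-feature   # push the branch to your fork
# then open a pull request on GitHub to propose the changes
```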
9\.4 Code review
----------------
What is a code review?[26](#fn26) Simply put: when we have finished working on a piece of code, a colleague reviews our work and considers questions such as
* Is the code correct and properly documented?
* Could the code be improved?
* Does the code conform to existing style guidelines?
* Are there any automated tests? If so, are they sufficient?
A good code review shares knowledge and best practice.
A lightweight code review can take a variety of forms. For example, it could be as simple as emailing round some code for comments, or ‘over the shoulder’, where someone literally looks over your shoulder while coding. More formal techniques include pair programming, where two developers work side by side on the same project.
Regardless of the review method being employed, there are a number of points to remember. First, as with all forms of feedback, be constructive: rather than pointing out flaws, suggest improvements. Closely related: give praise when appropriate. Second, if you are reviewing a piece of code, set a time frame or a maximum number of lines of code to review. For example, you might spend one hour reviewing a piece of code, or review at most 400 lines. Third, a code review should be performed before the code is merged into a larger code base; fix mistakes as soon as possible.
Many R users don’t work in a team or group; instead they work by themselves, so practically there isn’t anyone nearby to review their code. However, there is still the option of an *unofficial* code review. For example, if you have hosted code on an online repository such as GitHub, users will naturally give feedback on your code (especially if you make it clear that you welcome feedback). Another good place is StackOverflow (covered in detail in chapter [10](learning.html#learning)). This site allows you to post answers to other users’ questions. When you post an answer, if your code is unclear, this will be flagged in comments below your answer.
10 Efficient learning
=====================
As with any vibrant open source software community, R is fast moving. This can be disorientating because it means that you can never ‘finish’ learning R. On the other hand, it makes R a fascinating subject: there is always more to learn. Even experienced R users keep finding new functionality that helps solve problems quicker and more elegantly. Therefore *learning how to learn* is one of the most important skills to have if you want to learn R *in depth*. We emphasise *depth* of learning because it is more efficient to learn something properly than to Google it repeatedly every time you forget how it works.
This chapter aims to equip you with concepts, guidance and tips that will accelerate your transition from an R *hacker* to an R *programmer*. This inevitably involves effective use of R’s help, reading R source code, and use of online material.
### Prerequisites
The only package used in this section is **swirl**:
```
library("swirl")
#>
#> | Hi! Type swirl() when you are ready to begin.
```
10\.1 Top 5 tips for efficient learning
---------------------------------------
1. Use R’s internal help, e.g. with `?`, `??`, `vignette()` and `apropos()`. Try **swirl**.
2. Read about the latest developments in established outlets such as the *Journal of Statistical Software*, the *R Journal*, R lists and the ‘blogosphere’.
3. If stuck, ask for help! A clear question posted in an appropriate place, using reproducible code, should get a quick and enlightening answer.
4. For more in\-depth learning, nothing can beat immersive R books and tutorials. Do some research and decide which resources you should complete.
5. One of the best ways to consolidate learning is to write it up and pass on the knowledge: telling the story of what you’ve learned will also help others.
10\.2 Using R’s internal help
-----------------------------
Sometimes the best place to look for help is within R itself. Using R’s help has 3 main advantages from an efficiency perspective: 1\) it’s faster to query R from inside your IDE than to switch context and search for help on a different platform (e.g. the internet which has countless distractions); 2\) it works offline; 3\) learning to read R’s documentation (and source code) is a powerful skill in itself that will improve your R programming.
The main disadvantage of R’s internal help is that it is terse and in some cases sparse. Do not expect to *always* be able to find the answer in R, so be prepared to look elsewhere for more detailed help and context. From a learning perspective, becoming acquainted with R’s documentation is often better than finding out the solution from a different source: it was written by developers, largely for developers. Therefore with R documentation you learn about a function *from the horse’s mouth*. R help also sometimes sheds light on a function’s history, e.g. through references to academic papers.
As you look to learn about a topic or function in R, it is likely that you will have a search strategy of your own, ranging from broad to narrow:
1. Searching R and installed packages for help on a specific *topic*.
2. Reading up on *package* vignettes.
3. Getting help on a specific *function*.
4. Looking into the *source code*.
In many cases you may already have researched stages 1 and 2\. Often you can stop at 3 and simply use the function without worrying exactly how it works. In every case, it is useful to be aware of this hierarchical approach to learning from R’s internal help, so you can start with the ‘Big Picture’ (and avoid going down a misguided route early on) and then quickly focus in on the functions that are most related to your task. To illustrate this approach in action, imagine that you are interested in a specific topic: optimisation. The remainder of this section will work through the stages 1 to 4 outlined above as if we wanted to find out more about this topic, with occasional diversions from this topic to see how specific help functions work in more detail. The final method of learning from R’s internal resources covered in this section is **swirl**, a package for interactive learning that we cover last.
### 10\.2\.1 Searching R for topics
A ‘wide boundary’ search for a topic in R will often begin with a search for instances of a keyword in the documentation and function names. Using the example of optimisation, one could start with a search for a text string related to the topic of interest:
```
# help.search("optim") # or, more concisely
??optim
```
Note that the `??` symbol is simply a useful shorthand version of the function `help.search()`.
It is sometimes useful to use the full function rather than the shorthand version, because that way you can specify a number of options.
To search for all help pages that mention the more specific terms “optimisation” or “optimization” (the US spelling), in the title or alias of the help pages, for example, the following command would be used:
```
help.search(pattern = "optimisation|optimization", fields = c("title", "concept"))
```
This will return a shorter (and potentially more focussed) list of help pages than the wide\-ranging `??optim` call.
To make the search even more specific, we can use the `package` argument to constrain the search to a single package.
This can be very useful when you know that a function exists in a specific package, but you cannot remember what it is called:
```
help.search(pattern = "optimisation|optimization", fields = c("title", "concept"), package = "stats")
```
Another function for searching R is `apropos()`. It prints to the console any R objects (including ‘hidden’ functions, those beginning with `.` and datasets) whose name matches a given text string. Because it does not search R’s documentation, it tends to return fewer results than `help.search()`. Its use and typical outputs can be seen from a couple of examples below:
```
apropos("optim")
#> [1] "constrOptim" "is_blas_optimize" "optim" "optimHess"
#> [5] "optimise" "optimize"
apropos("lm")[1:6] # show only first six results
#> [1] ".colMeans" ".lm.fit" "bm_matrix_cal_lm" "colMeans"
#> [5] "colMeans" "confint.lm"
```
To search *all R packages*, including those you have not installed locally, for a specific topic there are a number of options. For obvious reasons, this depends on having internet access. The most rudimentary way to see what packages are available from CRAN, if you are using RStudio, is to use its autocompletion functionality for package names. To take an example, if you are looking for a package for geospatial data analysis, you could do worse than to enter the text string `geo` as an argument into the package installation function (for example `install.packages(geo)`) and hit `Tab` when the cursor is between the `o` and the `)` in the example. The resulting options are shown in the figure below: selecting one from the drop\-down menu will result in it being completed with surrounding quote marks, as necessary.
Figure 10\.1: Package name autocompletion in action in RStudio for packages beginning with ‘geo’.
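A programmatic alternative is sketched below (this assumes internet access; `available.packages()` is part of base R’s **utils** package):

```
# query the names of all packages currently on CRAN
pkgs = available.packages()
# show the package names beginning with "geo"
grep("^geo", rownames(pkgs), value = TRUE, ignore.case = TRUE)
```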
### 10\.2\.2 Finding and using vignettes
Some packages contain vignettes. These are pieces of [‘long\-form’ documentation](http://r-pkgs.had.co.nz/vignettes.html) that allow package authors to go into detail explaining how the package works (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)). In general they are high quality. Because they can be used to illustrate real world use cases, vignettes can be the best way to understand functions in the context of broader explanations and longer examples than are provided in function help pages. Although many packages lack vignettes, they deserve a sub\-section of their own because they can boost the efficiency with which package functions are used, in an integrated workflow.
If you are frustrated because a certain package lacks a vignette, you can create one. This can be a great way of learning about and consolidating your knowledge of a package. To create a vignette, first download the source code of a package and then use `devtools::use_vignette()`. To add a vignette to the **efficient** package used in this book, for example, you could clone the repo, e.g. using the command `git clone git@github.com:csgillespie/efficient`. Once you have opened the repo as a project, e.g. in RStudio, you could create a vignette called “efficient\-learning” with the following command: `use_vignette("efficient-learning")`.
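As a minimal sketch of the steps just described (run from within the cloned package’s project directory; note that recent versions of **devtools** delegate this functionality to `usethis::use_vignette()`):

```
# creates vignettes/efficient-learning.Rmd and updates DESCRIPTION
devtools::use_vignette("efficient-learning")
```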
To browse any vignettes associated with a particular package, we can use the handy function `browseVignettes()`:
```
browseVignettes(package = "benchmarkme")
```
This is roughly equivalent to `vignette(package = "benchmarkme")` but opens a new page in a browser and lets you navigate all the vignettes in that particular package. For an overview of all vignettes available from R packages installed on your computer, try browsing all available vignettes with `browseVignettes()`. You may be surprised at how many hidden gems there are in there!
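To stay in the console instead, `vignette()` can both list and open vignettes (a small sketch; the vignette name must exist in the package in question):

```
vignette(package = "benchmarkme")            # list a package's vignettes
vignette("introduction", package = "dplyr")  # open a vignette by name
```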
How best to *use* vignettes depends on the vignette in question and your aims. In general you should expect to spend longer reading vignettes than other types of R documentation. The *Introduction to dplyr* vignette (opened with `vignette("introduction", package = "dplyr")`), for example, contains almost 4,000 words of prose, example code and outputs, illustrating how its functions work. We recommend working through the examples and typing the example code to ‘learn by doing’.
Another way to learn from package vignettes is to view their source code. You can find where vignette source code lives by looking in the `vignettes/` folder of the package’s source code: **dplyr**’s vignettes, for example, can be viewed (and edited) online at [github.com/hadley/dplyr/tree/master/vignettes](https://github.com/hadley/dplyr/tree/master/vignettes). A quick way to view a vignette’s R code is with the `edit()` function:
```
v = vignette("introduction", package = "dplyr")
edit(v)
```
### 10\.2\.3 Getting help on functions
All functions have help pages. These contain, at a minimum, a list of the input arguments and the nature of the output that can be expected. Once a function has been identified, e.g. using one of the methods outlined in Section [10\.2\.1](learning.html#searching-r-for-topics), its *help page* can be displayed by prefixing the function name with `?`. Continuing with the previous example, the help page associated with the command `optim()` (for general purpose optimisation) can be invoked as follows:
```
# help("optim") # or, more concisely:
?optim
```
In general, help pages describe *what* functions do, not *how* they work. This is one of the reasons that function help pages are thought (by some) to be difficult to understand. In practice, this means that the help page does not describe the underlying mathematics or algorithm in detail; its aim is to describe the interface.
A help page is divided into a number of sections.
The help for `optim()` is typical, in that it has a title (General\-purpose Optimization) followed by short Description, Usage and Arguments sections.
The Description is usually just a sentence or two explaining what the function does. Usage shows the arguments that the function needs to work, and Arguments describes what kind of objects the function expects. Longer sections typically include Details and Examples, which give context and (usually reproducible) examples of how the function can be used, respectively. The typically short Value, References and See Also sections facilitate efficient learning by explaining what the output means, where you can find academic literature on the subject, and which functions are related.
`optim()` is a mature and heavily used function so it has a long help page: you’ll probably be thankful to learn that not all help pages are this long!
With so much potentially overwhelming information in a single help page, the placement of the short, dense sections at the beginning is efficient because it means you can understand the fundamentals of a function in few words.
Learning how to read and quickly interpret such help pages will greatly help your ability to learn R. Take some time to study the help for `optim()` in detail.
It is worth discussing the contents of the Usage section in particular, because this contains information that may not be immediately obvious:
```
optim(par, fn, gr = NULL, ...,
method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"),
lower = -Inf, upper = Inf, control = list(), hessian = FALSE)
```
This contains two pieces of critical information: 1\) the *essential* arguments which must be provided for the function to work (`par` and `fn` in this case, as `gr` has a default value) before the `...` symbol; and 2\) *optional* arguments that control how the function works (`method`, `lower`, and `hessian` in this case). `...` are optional arguments whose values depend on the other arguments (which will be passed to the function represented by `fn` in this case). Let’s see how this works in practice by trying to run `optim()` to find the minimum value of the function \\(y \= x^4 \- x^2\\):
```
fn = function(x) {
x^4 - x^2
}
optim(par = 0, fn = fn)
#> Warning in optim(par = 0, fn = fn): one-dimensional optimization by Nelder-Mead is unreliable:
#> use "Brent" or optimize() directly
#> $par
#> [1] 0.707
#>
#> $value
#> [1] -0.25
#>
#> $counts
#> function gradient
#> 58 NA
#>
#> $convergence
#> [1] 0
#>
#> $message
#> NULL
```
The results show that the minimum value of `fn(x)` is found when `x = 0.707..` (\\(\\frac{1}{\\sqrt{2}}\\); by symmetry, \\(\-\\frac{1}{\\sqrt{2}}\\) is an equally good minimum), with a minimum value of \\(\-0\.25\\). The `$counts` element shows that `optim()` evaluated the function \\(58\\) times before converging on this value. Each of these output values is described in the Value section of the help pages.
From the help pages, we could guess that providing the function call without specifying `par` (i.e. `optim(fn = fn)`) would fail, which indeed it does.
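Following the warning’s advice, a quick sketch using the recommended one\-dimensional optimisers is shown below (the search interval is restricted to positive values to target one of the two symmetric minima):

```
# the interval targets the positive minimum of x^4 - x^2
optimize(fn, interval = c(0, 2))
# equivalently, optim() with the Brent method and finite bounds
optim(par = 1, fn = fn, method = "Brent", lower = 0, upper = 2)
# both should report a minimum near x = 0.707 with value -0.25
```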
The most *helpful* section is often the Examples. These lie at the bottom of the help page and show precisely how the function works. You can either copy and paste the code, or actually run the example code using the `example()` function (it is well worth running these examples due to the graphics produced):
```
example(optim)
```
When a package is added to CRAN, the example part of the documentation is run on all major platforms. This helps ensure that a package works on multiple systems.
Another useful section in the help file is See Also. In the `optim()` help page, it links to `optimize()`, which may be more appropriate for this use case.
### 10\.2\.4 Reading R source code
R is open source. This means that we can view the underlying source code and examine any function. Of course the code is complex, and diving straight into the source code won’t help that much. However, watching the GitHub R source code [mirror](https://github.com/wch/r-source/) will allow you to monitor small changes that occur. This gives a nice entry point into a complex code base. Likewise, examining the source of small functions, such as `NCOL`, is informative, e.g. via `getFunction("NCOL")`.
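A few ways to inspect a function from the console are sketched below (the exact output will vary between R versions):

```
getFunction("NCOL")  # print the full definition of NCOL
body(NCOL)           # just the body of the function
args(NCOL)           # just the argument list
```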
Subscribing to the R NEWS [blog](https://developer.r-project.org/blosxom.cgi/R-devel/NEWS/) is an easy way of keeping track of future changes.
Many R packages are developed in the open on GitHub or R\-Forge. Select a few well known packages and examine their source. A good package to start with is **[drat](https://github.com/eddelbuettel/drat)**. This is a relatively simple package developed by Dirk Eddelbuettel (author of Rcpp) that only contains a few functions. It gives you an excellent pointer into software development by one of the key R package writers.
A shortcut for browsing R’s source code is provided by the RStudio IDE: clicking on a function name and then hitting `F2` will open its source code in the file editor. This works both for functions that exist in R and its packages and for functions that you created yourself in another R script (so long as it is within your project directory).
Although reading source code can be interesting in itself, it is probably best done in the context of a specific question, e.g. “how can I use a function name as an argument in my own function?” (looking at the source code of `apply()` may help here).
### 10\.2\.5 Swirl
**swirl** is an interactive teaching platform for R. It offers a number of extensions and, for the pioneering, the ability for others to create custom extensions. The learning curve and method will not work for everyone, but this package is worth flagging as a potent self teaching resource. In some ways **swirl** can be seen as the ultimate internal R help as it allows dedicated learning sessions, based on multiple choice questions, all within a usual R session. To enter the **swirl** world, just enter the following. The resultant instructions will explain the rest:
```
library("swirl")
swirl()
```
10\.3 Online resources
----------------------
The R community has a strong online presence, providing many resources for learning. Over time, there has fortunately been a tendency for R resources to become more user friendly and up\-to\-date. Many resources that have been on CRAN for years are now dated, so it’s more efficient to navigate directly to the most up\-to\-date and easy\-to\-use resources.
‘Cheat sheets’ are short documents summarising how to do certain things. [RStudio](http://www.rstudio.com/resources/cheatsheets/), for example, provides excellent cheat sheets on [**dplyr**](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf), [**rmarkdown**](https://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf) and the [RStudio IDE](https://www.rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf) itself.
The R\-project website contains six detailed [official manuals](https://cran.r-project.org/manuals.html), plus a giant pdf file containing documentation for all recommended packages. These include [An Introduction to R](https://cran.r-project.org/doc/manuals/r-release/R-intro.html), [The R language definition](https://cran.r-project.org/doc/manuals/r-release/R-lang.html) and [R Installation and Administration](https://cran.r-project.org/doc/manuals/r-release/R-admin.html), all of which are recommended for people wanting to develop their general R skills. If you are developing a package and want to submit it to CRAN, the [Writing R Extensions](https://cran.r-project.org/doc/manuals/r-release/R-exts.html) manual is recommended reading, although it has to some extent been superseded by H. Wickham ([2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)), the source code of which is [available online](https://github.com/hadley/r-pkgs). While these manuals are long, they contain important information written by experienced R programmers.
For more domain\-specific and up\-to\-date information on developments in R, we recommend checking out academic journals. The [R\-journal](https://journal.r-project.org/) regularly publishes articles describing new R packages, as well as general programming hints. Similarly, the articles in the [Journal of Statistical Software](https://www.jstatsoft.org/) have a strong R bias. Publications in these journals are generally of very high quality and have been rigorously peer reviewed. However, they may be rather technical for R novices.
The wider community provides a much larger body of information, of more variable quality, than the official R resources. The [Contributed Documentation](https://cran.r-project.org/other-docs.html) page on R’s home page contains dozens of tutorials and other resources on a wide range of topics. Some of these are excellent, although many are not kept up\-to\-date. An excellent resource for browsing R help pages online is provided by [rdocumentation.org](http://www.rdocumentation.org).
Lower grade but more frequently released information can be found on the ‘blogosphere’. Central to this is [R\-bloggers](http://www.r-bloggers.com/), a blog aggregator of content contributed by bloggers who write about R (in English). It is a great way to get exposed to new and different packages. Similarly monitoring the *[\#rstats](https://twitter.com/search?q=%23rstats)* twitter tag keeps you up\-to\-date with the latest news.
There are also mailing lists, Google groups and the Stack Exchange Q \& A sites. Before requesting help, read a few other questions to learn the format of the site. Make sure you search previous questions so you are not duplicating work. Perhaps the most important point is that people aren’t under **any** obligation to answer your question. One of the fantastic things about the open\-source community is that you can ask questions and one of the core developers may answer your question for free; but remember, everyone is busy!
### 10\.3\.1 Stackoverflow
The number one place on the internet for getting help on programming is [Stackoverflow](http://www.stackoverflow.com). This website provides a platform for asking and answering questions. Through site membership, questions and answers are voted up or down. Users of Stackoverflow earn reputation points when their question or answer is up\-voted. Anyone (with enough reputation) can edit a question or answer. This helps the content remain relevant.
Questions are tagged. The R questions can be found under the [R tag](http://stackoverflow.com/questions/tagged/r). The [R page](https://stackoverflow.com/tags/r/info) contains links to Official documentation, free resources, and various other links. Members of the Stackoverflow R community have tagged, using `r-faq`, a few questions that often crop up.
### 10\.3\.2 Mailing lists and groups
There are many mailing lists and Google groups focused on R and particular packages. The main list for getting help is `R-help`. This is a high volume mailing list, with around a dozen messages per day. A more technical mailing list is `R-devel`. This list is intended for questions and discussion about code development in R. The discussion on this list is very technical, but it’s a good place to be introduced to new ideas \- however, it’s not the place to ask about these ideas! There are many other special interest mailing [lists](https://www.r-project.org/mail.html) covering topics ranging from high performance computing to ecology. Many popular packages also have their own mailing list or Google group, e.g. **ggplot2** and **shiny**. The key piece of advice is: before mailing a list, read the relevant mailing archive and check that your message is appropriate.
10\.4 Asking a question
-----------------------
A great way to get specific help on a difficult topic is to ask for help.
However, asking a good question is not easy. Three common mistakes, and ways to avoid them, are outlined below:
1. Asking a question that has already been asked: ensure you’ve properly searched for the answer before posting.
2. The answer to the question can be found in R’s help: make sure you’ve properly read the relevant help pages before asking.
3. The question does not contain a reproducible example: create a simple version of your data, show the code you’ve tried, and display the result you are hoping for.
Your question should contain just enough information that your problem is clear and reproducible, while at the same time avoiding unnecessary details. Fortunately, there is a StackOverflow question \- [How to make a great R reproducible example?](http://stackoverflow.com/q/5963269/203420) \- that provides excellent guidance.
Additional guides that explain how to create good programming questions are provided by [StackOverflow](https://stackoverflow.com/help/how-to-ask) and the [R mailing list posting guide](https://www.r-project.org/posting-guide.html).
### Minimal data set
What is the smallest data set you can construct that will reproduce your issue? Your actual data set may contain \\(10^5\\) rows and \\(10^4\\) columns, but to get your idea across you might only need \\(4\\) rows and \\(3\\) columns. Making small example data sets is easy. For example, to create a data frame with two numeric columns and a column of characters, just use:
```
set.seed(1)
example_df = data.frame(x = rnorm(4), y = rnorm(4), z = sample(LETTERS, 4))
```
Note that the call to `set.seed()` ensures anyone who runs the code will get the same random number stream. Alternatively, you can use one of the many data sets that come with R \- `library(help = "datasets")`.
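As a tiny sketch of that second option, a built\-in data set can stand in for your own data:

```
library(help = "datasets")  # list the data sets that ship with R
head(iris, 4)               # a small, universally available example
```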
If creating an example data set isn’t possible, then use `dput()` on your actual data set. This will create an ASCII text representation of the object that will enable anyone to recreate the object:
```
dput(example_df)
#> structure(list(
#> x = c(-0.626453810742332, 0.183643324222082, -0.835628612410047, 1.59528080213779),
#> y = c(0.329507771815361, -0.820468384118015, 0.487429052428485, 0.738324705129217),
#> z = structure(c(3L, 4L, 1L, 2L), .Label = c("J", "R", "S", "Y"), class = "factor")),
#> .Names = c("x", "y", "z"), row.names = c(NA, -4L), class = "data.frame")
```
### Minimal example
What you should not do is simply copy and paste your entire function into your question. It’s unlikely that your entire function doesn’t work, so just simplify it to the bare minimum. The aim is to target your actual issue. Avoid copying and pasting large blocks of code; remove superfluous lines that are not part of the problem. Before asking your question, can you run your code in a clean R environment and reproduce your error?
10\.5 Learning in depth
-----------------------
In the age of the internet and social media, many people feel lucky if they have time out to go for a walk, let alone sit down to read a book. However it is undeniable that learning R *in depth* is a time consuming activity. Reading a book or a large tutorial (and completing the practical examples contained within) may not be the most efficient way to solve a particular problem in the short term, but it can be one of the best ways to learn R programming properly, especially in the long\-run.
In depth learning differs from shallow, incremental learning because rather than discovering how a specific function works, you find out how systems of functions work together. To take a metaphor from civil engineering, in depth learning is about building strong foundations, on which a wide range of buildings can be constructed. In depth learning can be highly efficient in the long run because it will pay back over many years, regardless of the domain\-specific problem you want to use R to tackle. Shallow learning, to continue the metaphor, is more like erecting many temporary structures: they can solve a specific problem in the short term but they will not be durable. Flimsy dwellings can be swept away. Shallow memories can be forgotten.
Having established that time spent ‘deep learning’ can, counter\-intuitively, be efficient, it is worth thinking about how to deep learn. This varies from person to person. It does not involve passively absorbing sacred information transmitted year after year by the ‘R gods’. It is an active, participatory process. To ensure that memories are rapidly actionable you must ‘learn by doing’. Learning from a cohesive, systematic and relatively comprehensive resource will help you to see the many interconnections between the different elements of R programming and how they can be combined for efficient work.
There are a number of such resources, including this book. Although the understandable tendency will be to use it incrementally, dipping in and out of different sections when different problems arise, we also recommend reading it systematically to see how the different elements of efficiency fit together. It is likely that as you work progressively through this book, in parallel with solving real world problems, you will realise that the solution is not to have the ‘right’ resource at hand but to be able to use the tools provided by R efficiently. Once you hit this level of proficiency, you should have the confidence to address most problems encountered from first principles. Over time, your ‘first port of call’ should move away from Google and even R’s internal help to simply giving it a try: informed trial and error, intelligent experimentation, can be the best approach to both learning and solving problems quickly, once you are equipped with the tools to do so. That’s why this is the last section in the book.
If you have already worked through all the examples in this book, or if you want to learn areas not covered in it, there are many excellent resources for extending and deepening your knowledge of R programming for fast and effective work, and to do new things with it. Because R is a large and ever\-evolving language, there is no definitive list of resources for taking your R skills to new heights. However, the list below, in rough ascending order of difficulty and depth, should provide plenty of material and motivation for in depth learning of R.
1. Free webinars and online courses provided by [RStudio](http://www.rstudio.com/resources/webinars/) and [DataCamp](https://www.datacamp.com/community/open-courses). Both organisations are well regarded and keep their content up\-to\-date, but there are many other sources of online courses. We recommend pushing your abilities, rather than going over the same material covered in this book.
2. *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)), a free book introducing many concepts and ‘tidy’ packages for working with data (a free online version is available from [r4ds.had.co.nz/](http://r4ds.had.co.nz/)).
3. *R Programming for Data Science* (Peng [2014](#ref-peng_r_2014)), which provides in depth coverage of analysis and visualisation of datasets.
4. *Advanced R Programming* (H. Wickham [2014](#ref-Wickham2014)[a](#ref-Wickham2014)), an advanced book which looks at the internals of how R works (free from [adv\-r.had.co.nz](http://adv-r.had.co.nz/)).
10\.6 Spread the knowledge
--------------------------
The final thing to say on the topic of efficient learning relates to the [old](https://en.wikipedia.org/wiki/Docendo_discimus) (\~2000 years old!) saying *docendo discimus*:
> **by teaching we learn**.
This means that passing on information is one of the best ways to consolidate your learning. It was largely by helping others to learn R that we became proficient R users.
Demand for R skills is growing, so there are many opportunities to teach R. Whether it’s helping your colleague to use `apply()`, or writing a blog post on solving certain problems in R, teaching others R can be a rewarding experience. Furthermore, spreading the knowledge can be efficient: it will improve your own understanding of the language and benefit the entire community, providing a positive feedback to the movement towards open source software in data\-driven computing.
Assuming you have completed reading the book, the only remaining thing to say is well done: you are now an efficient R programmer. We hope you direct your new found skills towards the greater good and pass on the wisdom to others along the way.
### Prerequisties
The only package used in this section is **swirl**
```
library("swirl")
#>
#> | Hi! Type swirl() when you are ready to begin.
```
10\.1 Top 5 tips for efficient learning
---------------------------------------
1. Use R’s internal help, e.g. with `?`, `??`, `vignette()` and `apropos()`. Try **swirl**.
2. Read about the latest developments in established outlets such as the *Journal for Statistical Software*, the *R Journal*, R lists and the ‘blogosphere’.
3. If stuck, ask for help! A clear question posted in an appropriate place, using reproducible code, should get a quick and enlightening answer.
4. For more in\-depth learning, nothing can beat immersive R books and tutorials. Do some research and decide which resources you should complete.
5. One of the best ways to consolidate learning is to write\-it\-up and pass on the knowledge: telling the story of what you’ve learned with also help others.
10\.2 Using R’s internal help
-----------------------------
Sometimes the best place to look for help is within R itself. Using R’s help has 3 main advantages from an efficiency perspective: 1\) it’s faster to query R from inside your IDE than to switch context and search for help on a different platform (e.g. the internet which has countless distractions); 2\) it works offline; 3\) learning to read R’s documentation (and source code) is a powerful skill in itself that will improve your R programming.
The main disadvantage of R’s internal help is that it is terse and in some cases sparse. Do not expect to *always* be able to find the answer in R so be prepared to look elsewhere for more detailed help and context. From a learning perspective becoming acquainted with R’s documentation is often better than finding out the solution from a different source: it was written by developers, largely for developers. Therefore with R documentation you learn about a function *from the horses mouth*. R help also sometimes sheds light on a function’s history, e.g. through references to academic papers.
As you look to learn about a topic or function in R, it is likely that you will have a search strategy of your own, ranging from broad to narrow:
1. Searching R and installed packages for help on a specific *topic*.
2. Reading\-up on *packages* vignettes.
3. Getting help on a specific *function*.
4. Looking into the *source code*.
In many cases you may already have researched stages 1 and 2\. Often you can stop at 3 and simply use the function without worrying exactly how it works. In every case, it is useful to be aware of this hierarchical approach to learning from R’s internal help, so you can start with the ‘Big Picture’ (and avoid going down a misguided route early on) and then quickly focus in on the functions that are most related to your task. To illustrate this approach in action, imagine that you are interested in a specific topic: optimisation. The remainder of this section will work through the stages 1 to 4 outlined above as if we wanted to find out more about this topic, with occasional diversions from this topic to see how specific help functions work in more detail. The final method of learning from R’s internal resources covered in this section is **swirl**, a package for interactive learning that we cover last.
### 10\.2\.1 Searching R for topics
A ‘wide boundary’ search for a topic in R will often begin with a search for instances of a keyword in the documentation and function names. Using the example of optimisation, one could start with a search for a text string related to the topic of interest:
```
# help.search("optim") # or, more concisely
??optim
```
Note that the `??` symbol is simply a useful shorthand version of the function `help.search()`.
It is sometimes useful to use the full function rather than the shorthand version, because that way you can specify a number of options.
To search for all help pages that mention the more specific terms “optimisation” or “optimization” (the US spelling), in the title or alias of the help pages, for example, the following command would be used:
```
help.search(pattern = "optimisation|optimization", fields = c("title", "concept"))
```
This will return a short (and potentially more efficiently focussed) list of help pages than the wide\-ranging `??optim` call.
To make the search even more specific, we can use the `package` argument to constrain the search to a single package.
This can be very useful when you know that a function exists in a specific package, but you cannot remember what it is called:
```
help.search(pattern = "optimisation|optimization", fields = c("title", "concept"), package = "stats")
```
Another function for searching R is `apropos()`. It prints to the console any R objects (including ‘hidden’ functions, those beginning with `.` and datasets) whose name matches a given text string. Because it does not search R’s documentation, it tends to return fewer results than `help.search()`. Its use and typical outputs can be seen from a couple of examples below:
```
apropos("optim")
#> [1] "constrOptim" "is_blas_optimize" "optim" "optimHess"
#> [5] "optimise" "optimize"
apropos("lm")[1:6] # show only first six results
#> [1] ".colMeans" ".lm.fit" "bm_matrix_cal_lm" "colMeans"
#> [5] "colMeans" "confint.lm"
```
To search *all R packages*, including those you have not installed locally, for a specific topic there are a number of options. For obvious reasons, this depends on having internet access. The most rudimentary way to see what packages are available from CRAN, if you are using RStudio, is to use its autocompletion functionality for package names. To take an example, if you are looking for a package for geospatial data analysis, you could do worse than to enter the text string `geo` as an argument into the package installation function (for example `install.packages(geo)`) and hitting `Tab` when the cursor is between the `o` and the `)` in the example. The resulting options are shown in the figure below: selecting one from the dropdown menu will result in it being completed with surrounding quote marks, as necessary.
Figure 10\.1: Package name autocompletion in action in RStudio for packages beginning with ‘geo’.
### 10\.2\.2 Finding and using vignettes
Some packages contain vignettes. These are pieces of [‘long\-form’ documentation](http://r-pkgs.had.co.nz/vignettes.html) that allow package authors to go into detail explaining how the package works (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)). In general they are high quality. Because they can be used to illustrate real world use cases, vignettes can be the best way to understand functions in the context of broader explanations and longer examples than are provided in function help pages. Although many packages lack vignettes, they deserve a sub\-section of their own because they can boost the efficiency with which package functions are used, in an integrated workflow.
If you are frustrated because a certain package lacks a vignette, you can create one. This can be a great way of learning about and consolidating your knowledge of a package. To create a vignette, first download the source code of a package and then use `devtools::use_vignette()`. To add a vignette to the **efficient** package used in this book, for example, you could clone the repo, e.g. using the command `git clone [git@github.com](mailto:git@github.com):csgillespie/efficient`. Once you have opened the repo as a project, e.g. in RStudio, you could create a vignette called “efficient\-learning” with the following command: `use_vignette(“efficient-learning”)`.
To browse any vignettes associated with a particular package, we can use the handy function `browseVignettes()`:
```
browseVignettes(package = "benchmarkme")
```
This is roughly equivalent to `vignette(package = "benchmarkme")` but opens a new page in a browser and lets you navigate all the vignettes in that particular package. For an overview of all vignettes available from R packages installed on your computer, try browsing all available vignettes with `browseVignettes()`. You may be surprised at how many hidden gems there are in there!
How best to *use* vignettes depends on the vignette in question and your aims. In general you should expect to spend longer reading vignette’s than other types of R documentation. The *Introduction to dplyr* vignette (opened with `vignette("introduction", package = "dplyr")`), for example, contains almost 4,000 words of prose and example code and outputs, illustrating how its functions work. We recommend working through the examples and typing the example code to ‘learn by doing’.
Another way to learn from package vignettes is to view their source code. You can find where vignette source code lives by looking in the `vignette/` folder of the package’s source code: **dplyr**’s vignettes, for example, can be viewed (and edited) online at [github.com/hadley/dplyr/tree/master/vignettes](https://github.com/hadley/dplyr/tree/master/vignettes). A quick way to view a vignette’s R code is with the `edit()` function:
```
v = vignette("introduction", package = "dplyr")
edit(v)
```
### 10\.2\.3 Getting help on functions
All functions have help pages. These contain, at a minimum, a list of the input arguments and the nature of the output that can be expected. Once a function has been identified, e.g. using one of the methods outlined in Section [10\.2\.1](learning.html#searching-r-for-topics), its *help page* can be displayed by prefixing the function name with `?`. Continuing with the previous example, the help page associated with the command `optim()` (for general purpose optimisation) can be invoked as follows:
```
# help("optim") # or, more concisely:
?optim
```
In general, help pages describe *what* functions do, not *how* they work. This is one of the reasons
that function help pages are thought (by some) to be difficult to understand. In practice,
this means that the help page does not describe the underlying mathematics or algorithm in
detail, it’s aim is to describe the interface.
A help page is divided into a number of sections.
The help for `optim()` is typical, in that it has a title (General\-purpose Optimization) followed by short Description, Usage and Arguments sections.
The Description is usually just a sentence or two for explaining what it does. Usage shows the arguments that the function needs to work. And Arguments describes what kind of objects the function expects. Longer sections typically include Details and Examples, which provide some context and provide (usually reproducible) examples of how the function can be used, respectively. The typically short Value, References and See Also sections facilitate efficient learning by explaining what the output means, where you can find academic literature on the subject, and which functions are related.
`optim()` is a mature and heavily used function so it has a long help page: you’ll probably be thankful to learn that not all help pages are this long!
With so much potentially overwhelming information in a single help page, the placement of the short, dense sections at the beginning is efficient because it means you can understand the fundamentals of a function in few words.
### 10\.2\.1 Searching R for topics
A ‘wide boundary’ search for a topic in R will often begin with a search for instances of a keyword in the documentation and function names. Using the example of optimisation, one could start with a search for a text string related to the topic of interest:
```
# help.search("optim") # or, more concisely
??optim
```
Note that the `??` symbol is simply a useful shorthand version of the function `help.search()`.
It is sometimes useful to use the full function rather than the shorthand version, because that way you can specify a number of options.
To search for all help pages that mention the more specific terms “optimisation” or “optimization” (the US spelling), in the title or alias of the help pages, for example, the following command would be used:
```
help.search(pattern = "optimisation|optimization", fields = c("title", "concept"))
```
This will return a shorter (and potentially more focussed) list of help pages than the wide\-ranging `??optim` call.
To make the search even more specific, we can use the `package` argument to constrain the search to a single package.
This can be very useful when you know that a function exists in a specific package, but you cannot remember what it is called:
```
help.search(pattern = "optimisation|optimization", fields = c("title", "concept"), package = "stats")
```
Another function for searching R is `apropos()`. It prints to the console any R objects (including ‘hidden’ functions beginning with `.`, and datasets) whose names match a given text string. Because it does not search R’s documentation, it tends to return fewer results than `help.search()`. Its use and typical outputs can be seen from a couple of examples below:
```
apropos("optim")
#> [1] "constrOptim" "is_blas_optimize" "optim" "optimHess"
#> [5] "optimise" "optimize"
apropos("lm")[1:6] # show only first six results
#> [1] ".colMeans" ".lm.fit" "bm_matrix_cal_lm" "colMeans"
#> [5] "colMeans" "confint.lm"
```
To search *all R packages*, including those you have not installed locally, for a specific topic there are a number of options. For obvious reasons, this depends on having internet access. The most rudimentary way to see what packages are available from CRAN, if you are using RStudio, is to use its autocompletion functionality for package names. If you are looking for a package for geospatial data analysis, for example, you could enter the text string `geo` as an argument into the package installation function (for example `install.packages(geo)`) and hit `Tab` when the cursor is between the `o` and the `)`. The resulting options are shown in the figure below: selecting one from the dropdown menu will complete it, with surrounding quote marks added as necessary.
Figure 10\.1: Package name autocompletion in action in RStudio for packages beginning with ‘geo’.
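If you prefer a non\-interactive route, a minimal sketch using only base R’s `available.packages()` achieves a similar end from the console (it requires internet access, and the index download can take a few seconds):
```
# Download the CRAN package index and search it for names beginning with "geo"
pkgs <- rownames(available.packages())
head(grep("^geo", pkgs, value = TRUE))
```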
### 10\.2\.2 Finding and using vignettes
Some packages contain vignettes. These are pieces of [‘long\-form’ documentation](http://r-pkgs.had.co.nz/vignettes.html) that allow package authors to go into detail explaining how the package works (H. Wickham [2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)). In general they are high quality. Because they can be used to illustrate real world use cases, vignettes can be the best way to understand functions in the context of broader explanations and longer examples than are provided in function help pages. Although many packages lack them, vignettes deserve a sub\-section of their own because they can boost the efficiency with which package functions are used in an integrated workflow.
If you are frustrated because a certain package lacks a vignette, you can create one. This can be a great way of learning about and consolidating your knowledge of a package. To create a vignette, first download the source code of a package and then use `devtools::use_vignette()`. To add a vignette to the **efficient** package used in this book, for example, you could clone the repo with `git clone git@github.com:csgillespie/efficient`. Once you have opened the repo as a project, e.g. in RStudio, you could create a vignette called “efficient\-learning” with the command `use_vignette("efficient-learning")`.
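Putting these steps together, a sketch of the workflow might look like the following (it assumes **devtools** is installed and that the root of the cloned package is your working directory):
```
# In the shell:
#   git clone git@github.com:csgillespie/efficient
# Then, from R, with the package root as the working directory:
devtools::use_vignette("efficient-learning")
```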
To browse any vignettes associated with a particular package, we can use the handy function `browseVignettes()`:
```
browseVignettes(package = "benchmarkme")
```
This is roughly equivalent to `vignette(package = "benchmarkme")` but opens a new page in a browser and lets you navigate all the vignettes in that particular package. For an overview of all vignettes available from R packages installed on your computer, try browsing all available vignettes with `browseVignettes()`. You may be surprised at how many hidden gems there are in there!
How best to *use* vignettes depends on the vignette in question and your aims. In general you should expect to spend longer reading vignettes than other types of R documentation. The *Introduction to dplyr* vignette (opened with `vignette("introduction", package = "dplyr")`), for example, contains almost 4,000 words of prose, example code and outputs, illustrating how its functions work. We recommend working through the examples and typing the example code to ‘learn by doing’.
Another way to learn from package vignettes is to view their source code. You can find where vignette source code lives by looking in the `vignette/` folder of the package’s source code: **dplyr**’s vignettes, for example, can be viewed (and edited) online at [github.com/hadley/dplyr/tree/master/vignettes](https://github.com/hadley/dplyr/tree/master/vignettes). A quick way to view a vignette’s R code is with the `edit()` function:
```
v = vignette("introduction", package = "dplyr")
edit(v)
```
### 10\.2\.3 Getting help on functions
All functions have help pages. These contain, at a minimum, a list of the input arguments and the nature of the output that can be expected. Once a function has been identified, e.g. using one of the methods outlined in Section [10\.2\.1](learning.html#searching-r-for-topics), its *help page* can be displayed by prefixing the function name with `?`. Continuing with the previous example, the help page associated with the command `optim()` (for general purpose optimisation) can be invoked as follows:
```
# help("optim") # or, more concisely:
?optim
```
In general, help pages describe *what* functions do, not *how* they work. This is one of the reasons
that function help pages are thought (by some) to be difficult to understand. In practice,
this means that the help page does not describe the underlying mathematics or algorithm in
detail; its aim is to describe the interface.
A help page is divided into a number of sections.
The help for `optim()` is typical, in that it has a title (General\-purpose Optimization) followed by short Description, Usage and Arguments sections.
The Description is usually just a sentence or two explaining what the function does. Usage shows the arguments that the function needs to work, and Arguments describes what kind of objects the function expects. Longer sections typically include Details and Examples, which provide context and (usually reproducible) examples of how the function can be used, respectively. The typically short Value, References and See Also sections facilitate efficient learning by explaining what the output means, where you can find academic literature on the subject, and which functions are related.
`optim()` is a mature and heavily used function so it has a long help page: you’ll probably be thankful to learn that not all help pages are this long!
With so much potentially overwhelming information in a single help page, the placement of the short, dense sections at the beginning is efficient because it means you can understand the fundamentals of a function in few words.
Learning how to read and quickly interpret such help pages will greatly help your ability to learn R. Take some time to study the help for `optim()` in detail.
It is worth discussing the contents of the Usage section in particular, because this contains information that may not be immediately obvious:
```
optim(par, fn, gr = NULL, ...,
method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"),
lower = -Inf, upper = Inf, control = list(), hessian = FALSE)
```
This contains two pieces of critical information: 1\) the *essential* arguments, which must be provided for the function to work (`par` and `fn` in this case, as `gr` has a default value), appear before the `...` symbol; and 2\) *optional* arguments that control how the function works (`method`, `lower`, and `hessian` in this case) appear after it. `...` represents any further optional arguments, which in this case are passed on to the function supplied as `fn`. Let’s see how this works in practice by trying to run `optim()` to find the minimum value of the function \\(y \= x^4 \- x^2\\):
```
fn = function(x) {
x^4 - x^2
}
optim(par = 0, fn = fn)
#> Warning in optim(par = 0, fn = fn): one-dimensional optimization by Nelder-Mead is unreliable:
#> use "Brent" or optimize() directly
#> $par
#> [1] 0.707
#>
#> $value
#> [1] -0.25
#>
#> $counts
#> function gradient
#> 58 NA
#>
#> $convergence
#> [1] 0
#>
#> $message
#> NULL
```
The results show that the minimum value of `fn(x)` is found when `x = 0.707..` (\\(\\frac{1}{\\sqrt{2}}\\)), with a minimum value of \\(\-0\.25\\). It took \\(58\\) evaluations of `fn` (the `counts` element) for `optim()` to converge on this value. Each of these output values is described in the Value section of the help page.
From the help pages, we could guess that providing the function call without specifying `par` (i.e. `optim(fn = fn)`) would fail, which indeed it does.
The most *helpful* section is often the Examples. These lie at the bottom of the help page and show precisely how the function works. You can either copy and paste the code, or run the example code directly with the `example()` function (it is well worth running these examples because of the graphics produced):
```
example(optim)
```
When a package is added to CRAN, the example part of the documentation is run on all major platforms. This helps ensure that a package works on multiple systems.
Another useful section in the help file is `See Also:`. In the `optim()` help page, it links to `optimize()` which may be more appropriate for this use case.
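To illustrate, here is a minimal sketch following both the earlier warning message and this See Also pointer, reusing the `fn` defined above:
```
# One-dimensional minimisation with optimize(), which searches a bounded interval:
optimize(fn, interval = c(0, 1))
# Or optim() with the Brent method, which also requires finite bounds:
optim(par = 0.5, fn = fn, method = "Brent", lower = 0, upper = 1)
```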
### 10\.2\.4 Reading R source code
R is open source. This means that we can view the underlying source code and examine any function. Of course the code is complex, and diving straight into the source code won’t help that much. However, watching the GitHub [mirror](https://github.com/wch/r-source/) of the R source code will allow you to monitor the small changes that occur. This gives a nice entry point into a complex code base. Likewise, examining the source of small functions, such as `NCOL`, is informative, e.g. via `getFunction("NCOL")`.
Subscribing to the R NEWS [blog](https://developer.r-project.org/blosxom.cgi/R-devel/NEWS/) is an easy way of keeping track of future changes.
Many R packages are developed in the open on GitHub or R\-Forge. Select a few well known packages and examine their source. A good package to start with is **[drat](https://github.com/eddelbuettel/drat)**. This is a relatively simple package developed by Dirk Eddelbuettel (author of Rcpp) that only contains a few functions. It gives you an excellent pointer into software development by one of the key R package writers.
A shortcut for browsing R’s source code is provided by the RStudio IDE: clicking on a function name and then hitting `F2` will open its source code in the file editor. This works both for functions that exist in R and its packages and for functions that you created yourself in another R script (so long as it is within your project directory).
Although reading source code can be interesting in itself, it is probably best done in the context of a specific question, e.g. “how can I use a function name as an argument in my own function?” (looking at the source code of `apply()` may help here).
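For example, two quick ways to inspect function source from the console, both in base R:
```
# Print the definition of a small base function:
getFunction("NCOL")
# Inspect the body of apply(), which takes a function as its FUN argument:
body(apply)
```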
### 10\.2\.5 Swirl
**swirl** is an interactive teaching platform for R. It offers a number of extensions and, for the pioneering, the ability to create custom extensions for others. The learning curve and method will not work for everyone, but this package is worth flagging as a potent self\-teaching resource. In some ways **swirl** can be seen as the ultimate internal R help, as it allows dedicated learning sessions, based on multiple choice questions, all within a usual R session. To enter the **swirl** world, just enter the following; the resulting instructions will explain the rest:
```
library("swirl")
swirl()
```
10\.3 Online resources
----------------------
The R community has a strong online presence, providing many resources for learning. Over time, there has fortunately been a tendency for R resources to become more user\-friendly and up\-to\-date. Many resources that have been on CRAN for years are now dated, so it is more efficient to navigate directly to the most up\-to\-date and efficient\-to\-use resources.
‘Cheat sheets’ are short documents summarising how to do certain things. [RStudio](http://www.rstudio.com/resources/cheatsheets/), for example, provides excellent cheat sheets on [**dplyr**](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf), [**rmarkdown**](https://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf) and the [RStudio IDE](https://www.rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf) itself.
The R\-project website contains six detailed [official manuals](https://cran.r-project.org/manuals.html), plus a giant pdf file containing documentation for all recommended packages. These include [An Introduction to R](https://cran.r-project.org/doc/manuals/r-release/R-intro.html), [The R language definition](https://cran.r-project.org/doc/manuals/r-release/R-lang.html) and [R Installation and Administration](https://cran.r-project.org/doc/manuals/r-release/R-admin.html), all of which are recommended for people wanting to develop their general R skills. If you are developing a package and want to submit it to CRAN, the [Writing R Extensions](https://cran.r-project.org/doc/manuals/r-release/R-exts.html) manual is recommended reading, although it has to some extent been superseded by H. Wickham ([2015](#ref-Wickham_2015)[b](#ref-Wickham_2015)), the source code of which is [available online](https://github.com/hadley/r-pkgs). While these manuals are long, they contain important information written by experienced R programmers.
For more domain\-specific and up\-to\-date information on developments in R, we recommend checking out academic journals. The [R\-journal](https://journal.r-project.org/) regularly publishes articles describing new R packages, as well as general programming hints. Similarly, the articles in the [Journal of Statistical Software](https://www.jstatsoft.org/) have a strong R bias. Publications in these journals are generally of very high quality and have been rigorously peer reviewed. However, they may be rather technical for R novices.
The wider community provides a much larger body of information, of more variable quality, than the official R resources. The [Contributed Documentation](https://cran.r-project.org/other-docs.html) page on R’s home page contains dozens of tutorials and other resources on a wide range of topics. Some of these are excellent, although many are not kept up\-to\-date. An excellent resource for browsing R help pages online is provided by [rdocumentation.org](http://www.rdocumentation.org).
Lower grade but more frequently released information can be found on the ‘blogosphere’. Central to this is [R\-bloggers](http://www.r-bloggers.com/), a blog aggregator of content contributed by bloggers who write about R (in English). It is a great way to get exposed to new and different packages. Similarly monitoring the *[\#rstats](https://twitter.com/search?q=%23rstats)* twitter tag keeps you up\-to\-date with the latest news.
There are also mailing lists, Google groups and the Stack Exchange Q\&A sites. Before requesting help, read a few other questions to learn the format of the site. Make sure you search previous questions so you are not duplicating work. Perhaps the most important point is that people aren’t under **any** obligation to answer your question. One of the fantastic things about the open\-source community is that you can ask questions and one of the core developers may answer your question for free; but remember, everyone is busy!
### 10\.3\.1 Stackoverflow
The number one place on the internet for getting help on programming is [Stackoverflow](http://www.stackoverflow.com). This website provides a platform for asking and answering questions. Through site membership, questions and answers are voted up or down. Users of Stackoverflow earn reputation points when their question or answer is up\-voted. Anyone (with enough reputation) can edit a question or answer. This helps the content remain relevant.
Questions are tagged. The R questions can be found under the [R tag](http://stackoverflow.com/questions/tagged/r). The [R page](https://stackoverflow.com/tags/r/info) contains links to Official documentation, free resources, and various other links. Members of the Stackoverflow R community have tagged, using `r-faq`, a few questions that often crop up.
### 10\.3\.2 Mailing lists and groups.
There are many mailing lists and Google groups focused on R and particular packages. The main list for getting help is `R-help`. This is a high volume mailing list, with around a dozen messages per day. A more technical mailing list is `R-devel`. This list is intended for questions and discussion about code development in R. The discussion on this list is very technical, and it’s a good place to be introduced to new ideas \- but it’s not the place to ask about them! There are many other special interest mailing [lists](https://www.r-project.org/mail.html) covering topics ranging from high performance computing to ecology. Many popular packages also have their own mailing list or Google group, e.g. **ggplot2** and **shiny**. The key piece of advice: before mailing a list, read the relevant mailing archive and check that your message is appropriate.
10\.4 Asking a question
-----------------------
A great way to get specific help on a difficult topic is to ask for help.
However, asking a good question is not easy. Three common mistakes, and ways to avoid them, are outlined below:
1. Asking a question that has already been asked: ensure you’ve properly searched for the answer before posting.
2. The answer to the question can be found in R’s help: make sure you’ve properly read the relevant help pages before asking.
3. The question does not contain a reproducible example: create a simple version of your data, show the code you’ve tried, and display the result you are hoping for.
Your question should contain just enough information that your problem is clear and reproducible, while at the same time avoiding unnecessary details. Fortunately there is a StackOverflow question \- [How to make a great R reproducible example?](http://stackoverflow.com/q/5963269/203420) \- that provides excellent guidance.
Additional guides that explain how to create good programming questions are provided by [StackOverflow](https://stackoverflow.com/help/how-to-ask) and the [R mailing list posting guide](https://www.r-project.org/posting-guide.html).
### Minimal data set
What is the smallest data set you can construct that will reproduce your issue? Your actual data set may contain \\(10^5\\) rows and \\(10^4\\) columns, but to get your idea across you might only need \\(4\\) rows and \\(3\\) columns. Making small example data sets is easy. For example, to create a data frame with two numeric columns and a column of characters just use
```
set.seed(1)
example_df = data.frame(x = rnorm(4), y = rnorm(4), z = sample(LETTERS, 4))
```
Note that the call to `set.seed` ensures anyone who runs the code will get the same random number stream. Alternatively, you can use one of the many data sets that come with R \- see `library(help = "datasets")` for a list.
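As a minimal sketch of the built\-in data set approach, `mtcars`, for example, is available in every R session:
```
# The first few rows of a built-in data set are often enough
# to demonstrate a problem without constructing data from scratch:
head(mtcars, 4)
```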
If creating an example data set isn’t possible, then use `dput` on your actual data set. This will create an ASCII text representation of the object that will enable anyone to recreate the object:
```
dput(example_df)
#> structure(list(
#> x = c(-0.626453810742332, 0.183643324222082, -0.835628612410047, 1.59528080213779),
#> y = c(0.329507771815361, -0.820468384118015, 0.487429052428485, 0.738324705129217),
#> z = structure(c(3L, 4L, 1L, 2L), .Label = c("J", "R", "S", "Y"), class = "factor")),
#> .Names = c("x", "y", "z"), row.names = c(NA, -4L), class = "data.frame")
```
### Minimal example
What you should not do is simply copy and paste your entire function into your question. It’s unlikely that your entire function is broken, so simplify it to the bare minimum. The aim is to target your actual issue. Avoid copying and pasting large blocks of code; remove superfluous lines that are not part of the problem. Before asking your question, check that you can run your code in a clean R environment and reproduce your error.
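One way to perform that final check is to run the stripped\-down script in a fresh session; a sketch (the file name is hypothetical):
```
# From the shell: start R with no saved workspace or profiles, then run the
# minimal script. Any error raised here is genuinely reproducible.
R --vanilla -e 'source("minimal-example.R")'
```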
10\.5 Learning in depth
-----------------------
In the age of the internet and social media, many people feel lucky if they have time out to go for a walk, let alone sit down to read a book. However, it is undeniable that learning R *in depth* is a time\-consuming activity. Reading a book or a large tutorial (and completing the practical examples contained within) may not be the most efficient way to solve a particular problem in the short term, but it can be one of the best ways to learn R programming properly, especially in the long run.
In depth learning differs from shallow, incremental learning because rather than discovering how a specific function works, you find out how systems of functions work together. To take a metaphor from civil engineering, in depth learning is about building strong foundations, on which a wide range of buildings can be constructed. In depth learning can be highly efficient in the long run because it will pay back over many years, regardless of the domain\-specific problem you want to use R to tackle. Shallow learning, to continue the metaphor, is more like erecting many temporary structures: they can solve a specific problem in the short term but they will not be durable. Flimsy dwellings can be swept away. Shallow memories can be forgotten.
Having established that time spent ‘deep learning’ can, counter\-intuitively, be efficient, it is worth thinking about how to deep learn. This varies from person to person. It does not involve passively absorbing sacred information transmitted year after year by the ‘R gods’. It is an active, participatory process. To ensure that memories are rapidly actionable you must ‘learn by doing’. Learning from a cohesive, systematic and relatively comprehensive resource will help you to see the many interconnections between the different elements of R programming and how they can be combined for efficient work.
There are a number of such resources, including this book. Although the understandable tendency will be to use it incrementally, dipping in and out of different sections when different problems arise, we also recommend reading it systematically to see how the different elements of efficiency fit together. It is likely that as you work progressively through this book, in parallel with solving real world problems, you will realise that the solution is not to have the ‘right’ resource at hand but to be able to use the tools provided by R efficiently. Once you hit this level of proficiency, you should have the confidence to address most problems encountered from first principles. Over time, your ‘first port of call’ should move away from Google and even R’s internal help to simply giving it a try: informed trial and error, intelligent experimentation, can be the best approach to both learning and solving problems quickly, once you are equipped with the tools to do so. That’s why this is the last section in the book.
If you have already worked through all the examples in this book, or if you want to learn areas not covered in it, there are many excellent resources for extending and deepening your knowledge of R programming for fast and effective work, and to do new things with it. Because R is a large and ever\-evolving language, there is no definitive list of resources for taking your R skills to new heights. However, the list below, in rough ascending order of difficulty and depth, should provide plenty of material and motivation for in depth learning of R.
1. Free webinars and online courses provided by [RStudio](http://www.rstudio.com/resources/webinars/) and [DataCamp](https://www.datacamp.com/community/open-courses). Both organisations are well regarded and keep their content up\-to\-date, and there are many other providers of online courses. We recommend pushing your abilities, rather than going over the same material covered in this book.
2. *R for Data Science* (Grolemund and Wickham [2016](#ref-grolemund_r_2016)), a free book introducing many concepts and ‘tidy’ packages for working with data (a free online version is available from [r4ds.had.co.nz/](http://r4ds.had.co.nz/)).
3. *R programming for Data Science* (Peng [2014](#ref-peng_r_2014)), which provides in depth coverage of analysis and visualisation of datasets.
4. *Advanced R Programming* (H. Wickham [2014](#ref-Wickham2014)[a](#ref-Wickham2014)), an advanced book which looks at the internals of how R works (free from [adv\-r.had.co.nz](http://adv-r.had.co.nz/)).
10\.6 Spread the knowledge
--------------------------
The final thing to say on the topic of efficient learning relates to the [old](https://en.wikipedia.org/wiki/Docendo_discimus) (\~2000 years old!) saying *docendo discimus*:
> **by teaching we learn**.
This means that passing on information is one of the best ways to consolidate your learning. It was largely by helping others to learn R that we became proficient R users.
Demand for R skills is growing, so there are many opportunities to teach R. Whether it’s helping your colleague to use `apply()`, or writing a blog post on solving certain problems in R, teaching others R can be a rewarding experience. Furthermore, spreading the knowledge can be efficient: it will improve your own understanding of the language and benefit the entire community, providing a positive feedback to the movement towards open source software in data\-driven computing.
Assuming you have completed reading the book, the only remaining thing to say is well done: you are now an efficient R programmer. We hope you direct your new found skills towards the greater good and pass on the wisdom to others along the way.
A Building the book from source
===============================
The complete source of the book is available [online](https://github.com/csgillespie/efficientR). To build the book:
1. Install the latest version of R
* If you are using RStudio, make sure that’s up\-to\-date as well
2. Install the book dependencies.
```
# Make sure you are using the latest version of `devtools`
# Older versions do not work.
devtools::install_github("csgillespie/efficientR")
```
3. Clone the efficientR [repository](https://github.com/csgillespie/efficientR)
* See the chapter [9](collaboration.html#collaboration) on Efficient collaboration for an introduction
to git and github.
4. If you are using `RStudio`, open `index.Rmd` and click `Knit`.
	* Alternatively (for mainly Linux users) you can use the bundled `Makefile`, or build from the R console as sketched below
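A console\-based sketch of this final step (assuming the root of the cloned repository is the working directory):
```
# Render the whole book from the R console instead of the Knit button:
bookdown::render_book("index.Rmd")
```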
A.1 Package dependencies
------------------------
The book uses datasets stored in the **efficient** GitHub package, which can be installed (after **devtools** has been installed) as follows:
```
# Installs package dependencies shown below
devtools::install_github("csgillespie/efficient",
args = "--with-keep.source")
```
The book depends on the following CRAN packages:
| Name | Title | version |
| --- | --- | --- |
| assertive.reflection | Assertions for Checking the State of R (Cotton [2016](#ref-R-assertive.reflection)[a](#ref-R-assertive.reflection)) | 0\.0\.4 |
| benchmarkme | Crowd Sourced System Benchmarks (Gillespie [2019](#ref-R-benchmarkme)) | 1\.0\.3 |
| bookdown | Authoring Books and Technical Documents with R Markdown (Xie [2020](#ref-R-bookdown)[a](#ref-R-bookdown)) | 0\.18 |
| cranlogs | Download Logs from the ‘RStudio’ ‘CRAN’ Mirror (Csárdi [2019](#ref-R-cranlogs)) | 2\.1\.1 |
| data.table | Extension of `data.frame` (Dowle and Srinivasan [2019](#ref-R-data.table)) | 1\.12\.8 |
| dbplyr | A ‘dplyr’ Back End for Databases (H. Wickham and Ruiz [2020](#ref-R-dbplyr)) | 1\.4\.3 |
| devtools | Tools to Make Developing R Packages Easier (H. Wickham, Hester, and Chang [2020](#ref-R-devtools)) | 2\.3\.0 |
| DiagrammeR | Graph/Network Visualization (Iannone [2020](#ref-R-DiagrammeR)) | 1\.0\.5 |
| dplyr | A Grammar of Data Manipulation (H. Wickham, François, et al. [2020](#ref-R-dplyr)) | 0\.8\.5 |
| drat | ‘Drat’ R Archive Template (Carl Boettiger et al. [2019](#ref-R-drat)) | 0\.1\.5 |
| efficient | Becoming an Efficient R Programmer (Gillespie and Lovelace [2020](#ref-R-efficient)) | 0\.1\.3 |
| feather | R Bindings to the Feather ‘API’ (Wickham [2019](#ref-R-feather)) | 0\.3\.5 |
| formatR | Format R Code Automatically (Xie [2019](#ref-R-formatR)) | 1\.7 |
| fortunes | R Fortunes (Zeileis et al. [2016](#ref-R-fortunes)) | 1\.5\.4 |
| geosphere | Spherical Trigonometry (Hijmans [2019](#ref-R-geosphere)) | 1\.5\.10 |
| ggmap | Spatial Visualization with ggplot2 (Kahle, Wickham, and Jackson [2019](#ref-R-ggmap)) | 3\.0\.0 |
| ggplot2 | Create Elegant Data Visualisations Using the Grammar of Graphics (H. Wickham, Chang, et al. [2020](#ref-R-ggplot2)) | 3\.3\.0 |
| ggplot2movies | Movies Data (H. Wickham [2015](#ref-R-ggplot2movies)[a](#ref-R-ggplot2movies)) | 0\.0\.1 |
| knitr | A General\-Purpose Package for Dynamic Report Generation in R (Xie [2020](#ref-R-knitr)[b](#ref-R-knitr)) | 1\.28 |
| lubridate | Make Dealing with Dates a Little Easier (Spinu, Grolemund, and Wickham [2020](#ref-R-lubridate)) | 1\.7\.8 |
| maps | Draw Geographical Maps (Richard A. Becker, Ray Brownrigg. Enhancements by Thomas P Minka, and Deckmyn. [2018](#ref-R-maps)) | 3\.3\.0 |
| microbenchmark | Accurate Timing Functions (Mersmann [2019](#ref-R-microbenchmark)) | 1\.4\.7 |
| profvis | Interactive Visualizations for Profiling R Code (Chang, Luraschi, and Mastny [2019](#ref-R-profvis)) | 0\.3\.6 |
| pryr | Tools for Computing on the Language (H. Wickham [2018](#ref-R-pryr)) | 0\.1\.4 |
| Rcpp | Seamless R and C\+\+ Integration (Eddelbuettel et al. [2020](#ref-R-Rcpp)) | 1\.0\.4\.6 |
| readr | Read Rectangular Text Data (H. Wickham, Hester, and Francois [2018](#ref-R-readr)) | 1\.3\.1 |
| reticulate | Interface to ‘Python’ (Ushey, Allaire, and Tang [2020](#ref-R-reticulate)) | 1\.15 |
| rio | A Swiss\-Army Knife for Data I/O (Chan and Leeper [2018](#ref-R-rio)) | 0\.5\.16 |
| RSQLite | ‘SQLite’ Interface for R (Müller et al. [2020](#ref-R-RSQLite)) | 2\.2\.0 |
| swirl | Learn R, in R (Kross et al. [2020](#ref-R-swirl)) | 2\.4\.5 |
| tibble | Simple Data Frames (Müller and Wickham [2020](#ref-R-tibble)) | 3\.0\.1 |
| tidyr | Tidy Messy Data (H. Wickham and Henry [2020](#ref-R-tidyr)) | 1\.0\.2 |
2 The Very Basics
=================
This chapter provides a broad overview of the R language that will get you programming right away. In it, you will build a pair of virtual dice that you can use to generate random numbers. Don’t worry if you’ve never programmed before; the chapter will teach you everything you need to know.
To simulate a pair of dice, you will have to distill each die into its essential features. You cannot place a physical object, like a die, into a computer (well, not without unscrewing some screws), but you can save *information* about the object in your computer’s memory.
Which information should you save? In general, a die has six important pieces of information: when you roll a die, it can only result in one of six numbers: 1, 2, 3, 4, 5, and 6\. You can capture the essential characteristics of a die by saving the numbers 1, 2, 3, 4, 5, and 6 as a group of values in your computer’s memory.
Let’s work on saving these numbers first, and then consider a method for “rolling” our die.
2\.1 The R User Interface
-------------------------
Before you can ask your computer to save some numbers, you’ll need to know how to talk to it. That’s where R and RStudio come in. RStudio gives you a way to talk to your computer. R gives you a language to speak in. To get started, open RStudio just as you would open any other application on your computer. When you do, a window should appear in your screen like the one shown in Figure [2\.1](basics.html#fig:console).
Figure 2\.1: Your computer does your bidding when you type R commands at the prompt in the bottom line of the console pane. Don’t forget to hit the Enter key. When you first open RStudio, the console appears in the pane on your left, but you can change this with File \> Preferences in the menu bar.
If you do not yet have R and RStudio installed on your computer–or do not know what I am talking about–visit [Appendix A](starting.html#starting). The appendix will give you an overview of the two free tools and tell you how to download them.
The RStudio interface is simple. You type R code into the bottom line of the RStudio console pane and then press Enter to run it. The code you type is called a *command*, because it will command your computer to do something for you. The line you type it into is called the *command line*.
When you type a command at the prompt and hit Enter, your computer executes the command and shows you the results. Then RStudio displays a fresh prompt for your next command. For example, if you type `1 + 1` and hit Enter, RStudio will display:
```
> 1 + 1
[1] 2
>
```
You’ll notice that a `[1]` appears next to your result. R is just letting you know that this line begins with the first value in your result. Some commands return more than one value, and their results may fill up multiple lines. For example, the command `100:130` returns 31 values; it creates a sequence of integers from 100 to 130\. Notice that new bracketed numbers appear at the start of the second and third lines of output. These numbers just mean that the second line begins with the 14th value in the result, and the third line begins with the 25th value. You can mostly ignore the numbers that appear in brackets:
```
> 100:130
[1] 100 101 102 103 104 105 106 107 108 109 110 111 112
[14] 113 114 115 116 117 118 119 120 121 122 123 124 125
[25] 126 127 128 129 130
```
The colon operator (`:`) returns every integer between two integers. It is an easy way to create a sequence of numbers.
**Isn’t R a language?**
You may hear me speak of R in the third person. For example, I might say, “Tell R to do this” or “Tell R to do that”, but of course R can’t do anything; it is just a language. This way of speaking is shorthand for saying, “Tell your computer to do this by writing a command in the R language at the command line of your RStudio console.” Your computer, and not R, does the actual work.
Is this shorthand confusing and slightly lazy to use? Yes. Do a lot of people use it? Everyone I know–probably because it is so convenient.
**When do we compile?**
In some languages, like C, Java, and FORTRAN, you have to compile your human\-readable code into machine\-readable code (often 1s and 0s) before you can run it. If you’ve programmed in such a language before, you may wonder whether you have to compile your R code before you can use it. The answer is no. R is a dynamic programming language, which means R automatically interprets your code as you run it.
If you type an incomplete command and press Enter, R will display a `+` prompt, which means R is waiting for you to type the rest of your command. Either finish the command or hit Escape to start over:
```
> 5 -
+
+ 1
[1] 4
```
If you type a command that R doesn’t recognize, R will return an error message. If you ever see an error message, don’t panic. R is just telling you that your computer couldn’t understand or do what you asked it to do. You can then try a different command at the next prompt:
```
> 3 % 5
Error: unexpected input in "3 % 5"
>
```
Once you get the hang of the command line, you can easily do anything in R that you would do with a calculator. For example, you could do some basic arithmetic:
```
2 * 3
## 6
4 - 1
## 3
6 / (4 - 1)
## 2
```
Did you notice something different about this code? I’ve left out the `>`’s and `[1]`’s. This will make the code easier to copy and paste if you want to put it in your own console.
R treats the hashtag character, `#`, in a special way; R will not run anything that follows a hashtag on a line. This makes hashtags very useful for adding comments and annotations to your code. Humans will be able to read the comments, but your computer will pass over them. The hashtag is known as the *commenting symbol* in R.
For the remainder of the book, I’ll use hashtags to display the output of R code. I’ll use a single hashtag to add my own comments and a double hashtag, `##`, to display the results of code. I’ll avoid showing `>`s and `[1]`s unless I want you to look at them.
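Here is a minimal sketch of the commenting symbol in action, using the output convention just described (the object name is made up):
```
# everything after a hashtag on a line is ignored by R
rolls_wanted <- 2  # so this trailing note does not affect the code
rolls_wanted
## 2
```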
**Cancelling commands**
Some R commands may take a long time to run. You can cancel a command once it has begun by pressing ctrl \+ c. Note that it may also take R a long time to cancel the command.
**Exercise 2\.1 (Magic with Numbers)** That’s the basic interface for executing R code in RStudio. Think you have it? If so, try doing these simple tasks. If you execute everything correctly, you should end up with the same number that you started with:
1. Choose any number and add 2 to it.
2. Multiply the result by 3\.
3. Subtract 6 from the answer.
4. Divide what you get by 3\.
Throughout the book, I’ll put exercises in chunks, like the one above. I’ll follow each exercise with a model answer, like the one below.
*Solution.* You could start with the number 10, and then do the following steps:
```
10 + 2
## 12
12 * 3
## 36
36 - 6
## 30
30 / 3
## 10
```
2\.2 Objects
------------
Now that you know how to use R, let’s use it to make a virtual die. The `:` operator from a couple of pages ago gives you a nice way to create a group of numbers from one to six. The `:` operator returns its results as a **vector**, a one\-dimensional set of numbers:
```
1:6
## 1 2 3 4 5 6
```
That’s all there is to how a virtual die looks! But you are not done yet. Running `1:6` generated a vector of numbers for you to see, but it didn’t save that vector anywhere in your computer’s memory. What you are looking at is basically the footprints of six numbers that existed briefly and then melted back into your computer’s RAM. If you want to use those numbers again, you’ll have to ask your computer to save them somewhere. You can do that by creating an R *object*.
R lets you save data by storing it inside an R object. What is an object? Just a name that you can use to call up stored data. For example, you can save data into an object like *`a`* or *`b`*. Wherever R encounters the object, it will replace it with the data saved inside, like so:
```
a <- 1
a
## 1
a + 2
## 3
```
**What just happened?**
1. To create an R object, choose a name and then use the less\-than symbol, `<`, followed by a minus sign, `-`, to save data into it. This combination looks like an arrow, `<-`. R will make an object, give it your name, and store in it whatever follows the arrow. So `a <- 1` stores `1` in an object named `a`.
2. When you ask R what’s in `a`, R tells you on the next line.
3. You can use your object in new R commands, too. Since `a` previously stored the value of `1`, you’re now adding `1` to `2`.
So, for another example, the following code would create an object named `die` that contains the numbers one through six. To see what is stored in an object, just type the object’s name by itself:
```
die <- 1:6
die
## 1 2 3 4 5 6
```
When you create an object, the object will appear in the environment pane of RStudio, as shown in Figure [2\.2](basics.html#fig:environment). This pane will show you all of the objects you’ve created since opening RStudio.
Figure 2\.2: The RStudio environment pane keeps track of the R objects you create.
You can name an object in R almost anything you want, but there are a few rules. First, a name cannot start with a number. Second, a name cannot use some special symbols, like `^`, `!`, `$`, `@`, `+`, `-`, `/`, or `*`:
| Good names | Names that cause errors |
| --- | --- |
| a | 1trial |
| b | $ |
| FOO | ^mean |
| my\_var | 2nd |
| .day | !bad |
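A quick sketch of these rules at the command line (the last assignment is shown commented out because it would error):
```
my_var <- 1  # a legal name
.day <- 5    # also legal: a name may start with a dot
# Uncommenting the next line raises "Error: unexpected symbol",
# because a name cannot start with a number:
# 1trial <- 1
```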
**Capitalization**
R is case\-sensitive, so `name` and `Name` will refer to different objects:
`Name <- 1`
`name <- 0`
`Name + 1`
`## 2`
Finally, R will overwrite any previous information stored in an object without asking you for permission. So, it is a good idea to *not* use names that are already taken:
```
my_number <- 1
my_number
## 1
my_number <- 999
my_number
## 999
```
You can see which object names you have already used with the function `ls`:
```
ls()
## "a" "die" "my_number" "name" "Name"
```
You can also see which names you have used by examining RStudio’s environment pane.
You now have a virtual die that is stored in your computer’s memory. You can access it whenever you like by typing the word *`die`*. So what can you do with this die? Quite a lot. R will replace an object with its contents whenever the object’s name appears in a command. So, for example, you can do all sorts of math with the die. Math isn’t so helpful for rolling dice, but manipulating sets of numbers will be your stock\-in\-trade as a data scientist. So let’s take a look at how to do that:
```
die - 1
## 0 1 2 3 4 5
die / 2
## 0.5 1.0 1.5 2.0 2.5 3.0
die * die
## 1 4 9 16 25 36
```
If you are a big fan of linear algebra (and who isn’t?), you may notice that R does not always follow the rules of matrix multiplication. Instead, R uses *element\-wise execution*. When you manipulate a set of numbers, R will apply the same operation to each element in the set. So for example, when you run *`die - 1`*, R subtracts one from each element of `die`.
When you use two or more vectors in an operation, R will line up the vectors and perform a sequence of individual operations. For example, when you run *`die * die`*, R lines up the two `die` vectors and then multiplies the first element of vector 1 by the first element of vector 2\. R then multiplies the second element of vector 1 by the second element of vector 2, and so on, until every element has been multiplied. The result will be a new vector the same length as the first two, as shown in Figure [2\.3](basics.html#fig:elementwise).
Figure 2\.3: When R performs element\-wise execution, it matches up vectors and then manipulates each pair of elements independently.
If you give R two vectors of unequal lengths, R will repeat the shorter vector until it is as long as the longer vector, and then do the math, as shown in Figure [2\.4](basics.html#fig:recycle). This isn’t a permanent change–the shorter vector will be its original size after R does the math. If the length of the short vector does not divide evenly into the length of the long vector, R will return a warning message. This behavior is known as *vector recycling*, and it helps R do element\-wise operations:
```
1:2
## 1 2
1:4
## 1 2 3 4
die
## 1 2 3 4 5 6
die + 1:2
## 2 4 4 6 6 8
die + 1:4
## 2 4 6 8 6 8
Warning message:
In die + 1:4 :
longer object length is not a multiple of shorter object length
```
Figure 2\.4: R will repeat a short vector to do element\-wise operations with two vectors of uneven lengths.
Element\-wise operations are a very useful feature in R because they manipulate groups of values in an orderly way. When you start working with data sets, element\-wise operations will ensure that values from one observation or case are only paired with values from the same observation or case. Element\-wise operations also make it easier to write your own programs and functions in R.
But don’t think that R has given up on traditional matrix multiplication. You just have to ask for it when you want it. You can do inner multiplication with the `%*%` operator and outer multiplication with the `%o%` operator:
```
die %*% die
## 91
die %o% die
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 2 3 4 5 6
## [2,] 2 4 6 8 10 12
## [3,] 3 6 9 12 15 18
## [4,] 4 8 12 16 20 24
## [5,] 5 10 15 20 25 30
## [6,] 6 12 18 24 30 36
```
You can also do things like transpose a matrix with `t` and take its determinant with `det`.
Don’t worry if you’re not familiar with these operations. They are easy to look up, and you won’t need them for this book.
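For instance, a quick sketch using the outer product computed above:
```
m <- die %o% die  # the multiplication table from the previous example
t(m)[1, ]         # the first row of the transpose
## 1 2 3 4 5 6
det(diag(2))      # determinant of a 2 x 2 identity matrix
## 1
```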
Now that you can do math with your `die` object, let’s look at how you could “roll” it. Rolling your die will require something more sophisticated than basic arithmetic; you’ll need to randomly select one of the die’s values. And for that, you will need a *function*.
2\.3 Functions
--------------
R comes with many functions that you can use to do sophisticated tasks like random sampling. For example, you can round a number with the `round` function, or calculate its factorial with the `factorial` function. Using a function is pretty simple. Just write the name of the function and then the data you want the function to operate on in parentheses:
```
round(3.1415)
## 3
factorial(3)
## 6
```
The data that you pass into the function is called the function’s *argument*. The argument can be raw data, an R object, or even the results of another R function. In this last case, R will work from the innermost function to the outermost, as in Figure [2\.5](basics.html#fig:pemdas).
```
mean(1:6)
## 3.5
mean(die)
## 3.5
round(mean(die))
## 4
```
Figure 2\.5: When you link functions together, R will resolve them from the innermost operation to the outermost. Here R first looks up die, then calculates the mean of one through six, then rounds the mean.
Lucky for us, there is an R function that can help “roll” the die. You can simulate a roll of the die with R’s `sample` function. `sample` takes *two* arguments: a vector named `x` and a number named `size`. `sample` will return `size` elements from the vector:
```
sample(x = 1:4, size = 2)
## 3 2
```
To roll your die and get a number back, set `x` to `die` and sample one element from it. You’ll get a new (maybe different) number each time you roll it:
```
sample(x = die, size = 1)
## 2
sample(x = die, size = 1)
## 1
sample(x = die, size = 1)
## 6
```
Many R functions take multiple arguments that help them do their job. You can give a function as many arguments as you like as long as you separate each argument with a comma.
You may have noticed that I set `die` and `1` equal to the names of the arguments in `sample`, `x` and `size`. Every argument in every R function has a name. You can specify which data should be assigned to which argument by setting a name equal to data, as in the preceding code. This becomes important as you begin to pass multiple arguments to the same function; names help you avoid passing the wrong data to the wrong argument. However, using names is optional. You will notice that R users do not often use the name of the first argument in a function. So you might see the previous code written as:
```
sample(die, size = 1)
## 2
```
Often, the name of the first argument is not very descriptive, and it is usually obvious what the first piece of data refers to anyway.
But how do you know which argument names to use? If you try to use a name that a function does not expect, you will likely get an error:
```
round(3.1415, corners = 2)
## Error in round(3.1415, corners = 2) : unused argument(s) (corners = 2)
```
If you’re not sure which names to use with a function, you can look up the function’s arguments with `args`. To do this, place the name of the function in the parentheses behind `args`. For example, you can see that the `round` function takes two arguments, one named `x` and one named `digits`:
```
args(round)
## function (x, digits = 0)
## NULL
```
Did you notice that `args` shows that the `digits` argument of `round` is already set to 0? Frequently, an R function will take optional arguments like `digits`. These arguments are considered optional because they come with a default value. You can pass a new value to an optional argument if you want, and R will use the default value if you do not. For example, `round` will round your number to 0 digits past the decimal point by default. To override the default, supply your own value for `digits`:
```
round(3.1415)
## 3
round(3.1415, digits = 2)
## 3.14
```
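Defaults show up in the `args` output of other functions, too. Try it with `sample` (this is the signature in current versions of base R; yours should look the same or very similar):
```
args(sample)
## function (x, size, replace = FALSE, prob = NULL)
## NULL
```
`replace` and `prob` come with defaults, which is why you only had to supply `x` and `size` earlier. We’ll put `replace` to work in a moment.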
You should write out the names of each argument after the first one or two when you call a function with multiple arguments. Why? First, this will help you and others understand your code. It is usually obvious which argument your first input refers to (and sometimes the second input as well). However, you’d need a large memory to remember the third and fourth arguments of every R function. Second, and more importantly, writing out argument names prevents errors.
If you do not write out the names of your arguments, R will match your values to the arguments in your function by order. For example, in the following code, the first value, `die`, will be matched to the first argument of `sample`, which is named `x`. The next value, `1`, will be matched to the next argument, `size`:
```
sample(die, 1)
## 2
```
As you provide more arguments, it becomes more likely that your order and R’s order may not align. As a result, values may get passed to the wrong argument. Argument names prevent this. R will always match a value to its argument name, no matter where it appears in the order of arguments:
```
sample(size = 1, x = die)
## 2
```
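Order mistakes will not always trigger an error, which makes them sneaky. For example, swap the two inputs of `round` and R quietly rounds the wrong number; nothing warns you:
```
round(3.1415, 2)  # what we meant: round the first value to 2 digits
## 3.14
round(2, 3.1415)  # inputs swapped: R rounds 2 to (roughly) 3 digits
## 2
```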
### 2\.3\.1 Sample with Replacement
If you set `size = 2`, you can *almost* simulate a pair of dice. Before we run that code, think for a minute about why that might be the case. `sample` will return two numbers, one for each die:
```
sample(die, size = 2)
## 3 4
```
I said this “almost” works because this method does something funny. If you use it many times, you’ll notice that the second die never has the same value as the first die, which means you’ll never roll something like a pair of threes or snake eyes. What is going on?
By default, `sample` builds a sample *without replacement*. To see what this means, imagine that `sample` places all of the values of `die` in a jar or urn. Then imagine that `sample` reaches into the jar and pulls out values one by one to build its sample. Once a value has been drawn from the jar, `sample` sets it aside. The value doesn’t go back into the jar, so it cannot be drawn again. So if `sample` selects a six on its first draw, it will not be able to select a six on the second draw; six is no longer in the jar to be selected. Although `sample` creates its sample electronically, it follows this seemingly physical behavior.
One side effect of this behavior is that each draw depends on the draws that come before it. In the real world, however, when you roll a pair of dice, each die is independent of the other. If the first die comes up six, it does not prevent the second die from coming up six. In fact, it doesn’t influence the second die in any way whatsoever. You can recreate this behavior in `sample` by adding the argument `replace = TRUE`:
```
sample(die, size = 2, replace = TRUE)
## 5 5
```
The argument `replace = TRUE` causes `sample` to sample *with replacement*. Our jar example provides a good way to understand the difference between sampling with replacement and without. When `sample` uses replacement, it draws a value from the jar and records the value. Then it puts the value back into the jar. In other words, `sample` *replaces* each value after each draw. As a result, `sample` may select the same value on the second draw. Each value has a chance of being selected each time. It is as if every draw were the first draw.
Sampling with replacement is an easy way to create *independent random samples*. Each value in your sample will be a sample of size one that is independent of the other values. This is the correct way to simulate a pair of dice:
```
sample(die, size = 2, replace = TRUE)
## 2 4
```
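If you would like some evidence that the two dice are now independent, roll many pairs and check how often doubles appear; about one pair in six should match. The values below are from one run on my machine, and `replicate` is a base R helper that repeats an expression a given number of times, so your proportion will differ slightly:
```
rolls <- replicate(10000, sample(die, size = 2, replace = TRUE))
mean(rolls[1, ] == rolls[2, ])  # proportion of doubles; should be near 1/6
## 0.1683
```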
Congratulate yourself; you’ve just run your first simulation in R! You now have a method for simulating the result of rolling a pair of dice. If you want to add up the dice, you can feed your result straight into the `sum` function:
```
dice <- sample(die, size = 2, replace = TRUE)
dice
## 2 4
sum(dice)
## 6
```
What would happen if you call `dice` multiple times? Would R generate a new pair of dice values each time? Let’s give it a try:
```
dice
## 2 4
dice
## 2 4
dice
## 2 4
```
Nope. Each time you call `dice`, R will show you the result of that one time you called `sample` and saved the output to `dice`. R won’t rerun `sample(die, 2, replace = TRUE)` to create a new roll of the dice. This is a relief in a way. Once you save a set of results to an R object, those results do not change. Programming would be quite hard if the values of your objects changed each time you called them.
However, it *would* be convenient to have an object that can re\-roll the dice whenever you call it. You can make such an object by writing your own R function.
2\.4 Writing Your Own Functions
-------------------------------
To recap, you already have working R code that simulates rolling a pair of dice:
```
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
```
You can retype this code into the console anytime you want to re\-roll your dice. However, this is an awkward way to work with the code. It would be easier to use your code if you wrapped it into its own function, which is exactly what we’ll do now. We’re going to write a function named `roll` that you can use to roll your virtual dice. When you’re finished, the function will work like this: each time you call `roll()`, R will return the sum of rolling two dice:
```
roll()
## 8
roll()
## 3
roll()
## 7
```
Functions may seem mysterious or fancy, but they are just another type of R object. Instead of containing data, they contain code. This code is stored in a special format that makes it easy to reuse the code in new situations. You can write your own functions by recreating this format.
### 2\.4\.1 The Function Constructor
Every function in R has three basic parts: a name, a body of code, and a set of arguments. To make your own function, you need to replicate these parts and store them in an R object, which you can do with the `function` function. To do this, call `function()` and follow it with a pair of braces, `{}`:
```
my_function <- function() {}
```
`function` will build a function out of whatever R code you place between the braces. For example, you can turn your dice code into a function by calling:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
Notice that I indented each line of code between the braces. This makes the code easier for you and me to read but has no impact on how the code runs. R ignores spaces and line breaks and executes one complete expression at a time.
Just hit the Enter key between each line after the first brace, `{`. R will wait for you to type the last brace, `}`, before it responds.
Don’t forget to save the output of `function` to an R object. This object will become your new function. To use it, write the object’s name followed by an open and closed parenthesis:
```
roll()
## 9
```
You can think of the parentheses as the “trigger” that causes R to run the function. If you type in a function’s name *without* the parentheses, R will show you the code that is stored inside the function. If you type in the name *with* the parentheses, R will run that code:
```
roll
## function() {
## die <- 1:6
## dice <- sample(die, size = 2, replace = TRUE)
## sum(dice)
## }
roll()
## 6
```
The code that you place inside your function is known as the *body* of the function. When you run a function in R, R will execute all of the code in the body and then return the result of the last line of code. If the last line of code doesn’t return a value, neither will your function, so you want to ensure that your final line of code returns a value. One way to check this is to think about what would happen if you ran the body of code line by line in the command line. Would R display a result after the last line, or would it not?
Here’s some code that would display a result:
```
dice
1 + 1
sqrt(2)
```
And here’s some code that would not:
```
dice <- sample(die, size = 2, replace = TRUE)
two <- 1 + 1
a <- sqrt(2)
```
Do you notice the pattern? These lines of code do not return a value to the command line; they save a value to an object.
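For example, a function whose final line is an assignment will compute its result but display nothing when you call it (`roll_quiet` is just my name for this illustration, not a function from the book):
```
roll_quiet <- function() {
  die <- 1:6
  dice <- sample(die, size = 2, replace = TRUE)
  total <- sum(dice)  # the last line saves a value, so nothing prints
}
roll_quiet()  # R runs the body but displays no result
```
The sum still exists; `x <- roll_quiet()` would capture it, because R returns the value of an assignment invisibly.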
2\.5 Arguments
--------------
What if we removed one line of code from our function and changed the name `die` to `bones`, like this?
```
roll2 <- function() {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now I’ll get an error when I run the function. The function needs the object `bones` to do its job, but there is no object named `bones` to be found:
```
roll2()
## Error in sample(bones, size = 2, replace = TRUE) :
## object 'bones' not found
```
You can supply `bones` when you call `roll2` if you make `bones` an argument of the function. To do this, put the name `bones` in the parentheses that follow `function` when you define `roll2`:
```
roll2 <- function(bones) {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now `roll2` will work as long as you supply `bones` when you call the function. You can take advantage of this to roll different types of dice each time you call `roll2`. Dungeons and Dragons, here we come!
Remember, we’re rolling pairs of dice:
```
roll2(bones = 1:4)
## 3
roll2(bones = 1:6)
## 10
roll2(1:20)
## 31
```
Notice that `roll2` will still give an error if you do not supply a value for the `bones` argument when you call `roll2`:
```
roll2()
## Error in sample(bones, size = 2, replace = TRUE) :
## argument "bones" is missing, with no default
```
You can prevent this error by giving the `bones` argument a default value. To do this, set `bones` equal to a value when you define `roll2`:
```
roll2 <- function(bones = 1:6) {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now you can supply a new value for `bones` if you like, and `roll2` will use the default if you do not:
```
roll2()
## 9
```
You can give your functions as many arguments as you like. Just list their names, separated by commas, in the parentheses that follow `function`. When the function is run, R will replace each argument name in the function body with the value that the user supplies for the argument. If the user does not supply a value, R will replace the argument name with the argument’s default value (if you defined one).
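As a quick sketch (the names `roll3` and `ndice` are mine, not anything built into R), here is a version of our function with two arguments, both with defaults. The printed sums are just one possible outcome, since the rolls are random:
```
roll3 <- function(bones = 1:6, ndice = 2) {
  dice <- sample(bones, size = ndice, replace = TRUE)
  sum(dice)
}
roll3()                         # two six-sided dice
## 9
roll3(bones = 1:20, ndice = 3)  # three twenty-sided dice
## 27
```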
To summarize, `function` helps you construct your own R functions. You create a body of code for your function to run by writing code between the braces that follow `function`. You create arguments for your function to use by supplying their names in the parentheses that follow `function`. Finally, you give your function a name by saving its output to an R object, as shown in Figure [2\.6](basics.html#fig:functions).
Once you’ve created your function, R will treat it like every other function in R. Think about how useful this is. Have you ever tried to create a new Excel option and add it to Microsoft’s menu bar? Or a new slide animation and add it to PowerPoint’s options? When you work with a programming language, you can do these types of things. As you learn to program in R, you will be able to create new, customized, reproducible tools for yourself whenever you like. [Project 3: Slot Machine](#slots) will teach you much more about writing functions in R.
Figure 2\.6: Every function in R has the same parts, and you can use function to create these parts. Assign the result to a name, so you can call the function later.
2\.6 Scripts
------------
What if you want to edit `roll2` again? You could go back and retype each line of code in `roll2`, but it would be so much easier if you had a draft of the code to start from. You can create a draft of your code as you go by using an R *script*. An R script is just a plain text file that you save R code in. You can open an R script in RStudio by going to `File > New File > R script` in the menu bar. RStudio will then open a fresh script above your console pane, as shown in Figure [2\.7](basics.html#fig:script).
I strongly encourage you to write and edit all of your R code in a script before you run it in the console. Why? This habit creates a reproducible record of your work. When you’re finished for the day, you can save your script and then use it to rerun your entire analysis the next day. Scripts are also very handy for editing and proofreading your code, and they make a nice copy of your work to share with others. To save a script, click the scripts pane, and then go to `File > Save As` in the menu bar.
Figure 2\.7: When you open an R Script (File \> New File \> R Script in the menu bar), RStudio creates a fourth pane above the console where you can write and edit your code.
RStudio comes with many built\-in features that make it easy to work with scripts. First, you can automatically execute a line of code in a script by clicking the Run button, as shown in Figure [2\.8](basics.html#fig:run).
R will run whichever line of code your cursor is on. If you have a whole section highlighted, R will run the highlighted code. Alternatively, you can run the entire script by clicking the Source button. Don’t like clicking buttons? You can use Control \+ Return as a shortcut for the Run button. On Macs, that would be Command \+ Return.
Figure 2\.8: You can run a highlighted portion of code in your script if you click the Run button at the top of the scripts pane. You can run the entire script by clicking the Source button.
If you’re not convinced about scripts, you soon will be. It becomes a pain to write multi\-line code in the console’s single\-line command line. Let’s avoid that headache and open your first script now before we move to the next chapter.
**Extract function**
RStudio comes with a tool that can help you build functions. To use it, highlight the lines of code in your R script that you want to turn into a function. Then click `Code > Extract Function` in the menu bar. RStudio will ask you for a function name to use and then wrap your code in a `function` call. It will scan the code for undefined variables and use these as arguments.
You may want to double\-check RStudio’s work. It assumes that your code is correct, so if it does something surprising, you may have a problem in your code.
2\.7 Summary
------------
You’ve covered a lot of ground already. You now have a virtual die stored in your computer’s memory, as well as your own R function that rolls a pair of dice. You’ve also begun speaking the R language.
As you’ve seen, R is a language that you can use to talk to your computer. You write commands in R and run them at the command line for your computer to read. Your computer will sometimes talk back–for example, when you commit an error–but it usually just does what you ask and then displays the result.
The two most important components of the R language are objects, which store data, and functions, which manipulate data. R also uses a host of operators like `+`, `-`, `*`, `/`, and `<-` to do basic tasks. As a data scientist, you will use R objects to store data in your computer’s memory, and you will use functions to automate tasks and do complicated calculations. We will examine objects in more depth later in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards) and dig further into functions in [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine). The vocabulary you have developed here will make each of those projects easier to understand. However, we’re not done with your dice yet.
In [Packages and Help Pages](packages.html#packages), you’ll run some simulations on your dice and build your first graphs in R. You’ll also look at two of the most useful components of the R language: R *packages*, which are collections of functions written by R’s talented community of developers, and R documentation, which is a collection of help pages built into R that explains every function and data set in the language.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/basics.html |
2 The Very Basics
=================
This chapter provides a broad overview of the R language that will get you programming right away. In it, you will build a pair of virtual dice that you can use to generate random numbers. Don’t worry if you’ve never programmed before; the chapter will teach you everything you need to know.
To simulate a pair of dice, you will have to distill each die into its essential features. You cannot place a physical object, like a die, into a computer (well, not without unscrewing some screws), but you can save *information* about the object in your computer’s memory.
Which information should you save? In general, a die has six important pieces of information: when you roll a die, it can only result in one of six numbers: 1, 2, 3, 4, 5, and 6\. You can capture the essential characteristics of a die by saving the numbers 1, 2, 3, 4, 5, and 6 as a group of values in your computer’s memory.
Let’s work on saving these numbers first, and then consider a method for “rolling” our die.
2\.1 The R User Interface
-------------------------
Before you can ask your computer to save some numbers, you’ll need to know how to talk to it. That’s where R and RStudio come in. RStudio gives you a way to talk to your computer. R gives you a language to speak in. To get started, open RStudio just as you would open any other application on your computer. When you do, a window should appear on your screen like the one shown in Figure [2\.1](basics.html#fig:console).
Figure 2\.1: Your computer does your bidding when you type R commands at the prompt in the bottom line of the console pane. Don’t forget to hit the Enter key. When you first open RStudio, the console appears in the pane on your left, but you can change this with File \> Preferences in the menu bar.
If you do not yet have R and RStudio installed on your computer–or do not know what I am talking about–visit [Appendix A](starting.html#starting). The appendix will give you an overview of the two free tools and tell you how to download them.
The RStudio interface is simple. You type R code into the bottom line of the RStudio console pane and then press Enter to run it. The code you type is called a *command*, because it will command your computer to do something for you. The line you type it into is called the *command line*.
When you type a command at the prompt and hit Enter, your computer executes the command and shows you the results. Then RStudio displays a fresh prompt for your next command. For example, if you type `1 + 1` and hit Enter, RStudio will display:
```
> 1 + 1
[1] 2
>
```
You’ll notice that a `[1]` appears next to your result. R is just letting you know that this line begins with the first value in your result. Some commands return more than one value, and their results may fill up multiple lines. For example, the command `100:130` returns 31 values; it creates a sequence of integers from 100 to 130\. Notice that new bracketed numbers appear at the start of the second and third lines of output. These numbers just mean that the second line begins with the 14th value in the result, and the third line begins with the 25th value. You can mostly ignore the numbers that appear in brackets:
```
> 100:130
[1] 100 101 102 103 104 105 106 107 108 109 110 111 112
[14] 113 114 115 116 117 118 119 120 121 122 123 124 125
[25] 126 127 128 129 130
```
The colon operator (`:`) returns every integer between two integers. It is an easy way to create a sequence of numbers.
**Isn’t R a language?**
You may hear me speak of R in the third person. For example, I might say, “Tell R to do this” or “Tell R to do that”, but of course R can’t do anything; it is just a language. This way of speaking is shorthand for saying, “Tell your computer to do this by writing a command in the R language at the command line of your RStudio console.” Your computer, and not R, does the actual work.
Is this shorthand confusing and slightly lazy to use? Yes. Do a lot of people use it? Everyone I know–probably because it is so convenient.
**When do we compile?**
In some languages, like C, Java, and FORTRAN, you have to compile your human\-readable code into machine\-readable code (often 1s and 0s) before you can run it. If you’ve programmed in such a language before, you may wonder whether you have to compile your R code before you can use it. The answer is no. R is a dynamic programming language, which means R automatically interprets your code as you run it.
If you type an incomplete command and press Enter, R will display a `+` prompt, which means R is waiting for you to type the rest of your command. Either finish the command or hit Escape to start over:
```
> 5 -
+
+ 1
[1] 4
```
If you type a command that R doesn’t recognize, R will return an error message. If you ever see an error message, don’t panic. R is just telling you that your computer couldn’t understand or do what you asked it to do. You can then try a different command at the next prompt:
```
> 3 % 5
Error: unexpected input in "3 % 5"
>
```
Once you get the hang of the command line, you can easily do anything in R that you would do with a calculator. For example, you could do some basic arithmetic:
```
2 * 3
## 6
4 - 1
## 3
6 / (4 - 1)
## 2
```
Did you notice something different about this code? I’ve left out the `>`’s and `[1]`’s. This will make the code easier to copy and paste if you want to put it in your own console.
R treats the hashtag character, `#`, in a special way; R will not run anything that follows a hashtag on a line. This makes hashtags very useful for adding comments and annotations to your code. Humans will be able to read the comments, but your computer will pass over them. The hashtag is known as the *commenting symbol* in R.
For the remainder of the book, I’ll use hashtags to display the output of R code. I’ll use a single hashtag to add my own comments and a double hashtag, `##`, to display the results of code. I’ll avoid showing `>`s and `[1]`s unless I want you to look at them.
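For example, here is a short snippet in that style; the first line is a comment that R will skip, and the double hashtag shows the result (a minimal sketch):
```
# compute the area of a circle with radius 2 (R ignores this line)
3.14 * 2^2
## 12.56
```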
**Cancelling commands**
Some R commands may take a long time to run. You can cancel a command once it has begun by pressing ctrl \+ c. Note that it may also take R a long time to cancel the command.
**Exercise 2\.1 (Magic with Numbers)** That’s the basic interface for executing R code in RStudio. Think you have it? If so, try doing these simple tasks. If you execute everything correctly, you should end up with the same number that you started with:
1. Choose any number and add 2 to it.
2. Multiply the result by 3\.
3. Subtract 6 from the answer.
4. Divide what you get by 3\.
Throughout the book, I’ll put exercises in chunks, like the one above. I’ll follow each exercise with a model answer, like the one below.
*Solution.* You could start with the number 10, and then do the following steps:
```
10 + 2
## 12
12 * 3
## 36
36 - 6
## 30
30 / 3
## 10
```
2\.2 Objects
------------
Now that you know how to use R, let’s use it to make a virtual die. The `:` operator from a couple of pages ago gives you a nice way to create a group of numbers from one to six. The `:` operator returns its results as a **vector**, a one\-dimensional set of numbers:
```
1:6
## 1 2 3 4 5 6
```
That’s all there is to how a virtual die looks! But you are not done yet. Running `1:6` generated a vector of numbers for you to see, but it didn’t save that vector anywhere in your computer’s memory. What you are looking at is basically the footprints of six numbers that existed briefly and then melted back into your computer’s RAM. If you want to use those numbers again, you’ll have to ask your computer to save them somewhere. You can do that by creating an R *object*.
R lets you save data by storing it inside an R object. What is an object? Just a name that you can use to call up stored data. For example, you can save data into an object like *`a`* or *`b`*. Wherever R encounters the object, it will replace it with the data saved inside, like so:
```
a <- 1
a
## 1
a + 2
## 3
```
**What just happened?**
1. To create an R object, choose a name and then use the less\-than symbol, `<`, followed by a minus sign, `-`, to save data into it. This combination looks like an arrow, `<-`. R will make an object, give it your name, and store in it whatever follows the arrow. So `a <- 1` stores `1` in an object named `a`.
2. When you ask R what’s in `a`, R tells you on the next line.
3. You can use your object in new R commands, too. Since `a` previously stored the value of `1`, you’re now adding `1` to `2`.
So, for another example, the following code would create an object named `die` that contains the numbers one through six. To see what is stored in an object, just type the object’s name by itself:
```
die <- 1:6
die
## 1 2 3 4 5 6
```
When you create an object, the object will appear in the environment pane of RStudio, as shown in Figure [2\.2](basics.html#fig:environment). This pane will show you all of the objects you’ve created since opening RStudio.
Figure 2\.2: The RStudio environment pane keeps track of the R objects you create.
You can name an object in R almost anything you want, but there are a few rules. First, a name cannot start with a number. Second, a name cannot use some special symbols, like `^`, `!`, `$`, `@`, `+`, `-`, `/`, or `*`:
| Good names | Names that cause errors |
| --- | --- |
| a | 1trial |
| b | $ |
| FOO | ^mean |
| my\_var | 2nd |
| .day | !bad |
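For example, trying to create an object with a name that starts with a number stops R before it even runs the command; you should see an error along these lines:
```
1trial <- 1:6
## Error: unexpected symbol in "1trial"
```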
**Capitalization**
R is case\-sensitive, so `name` and `Name` will refer to different objects:
`Name <- 1`
`name <- 0`
`Name + 1`
`## 2`
Finally, R will overwrite any previous information stored in an object without asking you for permission. So, it is a good idea to *not* use names that are already taken:
```
my_number <- 1
my_number
## 1
my_number <- 999
my_number
## 999
```
You can see which object names you have already used with the function `ls`:
```
ls()
## "a" "die" "my_number" "name" "Name"
```
You can also see which names you have used by examining RStudio’s environment pane.
You now have a virtual die that is stored in your computer’s memory. You can access it whenever you like by typing the word *`die`*. So what can you do with this die? Quite a lot. R will replace an object with its contents whenever the object’s name appears in a command. So, for example, you can do all sorts of math with the die. Math isn’t so helpful for rolling dice, but manipulating sets of numbers will be your stock\-in\-trade as a data scientist. So let’s take a look at how to do that:
```
die - 1
## 0 1 2 3 4 5
die / 2
## 0.5 1.0 1.5 2.0 2.5 3.0
die * die
## 1 4 9 16 25 36
```
If you are a big fan of linear algebra (and who isn’t?), you may notice that R does not always follow the rules of matrix multiplication. Instead, R uses *element\-wise execution*. When you manipulate a set of numbers, R will apply the same operation to each element in the set. So for example, when you run *`die - 1`*, R subtracts one from each element of `die`.
When you use two or more vectors in an operation, R will line up the vectors and perform a sequence of individual operations. For example, when you run *`die * die`*, R lines up the two `die` vectors and then multiplies the first element of vector 1 by the first element of vector 2\. R then multiplies the second element of vector 1 by the second element of vector 2, and so on, until every element has been multiplied. The result will be a new vector the same length as the first two, as shown in Figure [2\.3](basics.html#fig:elementwise).
Figure 2\.3: When R performs element\-wise execution, it matches up vectors and then manipulates each pair of elements independently.
If you give R two vectors of unequal lengths, R will repeat the shorter vector until it is as long as the longer vector, and then do the math, as shown in Figure [2\.4](basics.html#fig:recycle). This isn’t a permanent change–the shorter vector will be its original size after R does the math. If the length of the short vector does not divide evenly into the length of the long vector, R will return a warning message. This behavior is known as *vector recycling*, and it helps R do element\-wise operations:
```
1:2
## 1 2
1:4
## 1 2 3 4
die
## 1 2 3 4 5 6
die + 1:2
## 2 4 4 6 6 8
die + 1:4
## 2 4 6 8 6 8
Warning message:
In die + 1:4 :
longer object length is not a multiple of shorter object length
```
Figure 2\.4: R will repeat a short vector to do element\-wise operations with two vectors of uneven lengths.
Element\-wise operations are a very useful feature in R because they manipulate groups of values in an orderly way. When you start working with data sets, element\-wise operations will ensure that values from one observation or case are only paired with values from the same observation or case. Element\-wise operations also make it easier to write your own programs and functions in R.
But don’t think that R has given up on traditional matrix multiplication. You just have to ask for it when you want it. You can do inner multiplication with the `%*%` operator and outer multiplication with the `%o%` operator:
```
die %*% die
## 91
die %o% die
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 2 3 4 5 6
## [2,] 2 4 6 8 10 12
## [3,] 3 6 9 12 15 18
## [4,] 4 8 12 16 20 24
## [5,] 5 10 15 20 25 30
## [6,] 6 12 18 24 30 36
```
You can also do things like transpose a matrix with `t` and take its determinant with `det`.
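For instance, you could transpose the outer product from above, or take the determinant of a small matrix (a quick sketch; the `matrix` function builds a matrix from a vector):
```
t(die %o% die)   # the outer product is symmetric, so this looks the same
det(matrix(c(1, 2, 3, 4), nrow = 2))
## -2
```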
Don’t worry if you’re not familiar with these operations. They are easy to look up, and you won’t need them for this book.
Now that you can do math with your `die` object, let’s look at how you could “roll” it. Rolling your die will require something more sophisticated than basic arithmetic; you’ll need to randomly select one of the die’s values. And for that, you will need a *function*.
2\.3 Functions
--------------
R comes with many functions that you can use to do sophisticated tasks like random sampling. For example, you can round a number with the `round` function, or calculate its factorial with the `factorial` function. Using a function is pretty simple. Just write the name of the function and then the data you want the function to operate on in parentheses:
```
round(3.1415)
## 3
factorial(3)
## 6
```
The data that you pass into the function is called the function’s *argument*. The argument can be raw data, an R object, or even the results of another R function. In this last case, R will work from the innermost function to the outermost, as in Figure [2\.5](basics.html#fig:pemdas).
```
mean(1:6)
## 3.5
mean(die)
## 3.5
round(mean(die))
## 4
```
Figure 2\.5: When you link functions together, R will resolve them from the innermost operation to the outermost. Here R first looks up die, then calculates the mean of one through six, then rounds the mean.
Lucky for us, there is an R function that can help “roll” the die. You can simulate a roll of the die with R’s `sample` function. `sample` takes *two* arguments: a vector named `x` and a number named `size`. `sample` will return `size` elements from the vector:
```
sample(x = 1:4, size = 2)
## 3 2
```
To roll your die and get a number back, set `x` to `die` and sample one element from it. You’ll get a new (maybe different) number each time you roll it:
```
sample(x = die, size = 1)
## 2
sample(x = die, size = 1)
## 1
sample(x = die, size = 1)
## 6
```
Many R functions take multiple arguments that help them do their job. You can give a function as many arguments as you like as long as you separate each argument with a comma.
You may have noticed that I set `die` and `1` equal to the names of the arguments in `sample`, `x` and `size`. Every argument in every R function has a name. You can specify which data should be assigned to which argument by setting a name equal to data, as in the preceding code. This becomes important as you begin to pass multiple arguments to the same function; names help you avoid passing the wrong data to the wrong argument. However, using names is optional. You will notice that R users do not often use the name of the first argument in a function. So you might see the previous code written as:
```
sample(die, size = 1)
## 2
```
Often, the name of the first argument is not very descriptive, and it is usually obvious what the first piece of data refers to anyway.
But how do you know which argument names to use? If you try to use a name that a function does not expect, you will likely get an error:
```
round(3.1415, corners = 2)
## Error in round(3.1415, corners = 2) : unused argument(s) (corners = 2)
```
If you’re not sure which names to use with a function, you can look up the function’s arguments with `args`. To do this, place the name of the function in the parentheses behind `args`. For example, you can see that the `round` function takes two arguments, one named `x` and one named `digits`:
```
args(round)
## function (x, digits = 0)
## NULL
```
Did you notice that `args` shows that the `digits` argument of `round` is already set to 0? Frequently, an R function will take optional arguments like `digits`. These arguments are considered optional because they come with a default value. You can pass a new value to an optional argument if you want, and R will use the default value if you do not. For example, `round` will round your number to 0 digits past the decimal point by default. To override the default, supply your own value for `digits`:
```
round(3.1415)
## 3
round(3.1415, digits = 2)
## 3.14
```
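The same trick works on `sample` itself. In current versions of base R, `args(sample)` shows two optional arguments, `replace` and `prob`, alongside `x` and `size` (your output may differ slightly across R versions):
```
args(sample)
## function (x, size, replace = FALSE, prob = NULL)
## NULL
```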
You should write out the name of each argument after the first one or two when you call a function with multiple arguments. Why? First, this will help you and others understand your code. It is usually obvious which argument your first input refers to (and sometimes the second input as well). However, you’d need a large memory to remember the third and fourth arguments of every R function. Second, and more importantly, writing out argument names prevents errors.
If you do not write out the names of your arguments, R will match your values to the arguments in your function by order. For example, in the following code, the first value, `die`, will be matched to the first argument of `sample`, which is named `x`. The next value, `1`, will be matched to the next argument, `size`:
```
sample(die, 1)
## 2
```
As you provide more arguments, it becomes more likely that your order and R’s order may not align. As a result, values may get passed to the wrong argument. Argument names prevent this. R will always match a value to its argument name, no matter where it appears in the order of arguments:
```
sample(size = 1, x = die)
## 2
```
### 2\.3\.1 Sample with Replacement
If you set `size = 2`, you can *almost* simulate a pair of dice. Before we run that code, think for a minute about why that might be the case. `sample` will return two numbers, one for each die:
```
sample(die, size = 2)
## 3 4
```
I said this “almost” works because this method does something funny. If you use it many times, you’ll notice that the second die never has the same value as the first die, which means you’ll never roll something like a pair of threes or snake eyes. What is going on?
By default, `sample` builds a sample *without replacement*. To see what this means, imagine that `sample` places all of the values of `die` in a jar or urn. Then imagine that `sample` reaches into the jar and pulls out values one by one to build its sample. Once a value has been drawn from the jar, `sample` sets it aside. The value doesn’t go back into the jar, so it cannot be drawn again. So if `sample` selects a six on its first draw, it will not be able to select a six on the second draw; six is no longer in the jar to be selected. Although `sample` creates its sample electronically, it follows this seemingly physical behavior.
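One way to see this behavior is to draw all six values at once. Without replacement, each value comes out of the jar exactly once, so you get a random ordering of the die (your order will differ, but no value repeats):
```
sample(die, size = 6)
## 2 5 1 6 4 3
```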
One side effect of this behavior is that each draw depends on the draws that come before it. In the real world, however, when you roll a pair of dice, each die is independent of the other. If the first die comes up six, it does not prevent the second die from coming up six. In fact, it doesn’t influence the second die in any way whatsoever. You can recreate this behavior in `sample` by adding the argument `replace = TRUE`:
```
sample(die, size = 2, replace = TRUE)
## 5 5
```
The argument `replace = TRUE` causes `sample` to sample *with replacement*. Our jar example provides a good way to understand the difference between sampling with replacement and without. When `sample` uses replacement, it draws a value from the jar and records the value. Then it puts the value back into the jar. In other words, `sample` *replaces* each value after each draw. As a result, `sample` may select the same value on the second draw. Each value has a chance of being selected each time. It is as if every draw were the first draw.
Sampling with replacement is an easy way to create *independent random samples*. Each value in your sample will be a sample of size one that is independent of the other values. This is the correct way to simulate a pair of dice:
```
sample(die, size = 2, replace = TRUE)
## 2 4
```
Congratulate yourself; you’ve just run your first simulation in R! You now have a method for simulating the result of rolling a pair of dice. If you want to add up the dice, you can feed your result straight into the `sum` function:
```
dice <- sample(die, size = 2, replace = TRUE)
dice
## 2 4
sum(dice)
## 6
```
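If you want to convince yourself that doubles really are possible now, you can roll many pairs and count how often the two dice match. This sketch uses `replicate`, a base R function that repeats an expression; the proportion should land near 1/6, though your value will vary:
```
rolls <- replicate(6000, sample(die, size = 2, replace = TRUE))
mean(rolls[1, ] == rolls[2, ])  # proportion of doubles across 6,000 rolls
## 0.165
```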
What would happen if you call `dice` multiple times? Would R generate a new pair of dice values each time? Let’s give it a try:
```
dice
## 2 4
dice
## 2 4
dice
## 2 4
```
Nope. Each time you call `dice`, R will show you the result of that one time you called `sample` and saved the output to `dice`. R won’t rerun `sample(die, 2, replace = TRUE)` to create a new roll of the dice. This is a relief in a way. Once you save a set of results to an R object, those results do not change. Programming would be quite hard if the values of your objects changed each time you called them.
However, it *would* be convenient to have an object that can re\-roll the dice whenever you call it. You can make such an object by writing your own R function.
2\.4 Writing Your Own Functions
-------------------------------
To recap, you already have working R code that simulates rolling a pair of dice:
```
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
```
You can retype this code into the console anytime you want to re\-roll your dice. However, this is an awkward way to work with the code. It would be easier to use your code if you wrapped it into its own function, which is exactly what we’ll do now. We’re going to write a function named `roll` that you can use to roll your virtual dice. When you’re finished, the function will work like this: each time you call `roll()`, R will return the sum of rolling two dice:
```
roll()
## 8
roll()
## 3
roll()
## 7
```
Functions may seem mysterious or fancy, but they are just another type of R object. Instead of containing data, they contain code. This code is stored in a special format that makes it easy to reuse the code in new situations. You can write your own functions by recreating this format.
### 2\.4\.1 The Function Constructor
Every function in R has three basic parts: a name, a body of code, and a set of arguments. To make your own function, you need to replicate these parts and store them in an R object, which you can do with the `function` function. To do this, call `function()` and follow it with a pair of braces, `{}`:
```
my_function <- function() {}
```
`function` will build a function out of whatever R code you place between the braces. For example, you can turn your dice code into a function by calling:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
Notice that I indented each line of code between the braces. This makes the code easier for you and me to read but has no impact on how the code runs. R ignores spaces and line breaks and executes one complete expression at a time.
Just hit the Enter key between each line after the first brace, `{`. R will wait for you to type the last brace, `}`, before it responds.
Don’t forget to save the output of `function` to an R object. This object will become your new function. To use it, write the object’s name followed by an open and closed parenthesis:
```
roll()
## 9
```
You can think of the parentheses as the “trigger” that causes R to run the function. If you type in a function’s name *without* the parentheses, R will show you the code that is stored inside the function. If you type in the name *with* the parentheses, R will run that code:
```
roll
## function() {
## die <- 1:6
## dice <- sample(die, size = 2, replace = TRUE)
## sum(dice)
## }
roll()
## 6
```
The code that you place inside your function is known as the *body* of the function. When you run a function in R, R will execute all of the code in the body and then return the result of the last line of code. If the last line of code doesn’t return a value, neither will your function, so you want to ensure that your final line of code returns a value. One way to check this is to think about what would happen if you ran the body of code line by line in the command line. Would R display a result after the last line, or would it not?
Here’s some code that would display a result:
```
dice
1 + 1
sqrt(2)
```
And here’s some code that would not:
```
dice <- sample(die, size = 2, replace = TRUE)
two <- 1 + 1
a <- sqrt(2)
```
Do you notice the pattern? These lines of code do not return a value to the command line; they save a value to an object.
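To see the pitfall in action, here is a hypothetical variant of `roll` (the name `roll_bad` is mine) whose final line saves the sum instead of returning it. Calling it displays nothing, because the last line of the body does not return a visible value:
```
roll_bad <- function() {
  die <- 1:6
  dice <- sample(die, size = 2, replace = TRUE)
  total <- sum(dice)  # saved to an object, so nothing is returned visibly
}
roll_bad()
```
Ending the body with plain `total`, or simply `sum(dice)`, fixes the problem.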
2\.5 Arguments
--------------
What if we removed one line of code from our function and changed the name `die` to `bones`, like this?
```
roll2 <- function() {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now I’ll get an error when I run the function. The function needs the object `bones` to do its job, but there is no object named `bones` to be found:
```
roll2()
## Error in sample(bones, size = 2, replace = TRUE) :
## object 'bones' not found
```
You can supply `bones` when you call `roll2` if you make `bones` an argument of the function. To do this, put the name `bones` in the parentheses that follow `function` when you define `roll2`:
```
roll2 <- function(bones) {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now `roll2` will work as long as you supply `bones` when you call the function. You can take advantage of this to roll different types of dice each time you call `roll2`. Dungeons and Dragons, here we come!
Remember, we’re rolling pairs of dice:
```
roll2(bones = 1:4)
## 3
roll2(bones = 1:6)
## 10
roll2(1:20)
## 31
```
Notice that `roll2` will still give an error if you do not supply a value for the `bones` argument when you call `roll2`:
```
roll2()
## Error in sample(bones, size = 2, replace = TRUE) :
## argument "bones" is missing, with no default
```
You can prevent this error by giving the `bones` argument a default value. To do this, set `bones` equal to a value when you define `roll2`:
```
roll2 <- function(bones = 1:6) {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now you can supply a new value for `bones` if you like, and `roll2` will use the default if you do not:
```
roll2()
## 9
```
You can give your functions as many arguments as you like. Just list their names, separated by commas, in the parentheses that follow `function`. When the function is run, R will replace each argument name in the function body with the value that the user supplies for the argument. If the user does not supply a value, R will replace the argument name with the argument’s default value (if you defined one).
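For example, you could combine both ideas into a function that rolls any number of any kind of dice. This is just a sketch, and the names `roll3` and `ndice` are ones I’ve chosen for illustration (your sums will vary):
```
roll3 <- function(bones = 1:6, ndice = 2) {
  dice <- sample(bones, size = ndice, replace = TRUE)
  sum(dice)
}
roll3()                         # two six-sided dice
## 7
roll3(bones = 1:20, ndice = 3)  # three twenty-sided dice
## 28
```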
To summarize, `function` helps you construct your own R functions. You create a body of code for your function to run by writing code between the braces that follow `function`. You create arguments for your function to use by supplying their names in the parentheses that follow `function`. Finally, you give your function a name by saving its output to an R object, as shown in Figure [2\.6](basics.html#fig:functions).
Once you’ve created your function, R will treat it like every other function in R. Think about how useful this is. Have you ever tried to create a new Excel option and add it to Microsoft’s menu bar? Or a new slide animation and add it to PowerPoint’s options? When you work with a programming language, you can do these types of things. As you learn to program in R, you will be able to create new, customized, reproducible tools for yourself whenever you like. [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine) will teach you much more about writing functions in R.
Figure 2\.6: Every function in R has the same parts, and you can use function to create these parts. Assign the result to a name, so you can call the function later.
2\.6 Scripts
------------
What if you want to edit `roll2` again? You could go back and retype each line of code in `roll2`, but it would be so much easier if you had a draft of the code to start from. You can create a draft of your code as you go by using an R *script*. An R script is just a plain text file that you save R code in. You can open an R script in RStudio by going to `File > New File > R script` in the menu bar. RStudio will then open a fresh script above your console pane, as shown in Figure [2\.7](basics.html#fig:script).
I strongly encourage you to write and edit all of your R code in a script before you run it in the console. Why? This habit creates a reproducible record of your work. When you’re finished for the day, you can save your script and then use it to rerun your entire analysis the next day. Scripts are also very handy for editing and proofreading your code, and they make a nice copy of your work to share with others. To save a script, click the scripts pane, and then go to `File > Save As` in the menu bar.
Figure 2\.7: When you open an R Script (File \> New File \> R Script in the menu bar), RStudio creates a fourth pane above the console where you can write and edit your code.
RStudio comes with many built\-in features that make it easy to work with scripts. First, you can automatically execute a line of code in a script by clicking the Run button, as shown in Figure [2\.8](basics.html#fig:run).
R will run whichever line of code your cursor is on. If you have a whole section highlighted, R will run the highlighted code. Alternatively, you can run the entire script by clicking the Source button. Don’t like clicking buttons? You can use Control \+ Return as a shortcut for the Run button. On Macs, that would be Command \+ Return.
Figure 2\.8: You can run a highlighted portion of code in your script if you click the Run button at the top of the scripts pane. You can run the entire script by clicking the Source button.
If you’re not convinced about scripts, you soon will be. It becomes a pain to write multi\-line code in the console’s single\-line command line. Let’s avoid that headache and open your first script now before we move to the next chapter.
**Extract function**
RStudio comes with a tool that can help you build functions. To use it, highlight the lines of code in your R script that you want to turn into a function. Then click `Code > Extract Function` in the menu bar. RStudio will ask you for a function name to use and then wrap your code in a `function` call. It will scan the code for undefined variables and use these as arguments.
You may want to double\-check RStudio’s work. It assumes that your code is correct, so if it does something surprising, you may have a problem in your code.
2\.7 Summary
------------
You’ve covered a lot of ground already. You now have a virtual die stored in your computer’s memory, as well as your own R function that rolls a pair of dice. You’ve also begun speaking the R language.
As you’ve seen, R is a language that you can use to talk to your computer. You write commands in R and run them at the command line for your computer to read. Your computer will sometimes talk back–for example, when you commit an error–but it usually just does what you ask and then displays the result.
The two most important components of the R language are objects, which store data, and functions, which manipulate data. R also uses a host of operators like `+`, `-`, `*`, `/`, and `<-` to do basic tasks. As a data scientist, you will use R objects to store data in your computer’s memory, and you will use functions to automate tasks and do complicated calculations. We will examine objects in more depth later in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards) and dig further into functions in [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine). The vocabulary you have developed here will make each of those projects easier to understand. However, we’re not done with your dice yet.
In [Packages and Help Pages](packages.html#packages), you’ll run some simulations on your dice and build your first graphs in R. You’ll also look at two of the most useful components of the R language: R *packages*, which are collections of functions written by R’s talented community of developers, and R documentation, which is a collection of help pages built into R that explains every function and data set in the language.
2\.1 The R User Interface
-------------------------
Before you can ask your computer to save some numbers, you’ll need to know how to talk to it. That’s where R and RStudio come in. RStudio gives you a way to talk to your computer. R gives you a language to speak in. To get started, open RStudio just as you would open any other application on your computer. When you do, a window should appear in your screen like the one shown in Figure [2\.1](basics.html#fig:console).
Figure 2\.1: Your computer does your bidding when you type R commands at the prompt in the bottom line of the console pane. Don’t forget to hit the Enter key. When you first open RStudio, the console appears in the pane on your left, but you can change this with File \> Preferences in the menu bar.
If you do not yet have R and RStudio intalled on your computer–or do not know what I am talking about–visit [Appendix A](starting.html#starting). The appendix will give you an overview of the two free tools and tell you how to download them.
The RStudio interface is simple. You type R code into the bottom line of the RStudio console pane and then click Enter to run it. The code you type is called a *command*, because it will command your computer to do something for you. The line you type it into is called the *command line*.
When you type a command at the prompt and hit Enter, your computer executes the command and shows you the results. Then RStudio displays a fresh prompt for your next command. For example, if you type `1 + 1` and hit Enter, RStudio will display:
```
> 1 + 1
[1] 2
>
```
You’ll notice that a `[1]` appears next to your result. R is just letting you know that this line begins with the first value in your result. Some commands return more than one value, and their results may fill up multiple lines. For example, the command `100:130` returns 31 values; it creates a sequence of integers from 100 to 130\. Notice that new bracketed numbers appear at the start of the second and third lines of output. These numbers just mean that the second line begins with the 14th value in the result, and the third line begins with the 25th value. You can mostly ignore the numbers that appear in brackets:
```
> 100:130
[1] 100 101 102 103 104 105 106 107 108 109 110 111 112
[14] 113 114 115 116 117 118 119 120 121 122 123 124 125
[25] 126 127 128 129 130
```
The colon operator (`:`) returns every integer between two integers. It is an easy way to create a sequence of numbers.
**Isn’t R a language?**
You may hear me speak of R in the third person. For example, I might say, “Tell R to do this” or “Tell R to do that”, but of course R can’t do anything; it is just a language. This way of speaking is shorthand for saying, “Tell your computer to do this by writing a command in the R language at the command line of your RStudio console.” Your computer, and not R, does the actual work.
Is this shorthand confusing and slightly lazy to use? Yes. Do a lot of people use it? Everyone I know–probably because it is so convenient.
**When do we compile?**
In some languages, like C, Java, and FORTRAN, you have to compile your human\-readable code into machine\-readable code (often 1s and 0s) before you can run it. If you’ve programmed in such a language before, you may wonder whether you have to compile your R code before you can use it. The answer is no. R is a dynamic programming language, which means R automatically interprets your code as you run it.
If you type an incomplete command and press Enter, R will display a `+` prompt, which means R is waiting for you to type the rest of your command. Either finish the command or hit Escape to start over:
```
> 5 -
+
+ 1
[1] 4
```
If you type a command that R doesn’t recognize, R will return an error message. If you ever see an error message, don’t panic. R is just telling you that your computer couldn’t understand or do what you asked it to do. You can then try a different command at the next prompt:
```
> 3 % 5
Error: unexpected input in "3 % 5"
>
```
Once you get the hang of the command line, you can easily do anything in R that you would do with a calculator. For example, you could do some basic arithmetic:
```
2 * 3
## 6
4 - 1
## 3
6 / (4 - 1)
## 2
```
Did you notice something different about this code? I’ve left out the `>`’s and `[1]`’s. This will make the code easier to copy and paste if you want to put it in your own console.
R treats the hashtag character, `#`, in a special way; R will not run anything that follows a hashtag on a line. This makes hashtags very useful for adding comments and annotations to your code. Humans will be able to read the comments, but your computer will pass over them. The hashtag is known as the *commenting symbol* in R.
For the remainder of the book, I’ll use hashtags to display the output of R code. I’ll use a single hashtag to add my own comments and a double hashtag, `##`, to display the results of code. I’ll avoid showing `>`s and `[1]`s unless I want you to look at them.
**Cancelling commands**
Some R commands may take a long time to run. You can cancel a command once it has begun by pressing ctrl \+ c. Note that it may also take R a long time to cancel the command.
**Exercise 2\.1 (Magic with Numbers)** That’s the basic interface for executing R code in RStudio. Think you have it? If so, try doing these simple tasks. If you execute everything correctly, you should end up with the same number that you started with:
1. Choose any number and add 2 to it.
2. Multiply the result by 3\.
3. Subtract 6 from the answer.
4. Divide what you get by 3\.
Throughout the book, I’ll put exercises in chunks, like the one above. I’ll follow each exercise with a model answer, like the one below.
*Solution.* You could start with the number 10, and then do the following steps:
```
10 + 2
## 12
12 * 3
## 36
36 - 6
## 30
30 / 3
## 10
```
2\.2 Objects
------------
Now that you know how to use R, let’s use it to make a virtual die. The `:` operator from a couple of pages ago gives you a nice way to create a group of numbers from one to six. The `:` operator returns its results as a **vector**, a one\-dimensional set of numbers:
```
1:6
## 1 2 3 4 5 6
```
That’s all there is to how a virtual die looks! But you are not done yet. Running `1:6` generated a vector of numbers for you to see, but it didn’t save that vector anywhere in your computer’s memory. What you are looking at is basically the footprints of six numbers that existed briefly and then melted back into your computer’s RAM. If you want to use those numbers again, you’ll have to ask your computer to save them somewhere. You can do that by creating an R *object*.
R lets you save data by storing it inside an R object. What is an object? Just a name that you can use to call up stored data. For example, you can save data into an object like *`a`* or *`b`*. Wherever R encounters the object, it will replace it with the data saved inside, like so:
```
a <- 1
a
## 1
a + 2
## 3
```
**What just happened?**
1. To create an R object, choose a name and then use the less\-than symbol, `<`, followed by a minus sign, `-`, to save data into it. This combination looks like an arrow, `<-`. R will make an object, give it your name, and store in it whatever follows the arrow. So `a <- 1` stores `1` in an object named `a`.
2. When you ask R what’s in `a`, R tells you on the next line.
3. You can use your object in new R commands, too. Since `a` previously stored the value of `1`, you’re now adding `1` to `2`.
So, for another example, the following code would create an object named `die` that contains the numbers one through six. To see what is stored in an object, just type the object’s name by itself:
```
die <- 1:6
die
## 1 2 3 4 5 6
```
When you create an object, the object will appear in the environment pane of RStudio, as shown in Figure [2\.2](basics.html#fig:environment). This pane will show you all of the objects you’ve created since opening RStudio.
Figure 2\.2: The RStudio environment pane keeps track of the R objects you create.
You can name an object in R almost anything you want, but there are a few rules. First, a name cannot start with a number. Second, a name cannot use some special symbols, like `^`, `!`, `$`, `@`, `+`, `-`, `/`, or `*`:
| Good names | Names that cause errors |
| --- | --- |
| a | 1trial |
| b | $ |
| FOO | ^mean |
| my\_var | 2nd |
| .day | !bad |
**Capitalization**
R is case\-sensitive, so `name` and `Name` will refer to different objects:
`Name <- 1`
`name <- 0`
`Name + 1`
`## 2`
Finally, R will overwrite any previous information stored in an object without asking you for permission. So, it is a good idea to *not* use names that are already taken:
```
my_number <- 1
my_number
## 1
my_number <- 999
my_number
## 999
```
You can see which object names you have already used with the function `ls`:
```
ls()
## "a" "die" "my_number" "name" "Name"
```
You can also see which names you have used by examining RStudio’s environment pane.
You now have a virtual die that is stored in your computer’s memory. You can access it whenever you like by typing the word *`die`*. So what can you do with this die? Quite a lot. R will replace an object with its contents whenever the object’s name appears in a command. So, for example, you can do all sorts of math with the die. Math isn’t so helpful for rolling dice, but manipulating sets of numbers will be your stock and trade as a data scientist. So let’s take a look at how to do that:
```
die - 1
## 0 1 2 3 4 5
die / 2
## 0.5 1.0 1.5 2.0 2.5 3.0
die * die
## 1 4 9 16 25 36
```
If you are a big fan of linear algebra (and who isn’t?), you may notice that R does not always follow the rules of matrix multiplication. Instead, R uses *element\-wise execution*. When you manipulate a set of numbers, R will apply the same operation to each element in the set. So for example, when you run *`die - 1`*, R subtracts one from each element of `die`.
When you use two or more vectors in an operation, R will line up the vectors and perform a sequence of individual operations. For example, when you run *`die * die`*, R lines up the two `die` vectors and then multiplies the first element of vector 1 by the first element of vector 2\. R then multiplies the second element of vector 1 by the second element of vector 2, and so on, until every element has been multiplied. The result will be a new vector the same length as the first two, as shown in Figure [2\.3](basics.html#fig:elementwise).
Figure 2\.3: When R performs element\-wise execution, it matches up vectors and then manipulates each pair of elements independently.
If you give R two vectors of unequal lengths, R will repeat the shorter vector until it is as long as the longer vector, and then do the math, as shown in Figure [2\.4](basics.html#fig:recycle). This isn’t a permanent change–the shorter vector will be its original size after R does the math. If the length of the short vector does not divide evenly into the length of the long vector, R will return a warning message. This behavior is known as *vector recycling*, and it helps R do element\-wise operations:
```
1:2
## 1 2
1:4
## 1 2 3 4
die
## 1 2 3 4 5 6
die + 1:2
## 2 4 4 6 6 8
die + 1:4
## 2 4 6 8 6 8
Warning message:
In die + 1:4 :
longer object length is not a multiple of shorter object length
```
Figure 2\.4: R will repeat a short vector to do element\-wise operations with two vectors of uneven lengths.
Element\-wise operations are a very useful feature in R because they manipulate groups of values in an orderly way. When you start working with data sets, element\-wise operations will ensure that values from one observation or case are only paired with values from the same observation or case. Element\-wise operations also make it easier to write your own programs and functions in R.
But don’t think that R has given up on traditional matrix multiplication. You just have to ask for it when you want it. You can do inner multiplication with the `%*%` operator and outer multiplication with the `%o%` operator:
```
die %*% die
## 91
die %o% die
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 2 3 4 5 6
## [2,] 2 4 6 8 10 12
## [3,] 3 6 9 12 15 18
## [4,] 4 8 12 16 20 24
## [5,] 5 10 15 20 25 30
## [6,] 6 12 18 24 30 36
```
You can also do things like transpose a matrix with `t` and take its determinant with `det`.
Don’t worry if you’re not familiar with these operations. They are easy to look up, and you won’t need them for this book.
Now that you can do math with your `die` object, let’s look at how you could “roll” it. Rolling your die will require something more sophisticated than basic arithmetic; you’ll need to randomly select one of the die’s values. And for that, you will need a *function*.
2\.3 Functions
--------------
R comes with many functions that you can use to do sophisticated tasks like random sampling. For example, you can round a number with the `round` function, or calculate its factorial with the `factorial` function. Using a function is pretty simple. Just write the name of the function and then the data you want the function to operate on in parentheses:
```
round(3.1415)
## 3
factorial(3)
## 6
```
The data that you pass into the function is called the function’s *argument*. The argument can be raw data, an R object, or even the results of another R function. In this last case, R will work from the innermost function to the outermost, as in Figure [2\.5](basics.html#fig:pemdas).
```
mean(1:6)
## 3.5
mean(die)
## 3.5
round(mean(die))
## 4
```
Figure 2\.5: When you link functions together, R will resolve them from the innermost operation to the outermost. Here R first looks up die, then calculates the mean of one through six, then rounds the mean.
Lucky for us, there is an R function that can help “roll” the die. You can simulate a roll of the die with R’s `sample` function. `sample` takes *two* arguments: a vector named `x` and a number named `size`. `sample` will return `size` elements from the vector:
```
sample(x = 1:4, size = 2)
## 3 2
```
To roll your die and get a number back, set `x` to `die` and sample one element from it. You’ll get a new (maybe different) number each time you roll it:
```
sample(x = die, size = 1)
## 2
sample(x = die, size = 1)
## 1
sample(x = die, size = 1)
## 6
```
Many R functions take multiple arguments that help them do their job. You can give a function as many arguments as you like as long as you separate each argument with a comma.
You may have noticed that I set `die` and `1` equal to the names of the arguments in `sample`, `x` and `size`. Every argument in every R function has a name. You can specify which data should be assigned to which argument by setting a name equal to data, as in the preceding code. This becomes important as you begin to pass multiple arguments to the same function; names help you avoid passing the wrong data to the wrong argument. However, using names is optional. You will notice that R users do not often use the name of the first argument in a function. So you might see the previous code written as:
```
sample(die, size = 1)
## 2
```
Often, the name of the first argument is not very descriptive, and it is usually obvious what the first piece of data refers to anyways.
But how do you know which argument names to use? If you try to use a name that a function does not expect, you will likely get an error:
```
round(3.1415, corners = 2)
## Error in round(3.1415, corners = 2) : unused argument(s) (corners = 2)
```
If you’re not sure which names to use with a function, you can look up the function’s arguments with `args`. To do this, place the name of the function in the parentheses behind `args`. For example, you can see that the `round` function takes two arguments, one named `x` and one named `digits`:
```
args(round)
## function (x, digits = 0)
## NULL
```
Did you notice that `args` shows that the `digits` argument of `round` is already set to 0? Frequently, an R function will take optional arguments like `digits`. These arguments are considered optional because they come with a default value. You can pass a new value to an optional argument if you want, and R will use the default value if you do not. For example, `round` will round your number to 0 digits past the decimal point by default. To override the default, supply your own value for `digits`:
```
round(3.1415)
## 3
round(3.1415, digits = 2)
## 3.14
```
You should write out the names of each argument after the first one or two when you call a function with multiple arguments. Why? First, this will help you and others understand your code. It is usually obvious which argument your first input refers to (and sometimes the second input as well). However, you’d need a large memory to remember the third and fourth arguments of every R function. Second, and more importantly, writing out argument names prevents errors.
If you do not write out the names of your arguments, R will match your values to the arguments in your function by order. For example, in the following code, the first value, `die`, will be matched to the first argument of `sample`, which is named `x`. The next value, `1`, will be matched to the next argument, `size`:
```
sample(die, 1)
## 2
```
As you provide more arguments, it becomes more likely that your order and R’s order may not align. As a result, values may get passed to the wrong argument. Argument names prevent this. R will always match a value to its argument name, no matter where it appears in the order of arguments:
```
sample(size = 1, x = die)
## 2
```
### 2\.3\.1 Sample with Replacement
If you set `size = 2`, you can *almost* simulate a pair of dice. Before we run that code, think for a minute why that might be the case. `sample` will return two numbers, one for each die:
```
sample(die, size = 2)
## 3 4
```
I said this “almost” works because this method does something funny. If you use it many times, you’ll notice that the second die never has the same value as the first die, which means you’ll never roll something like a pair of threes or snake eyes. What is going on?
By default, `sample` builds a sample *without replacement*. To see what this means, imagine that `sample` places all of the values of `die` in a jar or urn. Then imagine that `sample` reaches into the jar and pulls out values one by one to build its sample. Once a value has been drawn from the jar, `sample` sets it aside. The value doesn’t go back into the jar, so it cannot be drawn again. So if `sample` selects a six on its first draw, it will not be able to select a six on the second draw; six is no longer in the jar to be selected. Although `sample` creates its sample electronically, it follows this seemingly physical behavior.
One side effect of this behavior is that each draw depends on the draws that come before it. In the real world, however, when you roll a pair of dice, each die is independent of the other. If the first die comes up six, it does not prevent the second die from coming up six. In fact, it doesn’t influence the second die in any way whatsoever. You can recreate this behavior in `sample` by adding the argument `replace = TRUE`:
```
sample(die, size = 2, replace = TRUE)
## 5 5
```
The argument `replace = TRUE` causes `sample` to sample *with replacement*. Our jar example provides a good way to understand the difference between sampling with replacement and without. When `sample` uses replacement, it draws a value from the jar and records the value. Then it puts the value back into the jar. In other words, `sample` *replaces* each value after each draw. As a result, `sample` may select the same value on the second draw. Each value has a chance of being selected each time. It is as if every draw were the first draw.
Sampling with replacement is an easy way to create *independent random samples*. Each value in your sample will be a sample of size one that is independent of the other values. This is the correct way to simulate a pair of dice:
```
sample(die, size = 2, replace = TRUE)
## 2 4
```
Congratulate yourself; you’ve just run your first simulation in R! You now have a method for simulating the result of rolling a pair of dice. If you want to add up the dice, you can feed your result straight into the `sum` function:
```
dice <- sample(die, size = 2, replace = TRUE)
dice
## 2 4
sum(dice)
## 6
```
What would happen if you call `dice` multiple times? Would R generate a new pair of dice values each time? Let’s give it a try:
```
dice
## 2 4
dice
## 2 4
dice
## 2 4
```
Nope. Each time you call `dice`, R will show you the result of that one time you called `sample` and saved the output to `dice`. R won’t rerun `sample(die, 2, replace = TRUE)` to create a new roll of the dice. This is a relief in a way. Once you save a set of results to an R object, those results do not change. Programming would be quite hard if the values of your objects changed each time you called them.
However, it *would* be convenient to have an object that can re\-roll the dice whenever you call it. You can make such an object by writing your own R function.
### 2\.3\.1 Sample with Replacement
If you set `size = 2`, you can *almost* simulate a pair of dice. Before we run that code, think for a minute why that might be the case. `sample` will return two numbers, one for each die:
```
sample(die, size = 2)
## 3 4
```
I said this “almost” works because this method does something funny. If you use it many times, you’ll notice that the second die never has the same value as the first die, which means you’ll never roll something like a pair of threes or snake eyes. What is going on?
By default, `sample` builds a sample *without replacement*. To see what this means, imagine that `sample` places all of the values of `die` in a jar or urn. Then imagine that `sample` reaches into the jar and pulls out values one by one to build its sample. Once a value has been drawn from the jar, `sample` sets it aside. The value doesn’t go back into the jar, so it cannot be drawn again. So if `sample` selects a six on its first draw, it will not be able to select a six on the second draw; six is no longer in the jar to be selected. Although `sample` creates its sample electronically, it follows this seemingly physical behavior.
One side effect of this behavior is that each draw depends on the draws that come before it. In the real world, however, when you roll a pair of dice, each die is independent of the other. If the first die comes up six, it does not prevent the second die from coming up six. In fact, it doesn’t influence the second die in any way whatsoever. You can recreate this behavior in `sample` by adding the argument `replace = TRUE`:
```
sample(die, size = 2, replace = TRUE)
## 5 5
```
The argument `replace = TRUE` causes `sample` to sample *with replacement*. Our jar example provides a good way to understand the difference between sampling with replacement and without. When `sample` uses replacement, it draws a value from the jar and records the value. Then it puts the value back into the jar. In other words, `sample` *replaces* each value after each draw. As a result, `sample` may select the same value on the second draw. Each value has a chance of being selected each time. It is as if every draw were the first draw.
Sampling with replacement is an easy way to create *independent random samples*. Each value in your sample will be a sample of size one that is independent of the other values. This is the correct way to simulate a pair of dice:
```
sample(die, size = 2, replace = TRUE)
## 2 4
```
Congratulate yourself; you’ve just run your first simulation in R! You now have a method for simulating the result of rolling a pair of dice. If you want to add up the dice, you can feed your result straight into the `sum` function:
```
dice <- sample(die, size = 2, replace = TRUE)
dice
## 2 4
sum(dice)
## 6
```
What would happen if you call `dice` multiple times? Would R generate a new pair of dice values each time? Let’s give it a try:
```
dice
## 2 4
dice
## 2 4
dice
## 2 4
```
Nope. Each time you call `dice`, R will show you the result of that one time you called `sample` and saved the output to `dice`. R won’t rerun `sample(die, 2, replace = TRUE)` to create a new roll of the dice. This is a relief in a way. Once you save a set of results to an R object, those results do not change. Programming would be quite hard if the values of your objects changed each time you called them.
However, it *would* be convenient to have an object that can re\-roll the dice whenever you call it. You can make such an object by writing your own R function.
2\.4 Writing Your Own Functions
-------------------------------
To recap, you already have working R code that simulates rolling a pair of dice:
```
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
```
You can retype this code into the console anytime you want to re\-roll your dice. However, this is an awkward way to work with the code. It would be easier to use your code if you wrapped it into its own function, which is exactly what we’ll do now. We’re going to write a function named `roll` that you can use to roll your virtual dice. When you’re finished, the function will work like this: each time you call `roll()`, R will return the sum of rolling two dice:
```
roll()
## 8
roll()
## 3
roll()
## 7
```
Functions may seem mysterious or fancy, but they are just another type of R object. Instead of containing data, they contain code. This code is stored in a special format that makes it easy to reuse the code in new situations. You can write your own functions by recreating this format.
### 2\.4\.1 The Function Constructor
Every function in R has three basic parts: a name, a body of code, and a set of arguments. To make your own function, you need to replicate these parts and store them in an R object, which you can do with the `function` function. To do this, call `function()` and follow it with a pair of braces, `{}`:
```
my_function <- function() {}
```
`function` will build a function out of whatever R code you place between the braces. For example, you can turn your dice code into a function by calling:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
Notice that I indented each line of code between the braces. This makes the code easier for you and me to read but has no impact on how the code runs. R ignores spaces and line breaks and executes one complete expression at a time.
Just hit the Enter key between each line after the first brace, `{`. R will wait for you to type the last brace, `}`, before it responds.
Don’t forget to save the output of `function` to an R object. This object will become your new function. To use it, write the object’s name followed by an open and closed parenthesis:
```
roll()
## 9
```
You can think of the parentheses as the “trigger” that causes R to run the function. If you type in a function’s name *without* the parentheses, R will show you the code that is stored inside the function. If you type in the name *with* the parentheses, R will run that code:
```
roll
## function() {
## die <- 1:6
## dice <- sample(die, size = 2, replace = TRUE)
## sum(dice)
## }
roll()
## 6
```
The code that you place inside your function is known as the *body* of the function. When you run a function in R, R will execute all of the code in the body and then return the result of the last line of code. If the last line of code doesn’t return a value, neither will your function, so you want to ensure that your final line of code returns a value. One way to check this is to think about what would happen if you ran the body of code line by line at the command line. Would R display a result after the last line, or would it not?
Here’s some code that would display a result:
```
dice
1 + 1
sqrt(2)
```
And here’s some code that would not:
```
dice <- sample(die, size = 2, replace = TRUE)
two <- 1 + 1
a <- sqrt(2)
```
Do you notice the pattern? These lines of code do not return a value to the command line; they save a value to an object.
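To see what this means for a function, here is a minimal sketch (the name `no_result` is just for illustration). Because the last line of its body saves a value instead of returning one, calling the function displays nothing:
```
no_result <- function() {
  a <- sqrt(2)  # the last line saves a value to an object
}
no_result()     # nothing is displayed
```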
2\.5 Arguments
--------------
What if we removed one line of code from our function and changed the name `die` to `bones`, like this?
```
roll2 <- function() {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now I’ll get an error when I run the function. The function needs the object `bones` to do its job, but there is no object named `bones` to be found:
```
roll2()
## Error in sample(bones, size = 2, replace = TRUE) :
## object 'bones' not found
```
You can supply `bones` when you call `roll2` if you make `bones` an argument of the function. To do this, put the name `bones` in the parentheses that follow `function` when you define `roll2`:
```
roll2 <- function(bones) {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now `roll2` will work as long as you supply `bones` when you call the function. You can take advantage of this to roll different types of dice each time you call `roll2`. Dungeons and Dragons, here we come!
Remember, we’re rolling pairs of dice:
```
roll2(bones = 1:4)
## 3
roll2(bones = 1:6)
## 10
roll2(1:20)
## 31
```
Notice that `roll2` will still give an error if you do not supply a value for the `bones` argument when you call `roll2`:
```
roll2()
## Error in sample(bones, size = 2, replace = TRUE) :
## argument "bones" is missing, with no default
```
You can prevent this error by giving the `bones` argument a default value. To do this, set `bones` equal to a value when you define `roll2`:
```
roll2 <- function(bones = 1:6) {
dice <- sample(bones, size = 2, replace = TRUE)
sum(dice)
}
```
Now you can supply a new value for `bones` if you like, and `roll2` will use the default if you do not:
```
roll2()
## 9
```
You can give your functions as many arguments as you like. Just list their names, separated by commas, in the parentheses that follow `function`. When the function is run, R will replace each argument name in the function body with the value that the user supplies for the argument. If the user does not supply a value, R will replace the argument name with the argument’s default value (if you defined one).
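For example, here is a hypothetical `roll3` (the name and arguments are made up for illustration) that lets you choose both the die and the number of dice to roll:
```
roll3 <- function(bones = 1:6, ndice = 2) {
  dice <- sample(bones, size = ndice, replace = TRUE)
  sum(dice)
}

roll3()                         # rolls two six-sided dice
roll3(bones = 1:20, ndice = 3)  # rolls three twenty-sided dice
```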
To summarize, `function` helps you construct your own R functions. You create a body of code for your function to run by writing code between the braces that follow `function`. You create arguments for your function to use by supplying their names in the parentheses that follow `function`. Finally, you give your function a name by saving its output to an R object, as shown in Figure [2\.6](basics.html#fig:functions).
Once you’ve created your function, R will treat it like every other function in R. Think about how useful this is. Have you ever tried to create a new Excel option and add it to Microsoft’s menu bar? Or a new slide animation and add it to PowerPoint’s options? When you work with a programming language, you can do these types of things. As you learn to program in R, you will be able to create new, customized, reproducible tools for yourself whenever you like. [Project 3: Slot Machine](#slots) will teach you much more about writing functions in R.
Figure 2\.6: Every function in R has the same parts, and you can use `function` to create these parts. Assign the result to a name, so you can call the function later.
2\.6 Scripts
------------
What if you want to edit `roll2` again? You could go back and retype each line of code in `roll2`, but it would be so much easier if you had a draft of the code to start from. You can create a draft of your code as you go by using an R *script*. An R script is just a plain text file that you save R code in. You can open an R script in RStudio by going to `File > New File > R script` in the menu bar. RStudio will then open a fresh script above your console pane, as shown in Figure [2\.7](basics.html#fig:script).
I strongly encourage you to write and edit all of your R code in a script before you run it in the console. Why? This habit creates a reproducible record of your work. When you’re finished for the day, you can save your script and then use it to rerun your entire analysis the next day. Scripts are also very handy for editing and proofreading your code, and they make a nice copy of your work to share with others. To save a script, click the scripts pane, and then go to `File > Save As` in the menu bar.
Figure 2\.7: When you open an R Script (File \> New File \> R Script in the menu bar), RStudio creates a fourth pane above the console where you can write and edit your code.
RStudio comes with many built\-in features that make it easy to work with scripts. First, you can automatically execute a line of code in a script by clicking the Run button, as shown in Figure [2\.8](basics.html#fig:run).
R will run whichever line of code your cursor is on. If you have a whole section highlighted, R will run the highlighted code. Alternatively, you can run the entire script by clicking the Source button. Don’t like clicking buttons? You can use Control \+ Return as a shortcut for the Run button. On Macs, that would be Command \+ Return.
Figure 2\.8: You can run a highlighted portion of code in your script if you click the Run button at the top of the scripts pane. You can run the entire script by clicking the Source button.
If you’re not convinced about scripts, you soon will be. It becomes a pain to write multi\-line code in the console’s single\-line command line. Let’s avoid that headache and open your first script now before we move to the next chapter.
**Extract function**
RStudio comes with a tool that can help you build functions. To use it, highlight the lines of code in your R script that you want to turn into a function. Then click `Code > Extract Function` in the menu bar. RStudio will ask you for a function name to use and then wrap your code in a `function` call. It will scan the code for undefined variables and use these as arguments.
You may want to double\-check RStudio’s work. It assumes that your code is correct, so if it does something surprising, you may have a problem in your code.
2\.7 Summary
------------
You’ve covered a lot of ground already. You now have a virtual die stored in your computer’s memory, as well as your own R function that rolls a pair of dice. You’ve also begun speaking the R language.
As you’ve seen, R is a language that you can use to talk to your computer. You write commands in R and run them at the command line for your computer to read. Your computer will sometimes talk back (for example, when you commit an error), but it usually just does what you ask and then displays the result.
The two most important components of the R language are objects, which store data, and functions, which manipulate data. R also uses a host of operators like `+`, `-`, `*`, `/`, and `<-` to do basic tasks. As a data scientist, you will use R objects to store data in your computer’s memory, and you will use functions to automate tasks and do complicated calculations. We will examine objects in more depth later in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards) and dig further into functions in [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine). The vocabulary you have developed here will make each of those projects easier to understand. However, we’re not done with your dice yet.
In [Packages and Help Pages](packages.html#packages), you’ll run some simulations on your dice and build your first graphs in R. You’ll also look at two of the most useful components of the R language: R *packages*, which are collections of functions written by R’s talented community of developers, and R documentation, which is a collection of help pages built into R that explains every function and data set in the language.
3 Packages and Help Pages
=========================
You now have a function that simulates rolling a pair of dice. Let’s make things a little more interesting by weighting the dice in your favor. The house always wins, right? Let’s make the dice roll high numbers slightly more often than they roll low numbers.
Before we weight the dice, we should make sure that they are fair to begin with. Two tools will help you do this: *repetition* and *visualization*. By coincidence, these tools are also two of the most useful superpowers in the world of data science.
We will repeat our dice rolls with a function called `replicate`, and we will visualize our rolls with a function called `qplot`. `qplot` does not come with R when you download it; `qplot` comes in a standalone R package. Many of the most useful R tools come in R packages, so let’s take a moment to look at what R packages are and how you can use them.
3\.1 Packages
-------------
You’re not the only person writing your own functions with R. Many professors, programmers, and statisticians use R to design tools that can help people analyze data. They then make these tools free for anyone to use. To use these tools, you just have to download them. They come as preassembled collections of functions and objects called packages. [Appendix 2: R Packages](packages2.html#packages2) contains detailed instructions for downloading and updating R packages, but we’ll look at the basics here.
We’re going to use the `qplot` function to make some quick plots. `qplot` comes in the *ggplot2* package, a popular package for making graphs. Before you can use `qplot`, or anything else in the ggplot2 package, you need to download and install it.
### 3\.1\.1 install.packages
Each R package is hosted at [http://cran.r\-project.org](http://cran.r-project.org), the same website that hosts R. However, you don’t need to visit the website to download an R package; you can download packages straight from R’s command line. Here’s how:
* Open RStudio.
* Make sure you are connected to the Internet.
* Run *`install.packages("ggplot2")`* at the command line.
That’s it. R will have your computer visit the website, download ggplot2, and install the package in your hard drive right where R wants to find it. You now have the ggplot2 package. If you would like to install another package, replace ggplot2 with your package name in the code.
### 3\.1\.2 library
Installing a package doesn’t place its functions at your fingertips just yet: it simply places them in your hard drive. To use an R package, you next have to load it in your R session with the command *`library("ggplot2")`*. If you would like to load a different package, replace ggplot2 with your package name in the code.
To see what this does, try an experiment. First, ask R to show you the `qplot` function. R won’t be able to find `qplot` because `qplot` lives in the ggplot2 package, which you haven’t loaded:
```
qplot
## Error: object 'qplot' not found
```
Now load the ggplot2 package:
```
library("ggplot2")
```
If you installed the package with `install.packages` as instructed, everything should go fine. Don’t worry if you don’t see any results or messages. No news is fine news when loading a package. Don’t worry if you do see a message either; ggplot2 sometimes displays helpful start up messages. As long as you do not see anything that says “Error,” you are doing fine.
Now if you ask to see `qplot`, R will show you quite a bit of code (`qplot` is a long function):
```
qplot
## (quite a bit of code)
```
[Appendix 2: R Packages](packages2.html#packages2) contains many more details about acquiring and using packages. I recommend that you read it if you are unfamiliar with R’s package system. The main thing to remember is that you only need to install a package once, but you need to load it with `library` each time you wish to use it in a new R session. R will unload all of its packages each time you close RStudio.
Now that you’ve loaded `qplot`, let’s take it for a spin. `qplot` makes “quick plots.” If you give `qplot` two vectors of equal lengths, `qplot` will draw a scatterplot for you. `qplot` will use the first vector as a set of x values and the second vector as a set of y values. Look for the plot to appear in the Plots tab of the bottom\-right pane in your RStudio window.
The following code will make the plot that appears in Figure [3\.1](packages.html#fig:qplot). Until now, we’ve been creating sequences of numbers with the `:` operator; but you can also create vectors of numbers with the `c` function. Give `c` all of the numbers that you want to appear in the vector, separated by a comma. `c` stands for *concatenate*, but you can think of it as “collect” or “combine”:
```
x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1)
x
## -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
y <- x^3
y
## -1.000 -0.512 -0.216 -0.064 -0.008 0.000 0.008
## 0.064 0.216 0.512 1.000
qplot(x, y)
```
Figure 3\.1: qplot makes a scatterplot when you give it two vectors.
You don’t need to name your vectors `x` and `y`. I just did that to make the example clear. As you can see in Figure [3\.1](packages.html#fig:qplot), a scatterplot is a set of points, each plotted according to its x and y values. Together, the vectors `x` and `y` describe a set of 11 points. How did R match up the values in `x` and `y` to make these points? With element\-wise execution, as we saw in Figure [2\.3](basics.html#fig:elementwise).
Scatterplots are useful for visualizing the relationship between two variables. However, we’re going to use a different type of graph, a *histogram*. A histogram visualizes the distribution of a single variable; it displays how many data points appear at each value of x.
Let’s take a look at a histogram to see if this makes sense. `qplot` will make a histogram whenever you give it only *one* vector to plot. The following code makes the left\-hand plot in Figure [3\.2](packages.html#fig:hist) (we’ll worry about the right\-hand plot in just a second). To make sure our graphs look the same, use the extra argument *`binwidth = 1`*:
```
x <- c(1, 2, 2, 2, 3, 3)
qplot(x, binwidth = 1)
```
Figure 3\.2: qplot makes a histogram when you give it a single vector.
This plot shows that our vector contains one value in the interval `[1, 2)` by placing a bar of height 1 above that interval. Similarly, the plot shows that the vector contains three values in the interval `[2, 3)` by placing a bar of height 3 in that interval. It shows that the vector contains two values in the interval `[3, 4)` by placing a bar of height 2 in that interval. In these intervals, the hard bracket, `[`, means that the first number is included in the interval. The parenthesis, `)`, means that the last number is *not* included.
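If you would like to double\-check those bar heights numerically, `cut` and `table` will count the values in each interval for you (a quick aside; it isn’t needed to make the plot):
```
table(cut(x, breaks = 1:4, right = FALSE))
## [1,2) [2,3) [3,4)
##     1     3     2
```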
Let’s try another histogram. This code makes the right\-hand plot in Figure [3\.2](packages.html#fig:hist). Notice that there are five points with a value of 1 in `x2`. The histogram displays this by plotting a bar of height 5 above the interval `[1, 2)`:
```
x2 <- c(1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4)
qplot(x2, binwidth = 1)
```
**Exercise 3\.1 (Visualize a Histogram)** Let `x3` be the following vector:
`x3 <- c(0, 1, 1, 2, 2, 2, 3, 3, 4)`
Imagine what a histogram of `x3` would look like. Assume that the histogram has a bin width of 1\. How many bars will the histogram have? Where will they appear? How high will each be?
When you are done, plot a histogram of `x3` with `binwidth = 1`, and see if you are right.
*Solution.* You can make a histogram of `x3` with `qplot(x3, binwidth = 1)`. The histogram will look like a symmetric pyramid. The middle bar will have a height of 3 and will appear above `[2, 3)`, but be sure to try it and see for yourself.
You can use a histogram to display visually how common different values of `x` are. Numbers covered by a tall bar are more common than numbers covered by a short bar.
How can you use a histogram to check the accuracy of your dice?
Well, if you roll your dice many times and keep track of the results, you would expect some numbers to occur more than others. This is because there are more ways to get some numbers by adding two dice together than to get other numbers, as shown in Figure [3\.3](packages.html#fig:probs).
If you roll your dice many times and plot the results with `qplot`, the histogram will show you how often each sum appeared. The sums that occurred most often will have the highest bars. The histogram should look like the pattern in Figure [3\.3](packages.html#fig:probs) if the dice are fairly weighted.
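You can verify the counting argument behind Figure [3\.3](packages.html#fig:probs) with one line of R: `outer` builds every possible pairing of two dice, and `table` tallies how many pairings produce each sum:
```
table(outer(1:6, 1:6, "+"))
##  2  3  4  5  6  7  8  9 10 11 12
##  1  2  3  4  5  6  5  4  3  2  1
```
Seven can be made six different ways, which is why it should appear most often.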
This is where `replicate` comes in. `replicate` provides an easy way to repeat an R command many times. To use it, first give `replicate` the number of times you wish to repeat an R command, and then give it the command you wish to repeat. `replicate` will run the command multiple times and store the results as a vector:
```
replicate(3, 1 + 1)
## 2 2 2
replicate(10, roll())
## 3 7 5 3 6 2 3 8 11 7
```
Figure 3\.3: Each individual dice combination should occur with the same frequency. As a result, some sums will occur more often than others. With fair dice, each sum should appear in proportion to the number of combinations that make it.
A histogram of your first 10 rolls probably won’t look like the pattern shown in Figure [3\.3](packages.html#fig:probs). Why not? There is too much randomness involved. Remember that we use dice in real life because they are effective random number generators. Patterns of long run frequencies will only appear *over the long run*. So let’s simulate 10,000 dice rolls and plot the results. Don’t worry; `qplot` and `replicate` can handle it. The results appear in Figure [3\.4](packages.html#fig:fair):
```
rolls <- replicate(10000, roll())
qplot(rolls, binwidth = 1)
```
The results suggest that the dice are fair. Over the long run, each number occurs in proportion to the number of combinations that generate it.
Now how can you bias these results? The previous pattern occurs because each underlying combination of dice (e.g., (3,4\)) occurs with the same frequency. If you could increase the probability that a 6 is rolled on either die, then any combination with a six in it will occur more often than any combination without a six in it. The combination (6, 6\) would occur most of all. This won’t make the dice add up to 12 more often than they add up to seven, but it will skew the results toward the higher numbers.
Figure 3\.4: The behavior of our dice suggests that they are fair. Seven occurs more often than any other number, and frequencies diminish in proportion to the number of die combinations that create each number.
To put it another way, the probability of rolling any single number on a fair die is 1/6\. I’d like you to change the probability to 1/8 for each number below six, and then increase the probability of rolling a six to 3/8:
| Number | Fair probability | Weighted probability |
| --- | --- | --- |
| 1 | 1/6 | 1/8 |
| 2 | 1/6 | 1/8 |
| 3 | 1/6 | 1/8 |
| 4 | 1/6 | 1/8 |
| 5 | 1/6 | 1/8 |
| 6 | 1/6 | 3/8 |
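As a quick sanity check on these weights before we use them: they sum to one, and they raise the expected value of a single die from 3.5 to 4.125, so a pair of weighted dice should average about 8.25 instead of 7:
```
probs <- c(1/8, 1/8, 1/8, 1/8, 1/8, 3/8)
sum(probs)        # the weights sum to one
## 1
sum(1:6 * probs)  # expected value of one weighted die
## 4.125
```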
You can change the probabilities by adding a new argument to the `sample` function. I’m not going to tell you what the argument is; instead I’ll point you to the help page for the `sample` function. What’s that? R functions come with help pages? Yes they do, so let’s learn how to read one.
3\.2 Getting Help with Help Pages
---------------------------------
There are over 1,000 functions at the core of R, and new R functions are created all of the time. This can be a lot of material to memorize and learn! Luckily, each R function comes with its own help page, which you can access by typing the function’s name after a question mark. For example, each of these commands will open a help page. Look for the pages to appear in the Help tab of RStudio’s bottom\-right pane:
```
?sqrt
?log10
?sample
```
Help pages contain useful information about what each function does. These help pages also serve as code documentation, so reading them can be bittersweet. They often seem to be written for people who already understand the function and do not need help.
Don’t let this bother you—you can gain a lot from a help page by scanning it for information that makes sense and glossing over the rest. This technique will inevitably bring you to the most helpful part of each help page: the bottom. Here, almost every help page includes some example code that puts the function in action. Running this code is a great way to learn by example.
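One related shortcut: base R’s `example` function runs the code from a help page’s Examples section for you, which saves some copying and pasting:
```
example(sample)  # runs the demo code from the bottom of ?sample
```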
If a function comes in an R package, R won’t be able to find its help page unless the package is loaded.
### 3\.2\.1 Parts of a Help Page
Each help page is divided into sections. Which sections appear can vary from help page to help page, but you can usually expect to find these useful topics:
**Description** \- A short summary of what the function does.
**Usage** \- An example of how you would type the function. Each argument of the function will appear in the order R expects you to supply it (if you don’t use argument names).
**Arguments** \- A list of each argument the function takes, what type of information R expects you to supply for the argument, and what the function will do with the information.
**Details** \- A more in\-depth description of the function and how it operates. The details section also gives the function author a chance to alert you to anything you might want to know when using the function.
**Value** \- A description of what the function returns when you run it.
**See Also** \- A short list of related R functions.
**Examples** \- Example code that uses the function and is guaranteed to work. The examples section of a help page usually demonstrates a couple different ways to use a function. This helps give you an idea of what the function is capable of.
If you’d like to look up the help page for a function but have forgotten the function’s name, you can search by keyword. To do this, type two question marks followed by a keyword in R’s command line. R will pull up a list of links to help pages related to the keyword. You can think of this as the help page for the help page:
```
??log
```
Let’s take a stroll through `sample`’s help page. Remember: we’re searching for anything that could help you change the probabilities involved in the sampling process. I’m not going to reproduce the whole help page here (just the juiciest parts), so you should follow along on your computer.
First, open the help page. It will appear in the same pane in RStudio as your plots did (but in the Help tab, not the Plots tab):
```
?sample
```
What do you see? Starting from the top:
```
Random Samples and Permutations
Description
sample takes a sample of the specified size from the elements of x using
either with or without replacement.
```
So far, so good. You knew all of that. The next section, Usage, has a possible clue. It mentions an argument called `prob`:
```
Usage
sample(x, size, replace = FALSE, prob = NULL)
```
If you scroll down to the arguments section, the description of `prob` sounds *very* promising:
```
A vector of probability weights for obtaining the elements of the vector being
sampled.
```
The Details section confirms our suspicions. In this case, it also tells you how to proceed:
```
The optional prob argument can be used to give a vector of weights for obtaining
the elements of the vector being sampled. They need not sum to one, but they
should be nonnegative and not all zero.
```
Although the help page does not say it here, these weights will be matched up to the elements being sampled in element\-wise fashion. The first weight will describe the first element, the second weight the second element, and so on. This is common practice in R.
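Here is a small illustration of that matching (the labels are made up for the example). The first weight applies to the first element and the second weight to the second, so `"heavy"` turns up far more often than `"light"`; your exact draws will differ:
```
sample(c("heavy", "light"), size = 5, replace = TRUE,
       prob = c(0.9, 0.1))
## "heavy" "heavy" "light" "heavy" "heavy"
```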
Reading on:
```
If replace is true, Walker's alias method (Ripley, 1987) is used...
```
Okay, that looks like a good time to start skimming. We should have enough information now to figure out how to weight our dice.
**Exercise 3\.2 (Roll a Pair of Dice)** Rewrite the `roll` function below to roll a pair of weighted dice:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
You will need to add a `prob` argument to the `sample` function inside of `roll`. This argument should tell `sample` to sample the numbers one through five with probability 1/8 and the number 6 with probability 3/8\.
When you are finished, read on for a model answer.
*Solution.* To weight your dice, you need to add a `prob` argument with a vector of weights to `sample`, like this:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE,
prob = c(1/8, 1/8, 1/8, 1/8, 1/8, 3/8))
sum(dice)
}
```
This will cause `roll` to pick 1 through 5 with probability 1/8 and 6 with probability 3/8\.
Overwrite your previous version of `roll` with the new function (by running the previous code snippet in your command line). Then visualize the new long\-term behavior of your dice. I’ve put the results in Figure [3\.5](packages.html#fig:weighted) next to our original results:
```
rolls <- replicate(10000, roll())
qplot(rolls, binwidth = 1)
```
This confirms that we’ve effectively weighted the dice. High numbers occur much more often than low numbers. The remarkable thing is that this behavior will only be apparent when you examine long\-term frequencies. On any single roll, the dice will appear to behave randomly. This is great news if you play Settlers of Catan (just tell your friends you lost the dice), but it should be disturbing if you analyze data, because it means that bias can easily occur without anyone noticing it in the short run.
Figure 3\.5: The dice are now clearly biased towards high numbers, since high sums occur much more often than low sums.
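A numeric check tells the same story as the plot (assuming `rolls` still holds the 10,000 weighted rolls from above). The average sum should land near 8.25 rather than the 7 you would expect from fair dice; your exact value will vary:
```
mean(rolls)
## 8.2625
```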
### 3\.2\.2 Getting More Help
R also comes with a super active community of users that you can turn to for [help on the R\-help mailing list](http://bit.ly/r-help). You can email the list with questions, but there’s a great chance that your question has already been answered. Find out by searching the [archives](http://bit.ly/R_archives).
Even better than the R\-help list is [Stack Overflow](http://stackoverflow.com), a website that allows programmers to answer questions and users to rank answers based on helpfulness. Personally, I find the Stack Overflow format to be more user\-friendly than the R\-help email list (and the respondents to be more human friendly). You can submit your own question or search through Stack Overflow’s previously answered questions related to R. There are over 30,000\.
Best of all is [community.rstudio.com](http://community.rstudio.com), a friendly, inclusive place to share questions related to R. community.rstudio.com is a very active forum focused on R. Don’t be surprised if you ask a question about an R package, and the author of the package shows up to answer.
For all of the R help list, Stack Overflow, and community.rstudio.com, you’re more likely to get a useful answer if you provide a reproducible example with your question. This means pasting in a short snippet of code that users can run to arrive at the bug or question you have in mind.
3\.3 Summary
------------
R’s packages and help pages can make you a more productive programmer. You saw in [The Very Basics](basics.html#basics) that R gives you the power to write your own functions that do specific things, but often the function that you want to write will already exist in an R package. Professors, programmers, and scientists have developed over 13,000 packages for you to use, which can save you valuable programming time. To use a package, you need to install it to your computer once with `install.packages`, and then load it into each new R session with `library`.
R’s help pages will help you master the functions that appear in R and its packages. Each function and data set in R has its own help page. Although help pages often contain advanced content, they also contain valuable clues and examples that can help you learn how to use a function.
You have now seen enough of R to learn by doing, which is the best way to learn R. You can make your own R commands, run them, and get help when you need to understand something that I have not explained. I encourage you to experiment with your own ideas in R as you read through the next two projects.
3\.4 Project 1 Wrap\-up
-----------------------
You’ve done more in this project than enable fraud and gambling; you’ve also learned how to speak to your computer in the language of R. R is a language like English, Spanish, or German, except R helps you talk to computers, not humans.
You’ve met the nouns of the R language, objects. And hopefully you guessed that functions are the verbs (I suppose function arguments would be the adverbs). When you combine functions and objects, you express a complete thought. By stringing thoughts together in a logical sequence, you can build eloquent, even artistic statements. In that respect, R is not that different from any other language.
R shares another characteristic of human languages: you won’t feel very comfortable speaking R until you build up a vocabulary of R commands to use. Fortunately, you don’t have to be bashful. Your computer will be the only one to “hear” you speak R. Your computer is not very forgiving, but it also doesn’t judge. Not that you need to worry; you’ll broaden your R vocabulary tremendously between here and the end of the book.
Now that you can use R, it is time to become an expert at using R to do data science. The foundation of data science is the ability to store large amounts of data and recall values on demand. From this, all else follows—manipulating data, visualizing data, modeling data, and more. However, you cannot easily store a data set in your mind by memorizing it. Nor can you easily store a data set on paper by writing it down. The only efficient way to store large amounts of data is with a computer. In fact, computers are so efficient that their development over the last three decades has completely changed the type of data we can accumulate and the methods we can use to analyze it. In short, computer data storage has driven the revolution in science that we call data science.
[Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards) will make you part of this revolution by teaching you how to use R to store data sets in your computer’s memory and how to retrieve and manipulate data once it’s there.
3\.1 Packages
-------------
You’re not the only person writing your own functions with R. Many professors, programmers, and statisticians use R to design tools that can help people analyze data. They then make these tools free for anyone to use. To use these tools, you just have to download them. They come as preassembled collections of functions and objects called packages. [Appendix 2: R Packages](packages2.html#packages2) contains detailed instructions for downloading and updating R packages, but we’ll look at the basics here.
We’re going to use the `qplot` function to make some quick plots. `qplot` comes in the *ggplot2* package, a popular package for making graphs. Before you can use `qplot`, or anything else in the ggplot2 package, you need to download and install it.
### 3\.1\.1 install.packages
Each R package is hosted at [http://cran.r\-project.org](http://cran.r-project.org), the same website that hosts R. However, you don’t need to visit the website to download an R package; you can download packages straight from R’s command line. Here’s how:
* Open RStudio.
* Make sure you are connected to the Internet.
* Run *`install.packages("ggplot2")`* at the command line.
That’s it. R will have your computer visit the website, download ggplot2, and install the package in your hard drive right where R wants to find it. You now have the ggplot2 package. If you would like to install another package, replace ggplot2 with your package name in the code.
### 3\.1\.2 library
Installing a package doesn’t place its functions at your fingertips just yet: it simply places them in your hard drive. To use an R package, you next have to load it in your R session with the command *`library("ggplot2")`*. If you would like to load a different package, replace ggplot2 with your package name in the code.
To see what this does, try an experiment. First, ask R to show you the `qplot` function. R won’t be able to find `qplot` because `qplot` lives in the ggplot2 package, which you haven’t loaded:
```
qplot
## Error: object 'qplot' not found
```
Now load the ggplot2 package:
```
library("ggplot2")
```
If you installed the package with `install.packages` as instructed, everything should go fine. Don’t worry if you don’t see any results or messages. No news is fine news when loading a package. Don’t worry if you do see a message either; ggplot2 sometimes displays helpful start up messages. As long as you do not see anything that says “Error,” you are doing fine.
Now if you ask to see `qplot`, R will show you quite a bit of code (`qplot` is a long function):
```
qplot
## (quite a bit of code)
```
[Appendix 2: R Packages](packages2.html#packages2) contains many more details about acquiring and using packages. I recommend that you read it if you are unfamiliar with R’s package system. The main thing to remember is that you only need to install a package once, but you need to load it with `library` each time you wish to use it in a new R session. R will unload all of its packages each time you close RStudio.
Now that you’ve loaded `qplot`, let’s take it for a spin. `qplot` makes “quick plots.” If you give `qplot` two vectors of equal lengths, `qplot` will draw a scatterplot for you. `qplot` will use the first vector as a set of x values and the second vector as a set of y values. Look for the plot to appear in the Plots tab of the bottom\-right pane in your RStudio window.
The following code will make the plot that appears in Figure [3\.1](packages.html#fig:qplot). Until now, we’ve been creating sequences of numbers with the `:` operator; but you can also create vectors of numbers with the `c` function. Give `c` all of the numbers that you want to appear in the vector, separated by a comma. `c` stands for *concatenate*, but you can think of it as “collect” or “combine”:
```
x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1)
x
## -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
y <- x^3
y
## -1.000 -0.512 -0.216 -0.064 -0.008 0.000 0.008
## 0.064 0.216 0.512 1.000
qplot(x, y)
```
Figure 3\.1: qplot makes a scatterplot when you give it two vectors.
You don’t need to name your vectors `x` and `y`. I just did that to make the example clear. As you can see in Figure [3\.1](packages.html#fig:qplot), a scatterplot is a set of points, each plotted according to its x and y values. Together, the vectors `x` and `y` describe a set of 10 points. How did R match up the values in `x` and `y` to make these points? With element\-wise execution, as we saw in Figure [2\.3](basics.html#fig:elementwise).
Scatterplots are useful for visualizing the relationship between two variables. However, we’re going to use a different type of graph, a *histogram*. A histogram visualizes the distribution of a single variable; it displays how many data points appear at each value of x.
Let’s take a look at a histogram to see if this makes sense. `qplot` will make a histogram whenever you give it only *one* vector to plot. The following code makes the left\-hand plot in Figure [3\.2](packages.html#fig:hist) (we’ll worry about the right\-hand plot in just second). To make sure our graphs look the same, use the extra argument *`binwidth = 1`*:
```
x <- c(1, 2, 2, 2, 3, 3)
qplot(x, binwidth = 1)
```
Figure 3\.2: qplot makes a histogram when you give it a single vector.
This plot shows that our vector contains one value in the interval `[1, 2)` by placing a bar of height 1 above that interval. Similarly, the plot shows that the vector contains three values in the interval `[2, 3)` by placing a bar of height 3 in that interval. It shows that the vector contains two values in the interval `[3, 4)` by placing a bar of height 2 in that interval. In these intervals, the hard bracket, `[`, means that the first number is included in the interval. The parenthesis, `)`, means that the last number is *not* included.
Let’s try another histogram. This code makes the right\-hand plot in Figure [3\.2](packages.html#fig:hist). Notice that there are five points with a value of 1 in `x2`. The histogram displays this by plotting a bar of height 5 above the interval x2 \= \[1, 2\):
```
x2 <- c(1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4)
qplot(x2, binwidth = 1)
```
**Exercise 3\.1 (Visualize a Histogram)** Let `x3` be the following vector:
`x3 <- c(0, 1, 1, 2, 2, 2, 3, 3, 4)`
Imagine what a histogram of `x3` would look like. Assume that the histogram has a bin width of 1\. How many bars will the histogram have? Where will they appear? How high will each be?
When you are done, plot a histogram of `x3` with `binwidth = 1`, and see if you are right.
*Solution.* You can make a histogram of `x3` with `qplot(x3, binwidth = 1)`. The histogram will look like a symmetric pyramid. The middle bar will have a height of 3 and will appear above `[2, 3)`, but be sure to try it and see for yourself.
You can use a histogram to display visually how common different values of `x` are. Numbers covered by a tall bar are more common than numbers covered by a short bar.
How can you use a histogram to check the accuracy of your dice?
Well, if you roll your dice many times and keep track of the results, you would expect some numbers to occur more than others. This is because there are more ways to get some numbers by adding two dice together than to get other numbers, as shown in Figure [3\.3](packages.html#fig:probs).
If you roll your dice many times and plot the results with `qplot`, the histogram will show you how often each sum appeared. The sums that occurred most often will have the highest bars. The histogram should look like the pattern in Figure [3\.3](packages.html#fig:probs) if the dice are fairly weighted.
This is where `replicate` comes in. `replicate` provides an easy way to repeat an R command many times. To use it, first give `replicate` the number of times you wish to repeat an R command, and then give it the command you wish to repeat. `replicate` will run the command multiple times and store the results as a vector:
```
replicate(3, 1 + 1)
## 2 2 2
replicate(10, roll())
## 3 7 5 3 6 2 3 8 11 7
```
Figure 3\.3: Each individual dice combination should occur with the same frequency. As a result, some sums will occur more often than others. With fair dice, each sum should appear in proportion to the number of combinations that make it.
A histogram of your first 10 rolls probably won’t look like the pattern shown in Figure [3\.3](packages.html#fig:probs). Why not? There is too much randomness involved. Remember that we use dice in real life because they are effective random number generators. Patterns of long run frequencies will only appear *over the long run*. So let’s simulate 10,000 dice rolls and plot the results. Don’t worry; `qplot` and `replicate` can handle it. The results appear in Figure [3\.4](packages.html#fig:fair):
```
rolls <- replicate(10000, roll())
qplot(rolls, binwidth = 1)
```
The results suggest that the dice are fair. Over the long run, each number occurs in proportion to the number of combinations that generate it.
Now how can you bias these results? The previous pattern occurs because each underlying combination of dice (e.g., (3,4\)) occurs with the same frequency. If you could increase the probability that a 6 is rolled on either die, then any combination with a six in it will occur more often than any combination without a six in it. The combination (6, 6\) would occur most of all. This won’t make the dice add up to 12 more often than they add up to seven, but it will skew the results toward the higher numbers.
Figure 3\.4: The behavior of our dice suggests that they are fair. Seven occurs more often than any other number, and frequencies diminish in proportion to the number of die combinations that create each number.
To put it another way, the probability of rolling any single number on a fair die is 1/6\. I’d like you to change the probability to 1/8 for each number below six, and then increase the probability of rolling a six to 3/8:
| Number | Fair probability | Weighted probability |
| --- | --- | --- |
| 1 | 1/6 | 1/8 |
| 2 | 1/6 | 1/8 |
| 3 | 1/6 | 1/8 |
| 4 | 1/6 | 1/8 |
| 5 | 1/6 | 1/8 |
| 6 | 1/6 | 3/8 |
You can change the probabilities by adding a new argument to the `sample` function. I’m not going to tell you what the argument is; instead I’ll point you to the help page for the `sample` function. What’s that? R functions come with help pages? Yes they do, so let’s learn how to read one.
### 3\.1\.1 install.packages
Each R package is hosted at [http://cran.r\-project.org](http://cran.r-project.org), the same website that hosts R. However, you don’t need to visit the website to download an R package; you can download packages straight from R’s command line. Here’s how:
* Open RStudio.
* Make sure you are connected to the Internet.
* Run *`install.packages("ggplot2")`* at the command line.
That’s it. R will have your computer visit the website, download ggplot2, and install the package in your hard drive right where R wants to find it. You now have the ggplot2 package. If you would like to install another package, replace ggplot2 with your package name in the code.
### 3\.1\.2 library
Installing a package doesn’t place its functions at your fingertips just yet: it simply places them in your hard drive. To use an R package, you next have to load it in your R session with the command *`library("ggplot2")`*. If you would like to load a different package, replace ggplot2 with your package name in the code.
To see what this does, try an experiment. First, ask R to show you the `qplot` function. R won’t be able to find `qplot` because `qplot` lives in the ggplot2 package, which you haven’t loaded:
```
qplot
## Error: object 'qplot' not found
```
Now load the ggplot2 package:
```
library("ggplot2")
```
If you installed the package with `install.packages` as instructed, everything should go fine. Don’t worry if you don’t see any results or messages. No news is fine news when loading a package. Don’t worry if you do see a message either; ggplot2 sometimes displays helpful start up messages. As long as you do not see anything that says “Error,” you are doing fine.
Now if you ask to see `qplot`, R will show you quite a bit of code (`qplot` is a long function):
```
qplot
## (quite a bit of code)
```
[Appendix 2: R Packages](packages2.html#packages2) contains many more details about acquiring and using packages. I recommend that you read it if you are unfamiliar with R’s package system. The main thing to remember is that you only need to install a package once, but you need to load it with `library` each time you wish to use it in a new R session. R will unload all of its packages each time you close RStudio.
Now that you’ve loaded `qplot`, let’s take it for a spin. `qplot` makes “quick plots.” If you give `qplot` two vectors of equal lengths, `qplot` will draw a scatterplot for you. `qplot` will use the first vector as a set of x values and the second vector as a set of y values. Look for the plot to appear in the Plots tab of the bottom\-right pane in your RStudio window.
The following code will make the plot that appears in Figure [3\.1](packages.html#fig:qplot). Until now, we’ve been creating sequences of numbers with the `:` operator; but you can also create vectors of numbers with the `c` function. Give `c` all of the numbers that you want to appear in the vector, separated by a comma. `c` stands for *concatenate*, but you can think of it as “collect” or “combine”:
```
x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1)
x
## -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
y <- x^3
y
## -1.000 -0.512 -0.216 -0.064 -0.008 0.000 0.008
## 0.064 0.216 0.512 1.000
qplot(x, y)
```
Figure 3\.1: qplot makes a scatterplot when you give it two vectors.
You don’t need to name your vectors `x` and `y`. I just did that to make the example clear. As you can see in Figure [3\.1](packages.html#fig:qplot), a scatterplot is a set of points, each plotted according to its x and y values. Together, the vectors `x` and `y` describe a set of 10 points. How did R match up the values in `x` and `y` to make these points? With element\-wise execution, as we saw in Figure [2\.3](basics.html#fig:elementwise).
Scatterplots are useful for visualizing the relationship between two variables. However, we’re going to use a different type of graph, a *histogram*. A histogram visualizes the distribution of a single variable; it displays how many data points appear at each value of x.
Let’s take a look at a histogram to see if this makes sense. `qplot` will make a histogram whenever you give it only *one* vector to plot. The following code makes the left\-hand plot in Figure [3\.2](packages.html#fig:hist) (we’ll worry about the right\-hand plot in just second). To make sure our graphs look the same, use the extra argument *`binwidth = 1`*:
```
x <- c(1, 2, 2, 2, 3, 3)
qplot(x, binwidth = 1)
```
Figure 3\.2: qplot makes a histogram when you give it a single vector.
This plot shows that our vector contains one value in the interval `[1, 2)` by placing a bar of height 1 above that interval. Similarly, the plot shows that the vector contains three values in the interval `[2, 3)` by placing a bar of height 3 in that interval. It shows that the vector contains two values in the interval `[3, 4)` by placing a bar of height 2 in that interval. In these intervals, the hard bracket, `[`, means that the first number is included in the interval. The parenthesis, `)`, means that the last number is *not* included.
Let’s try another histogram. This code makes the right\-hand plot in Figure [3\.2](packages.html#fig:hist). Notice that there are five points with a value of 1 in `x2`. The histogram displays this by plotting a bar of height 5 above the interval x2 \= \[1, 2\):
```
x2 <- c(1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4)
qplot(x2, binwidth = 1)
```
**Exercise 3\.1 (Visualize a Histogram)** Let `x3` be the following vector:
`x3 <- c(0, 1, 1, 2, 2, 2, 3, 3, 4)`
Imagine what a histogram of `x3` would look like. Assume that the histogram has a bin width of 1\. How many bars will the histogram have? Where will they appear? How high will each be?
When you are done, plot a histogram of `x3` with `binwidth = 1`, and see if you are right.
*Solution.* You can make a histogram of `x3` with `qplot(x3, binwidth = 1)`. The histogram will look like a symmetric pyramid. The middle bar will have a height of 3 and will appear above `[2, 3)`, but be sure to try it and see for yourself.
You can use a histogram to display visually how common different values of `x` are. Numbers covered by a tall bar are more common than numbers covered by a short bar.
How can you use a histogram to check the accuracy of your dice?
Well, if you roll your dice many times and keep track of the results, you would expect some numbers to occur more than others. This is because there are more ways to get some numbers by adding two dice together than to get other numbers, as shown in Figure [3\.3](packages.html#fig:probs).
If you roll your dice many times and plot the results with `qplot`, the histogram will show you how often each sum appeared. The sums that occurred most often will have the highest bars. The histogram should look like the pattern in Figure [3\.3](packages.html#fig:probs) if the dice are fairly weighted.
This is where `replicate` comes in. `replicate` provides an easy way to repeat an R command many times. To use it, first give `replicate` the number of times you wish to repeat an R command, and then give it the command you wish to repeat. `replicate` will run the command multiple times and store the results as a vector:
```
replicate(3, 1 + 1)
## 2 2 2
replicate(10, roll())
## 3 7 5 3 6 2 3 8 11 7
```
Figure 3\.3: Each individual dice combination should occur with the same frequency. As a result, some sums will occur more often than others. With fair dice, each sum should appear in proportion to the number of combinations that make it.
A histogram of your first 10 rolls probably won’t look like the pattern shown in Figure [3\.3](packages.html#fig:probs). Why not? There is too much randomness involved. Remember that we use dice in real life because they are effective random number generators. Patterns of long run frequencies will only appear *over the long run*. So let’s simulate 10,000 dice rolls and plot the results. Don’t worry; `qplot` and `replicate` can handle it. The results appear in Figure [3\.4](packages.html#fig:fair):
```
rolls <- replicate(10000, roll())
qplot(rolls, binwidth = 1)
```
The results suggest that the dice are fair. Over the long run, each number occurs in proportion to the number of combinations that generate it.
Now how can you bias these results? The previous pattern occurs because each underlying combination of dice (e.g., (3,4\)) occurs with the same frequency. If you could increase the probability that a 6 is rolled on either die, then any combination with a six in it will occur more often than any combination without a six in it. The combination (6, 6\) would occur most of all. This won’t make the dice add up to 12 more often than they add up to seven, but it will skew the results toward the higher numbers.
Figure 3\.4: The behavior of our dice suggests that they are fair. Seven occurs more often than any other number, and frequencies diminish in proportion to the number of die combinations that create each number.
To put it another way, the probability of rolling any single number on a fair die is 1/6\. I’d like you to change the probability to 1/8 for each number below six, and then increase the probability of rolling a six to 3/8:
| Number | Fair probability | Weighted probability |
| --- | --- | --- |
| 1 | 1/6 | 1/8 |
| 2 | 1/6 | 1/8 |
| 3 | 1/6 | 1/8 |
| 4 | 1/6 | 1/8 |
| 5 | 1/6 | 1/8 |
| 6 | 1/6 | 3/8 |
You can change the probabilities by adding a new argument to the `sample` function. I’m not going to tell you what the argument is; instead I’ll point you to the help page for the `sample` function. What’s that? R functions come with help pages? Yes they do, so let’s learn how to read one.
3\.2 Getting Help with Help Pages
---------------------------------
There are over 1,000 functions at the core of R, and new R functions are created all of the time. This can be a lot of material to memorize and learn! Luckily, each R function comes with its own help page, which you can access by typing the function’s name after a question mark. For example, each of these commands will open a help page. Look for the pages to appear in the Help tab of RStudio’s bottom\-right pane:
```
?sqrt
?log10
?sample
```
Help pages contain useful information about what each function does. These help pages also serve as code documentation, so reading them can be bittersweet. They often seem to be written for people who already understand the function and do not need help.
Don’t let this bother you—you can gain a lot from a help page by scanning it for information that makes sense and glossing over the rest. This technique will inevitably bring you to the most helpful part of each help page: the bottom. Here, almost every help page includes some example code that puts the function in action. Running this code is a great way to learn by example.
If a function comes in an R package, R won’t be able to find its help page unless the package is loaded.
### 3\.2\.1 Parts of a Help Page
Each help page is divided into sections. Which sections appear can vary from help page to help page, but you can usually expect to find these useful topics:
**Description** \- A short summary of what the function does.
**Usage** \- An example of how you would type the function. Each argument of the function will appear in the order R expects you to supply it (if you don’t use argument names).
**Arguments** \- A list of each argument the function takes, what type of information R expects you to supply for the argument, and what the function will do with the information.
**Details** \- A more in\-depth description of the function and how it operates. The details section also gives the function author a chance to alert you to anything you might want to know when using the function.
**Value** \- A description of what the function returns when you run it.
**See Also** \- A short list of related R functions.
**Examples** \- Example code that uses the function and is guaranteed to work. The examples section of a help page usually demonstrates a couple different ways to use a function. This helps give you an idea of what the function is capable of.
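One related trick worth knowing (this uses base R's `example` function, which the chapter doesn't otherwise cover): you can run the code from a help page's Examples section directly from the command line:

```
example(sample)
## runs the code in the Examples section of sample's help page
```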
If you’d like to look up the help page for a function but have forgotten the function’s name, you can search by keyword. To do this, type two question marks followed by a keyword in R’s command line. R will pull up a list of links to help pages related to the keyword. You can think of this as the help page for the help page:
```
??log
```
Let’s take a stroll through `sample`’s help page. Remember: we’re searching for anything that could help you change the probabilities involved in the sampling process. I’m not going to reproduce the whole help page here (just the juiciest parts), so you should follow along on your computer.
First, open the help page. It will appear in the same pane in RStudio as your plots did (but in the Help tab, not the Plots tab):
```
?sample
```
What do you see? Starting from the top:
```
Random Samples and Permutations
Description
sample takes a sample of the specified size from the elements of x using
either with or without replacement.
```
So far, so good. You knew all of that. The next section, Usage, has a possible clue. It mentions an argument called `prob`:
```
Usage
sample(x, size, replace = FALSE, prob = NULL)
```
If you scroll down to the arguments section, the description of `prob` sounds *very* promising:
```
A vector of probability weights for obtaining the elements of the vector being
sampled.
```
The Details section confirms our suspicions. In this case, it also tells you how to proceed:
```
The optional prob argument can be used to give a vector of weights for obtaining
the elements of the vector being sampled. They need not sum to one, but they
should be nonnegative and not all zero.
```
Although the help page does not say it here, these weights will be matched up to the elements being sampled in element\-wise fashion. The first weight will describe the first element, the second weight the second element, and so on. This is common practice in R.
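Here is a minimal sketch of that pairing (the letters and weights are purely illustrative): the first weight belongs to `"a"`, so `"a"` should turn up far more often than `"b"` or `"c"`:

```
sample(c("a", "b", "c"), size = 10, replace = TRUE,
       prob = c(0.8, 0.1, 0.1))
## "a" "a" "c" "a" "a" "a" "b" "a" "a" "a" (your results will vary)
```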
Reading on:
```
If replace is true, Walker's alias method (Ripley, 1987) is used...
```
Okay, that looks like a good time to start skimming. We should have enough information now to figure out how to weight our dice.
**Exercise 3\.2 (Roll a Pair of Dice)** Rewrite the `roll` function below to roll a pair of weighted dice:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
You will need to add a `prob` argument to the `sample` function inside of `roll`. This argument should tell `sample` to sample the numbers one through five with probability 1/8 and the number 6 with probability 3/8\.
When you are finished, read on for a model answer.
*Solution.* To weight your dice, you need to add a `prob` argument with a vector of weights to `sample`, like this:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE,
prob = c(1/8, 1/8, 1/8, 1/8, 1/8, 3/8))
sum(dice)
}
```
This will cause `roll` to pick 1 through 5 with probability 1/8 and 6 with probability 3/8\.
Overwrite your previous version of `roll` with the new function (by running the previous code snippet in your command line). Then visualize the new long\-term behavior of your dice. I’ve put the results in Figure [3\.5](packages.html#fig:weighted) next to our original results:
```
rolls <- replicate(10000, roll())
qplot(rolls, binwidth = 1)
```
This confirms that we’ve effectively weighted the dice. High numbers occur much more often than low numbers. The remarkable thing is that this behavior will only be apparent when you examine long\-term frequencies. On any single roll, the dice will appear to behave randomly. This is great news if you play Settlers of Catan (just tell your friends you lost the dice), but it should be disturbing if you analyze data, because it means that bias can easily occur without anyone noticing it in the short run.
Figure 3\.5: The dice are now clearly biased towards high numbers, since high sums occur much more often than low sums.
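You can also check the bias numerically (a quick sketch; your exact value will vary from run to run). Each weighted die has an expected value of 4\.125, since the faces one through five contribute 15/8 and the face six contributes 18/8\. The sum of two dice should therefore average about 8\.25, noticeably above the 7 you would expect from fair dice:

```
mean(rolls)
## about 8.25 (fair dice would average about 7)
```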
### 3\.2\.2 Getting More Help
R also comes with a super active community of users that you can turn to for [help on the R\-help mailing list](http://bit.ly/r-help). You can email the list with questions, but there’s a great chance that your question has already been answered. Find out by searching the [archives](http://bit.ly/R_archives).
Even better than the R\-help list is [Stack Overflow](http://stackoverflow.com), a website that allows programmers to answer questions and users to rank answers based on helpfulness. Personally, I find the Stack Overflow format to be more user\-friendly than the R\-help email list (and the respondents to be more human friendly). You can submit your own question or search through Stack Overflow’s previously answered questions related to R. There are over 30,000\.
Best of all is [community.rstudio.com](http://community.rstudio.com), a friendly, inclusive place to share questions related to R. community.rstudio.com is a very active forum focused on R. Don’t be surprised if you ask a question about an R package, and the author of the package shows up to answer.
Wherever you ask, whether on the R\-help list, Stack Overflow, or community.rstudio.com, you're more likely to get a useful answer if you provide a reproducible example with your question. This means pasting in a short snippet of code that others can run to reproduce the bug or question you have in mind.
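For instance, a reproducible example about our weighted dice might look like this (a hypothetical sketch, with the question stated in the comments):

```
# I expected sixes about 3/8 of the time; how do I verify that?
# This snippet reproduces my setup:
die <- 1:6
rolls <- replicate(10000,
  sample(die, size = 1, prob = c(1/8, 1/8, 1/8, 1/8, 1/8, 3/8)))
mean(rolls == 6)
## should be close to 0.375
```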
3\.3 Summary
------------
R’s packages and help pages can make you a more productive programmer. You saw in [The Very Basics](basics.html#basics) that R gives you the power to write your own functions that do specific things, but often the function that you want to write will already exist in an R package. Professors, programmers, and scientists have developed over 13,000 packages for you to use, which can save you valuable programming time. To use a package, you need to install it to your computer once with `install.packages`, and then load it into each new R session with `library`.
R’s help pages will help you master the functions that appear in R and its packages. Each function and data set in R has its own help page. Although help pages often contain advanced content, they also contain valuable clues and examples that can help you learn how to use a function.
You have now seen enough of R to learn by doing, which is the best way to learn R. You can make your own R commands, run them, and get help when you need to understand something that I have not explained. I encourage you to experiment with your own ideas in R as you read through the next two projects.
3\.4 Project 1 Wrap\-up
-----------------------
You’ve done more in this project than enable fraud and gambling; you’ve also learned how to speak to your computer in the language of R. R is a language like English, Spanish, or German, except R helps you talk to computers, not humans.
You’ve met the nouns of the R language, objects. And hopefully you guessed that functions are the verbs (I suppose function arguments would be the adverbs). When you combine functions and objects, you express a complete thought. By stringing thoughts together in a logical sequence, you can build eloquent, even artistic statements. In that respect, R is not that different from any other language.
R shares another characteristic of human languages: you won’t feel very comfortable speaking R until you build up a vocabulary of R commands to use. Fortunately, you don’t have to be bashful. Your computer will be the only one to “hear” you speak R. Your computer is not very forgiving, but it also doesn’t judge. Not that you need to worry; you’ll broaden your R vocabulary tremendously between here and the end of the book.
Now that you can use R, it is time to become an expert at using R to do data science. The foundation of data science is the ability to store large amounts of data and recall values on demand. From this, all else follows—manipulating data, visualizing data, modeling data, and more. However, you cannot easily store a data set in your mind by memorizing it. Nor can you easily store a data set on paper by writing it down. The only efficient way to store large amounts of data is with a computer. In fact, computers are so efficient that their development over the last three decades has completely changed the type of data we can accumulate and the methods we can use to analyze it. In short, computer data storage has driven the revolution in science that we call data science.
[Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards) will make you part of this revolution by teaching you how to use R to store data sets in your computer’s memory and how to retrieve and manipulate data once it’s there.
```
??log
```
Let’s take a stroll through `sample`’s help page. Remember: we’re searching for anything that could help you change the probabilities involved in the sampling process. I’m not going to reproduce the whole help page here (just the juiciest parts), so you should follow along on your computer.
First, open the help page. It will appear in the same pane in RStudio as your plots did (but in the Help tab, not the Plots tab):
```
?sample
```
What do you see? Starting from the top:
```
Random Samples and Permutations
Description
sample takes a sample of the specified size from the elements of x using
either with or without replacement.
```
So far, so good. You knew all of that. The next section, Usage, has a possible clue. It mentions an argument called `prob`:
```
Usage
sample(x, size, replace = FALSE, prob = NULL)
```
If you scroll down to the arguments section, the description of `prob` sounds *very* promising:
```
A vector of probability weights for obtaining the elements of the vector being
sampled.
```
The Details section confirms our suspicions. In this case, it also tells you how to proceed:
```
The optional prob argument can be used to give a vector of weights for obtaining
the elements of the vector being sampled. They need not sum to one, but they
should be nonnegative and not all zero.
```
Although the help page does not say it here, these weights will be matched up to the elements being sampled in element\-wise fashion. The first weight will describe the first element, the second weight the second element, and so on. This is common practice in R.
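As a quick illustration (my own sketch, not part of the help page), the first weight below applies to `"a"` and the second to `"b"`, so `"b"` turns up about nine times as often (your results will vary):

```
sample(c("a", "b"), size = 10, replace = TRUE, prob = c(0.1, 0.9))
## "b" "b" "b" "a" "b" "b" "b" "b" "b" "b"
```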
Reading on:
```
If replace is true, Walker's alias method (Ripley, 1987) is used...
```
Okay, that looks like a good place to start skimming. We should have enough information now to figure out how to weight our dice.
**Exercise 3\.2 (Roll a Pair of Dice)** Rewrite the `roll` function below to roll a pair of weighted dice:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
You will need to add a `prob` argument to the `sample` function inside of `roll`. This argument should tell `sample` to sample the numbers one through five with probability 1/8 and the number 6 with probability 3/8\.
When you are finished, read on for a model answer.
*Solution.* To weight your dice, you need to add a `prob` argument with a vector of weights to `sample`, like this:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE,
prob = c(1/8, 1/8, 1/8, 1/8, 1/8, 3/8))
sum(dice)
}
```
This will cause `roll` to pick 1 through 5 with probability 1/8 and 6 with probability 3/8\.
Overwrite your previous version of `roll` with the new function (by running the previous code snippet in your command line). Then visualize the new long\-term behavior of your dice. I’ve put the results in Figure [3\.5](packages.html#fig:weighted) next to our original results:
```
rolls <- replicate(10000, roll())
qplot(rolls, binwidth = 1)
```
This confirms that we’ve effectively weighted the dice. High numbers occur much more often than low numbers. The remarkable thing is that this behavior will only be apparent when you examine long\-term frequencies. On any single roll, the dice will appear to behave randomly. This is great news if you play Settlers of Catan (just tell your friends you lost the dice), but it should be disturbing if you analyze data, because it means that bias can easily occur without anyone noticing it in the short run.
Figure 3\.5: The dice are now clearly biased towards high numbers, since high sums occur much more often than low sums.
### 3\.2\.2 Getting More Help
R also comes with a super active community of users that you can turn to for [help on the R\-help mailing list](http://bit.ly/r-help). You can email the list with questions, but there’s a great chance that your question has already been answered. Find out by searching the [archives](http://bit.ly/R_archives).
Even better than the R\-help list is [Stack Overflow](http://stackoverflow.com), a website that allows programmers to answer questions and users to rank answers based on helpfulness. Personally, I find the Stack Overflow format to be more user\-friendly than the R\-help email list (and the respondents to be more human friendly). You can submit your own question or search through Stack Overflow’s previously answered questions related to R. There are over 30,000\.
Best of all is [community.rstudio.com](http://community.rstudio.com), a friendly, inclusive place to share questions related to R. community.rstudio.com is a very active forum focused on R. Don’t be surprised if you ask a question about an R package, and the author of the package shows up to answer.
Whether you post to the R\-help list, Stack Overflow, or community.rstudio.com, you’re more likely to get a useful answer if you provide a reproducible example with your question. This means pasting in a short snippet of code that users can run to arrive at the bug or question you have in mind.
3\.3 Summary
------------
R’s packages and help pages can make you a more productive programmer. You saw in [The Very Basics](basics.html#basics) that R gives you the power to write your own functions that do specific things, but often the function that you want to write will already exist in an R package. Professors, programmers, and scientists have developed over 13,000 packages for you to use, which can save you valuable programming time. To use a package, you need to install it to your computer once with `install.packages`, and then load it into each new R session with `library`.
R’s help pages will help you master the functions that appear in R and its packages. Each function and data set in R has its own help page. Although help pages often contain advanced content, they also contain valuable clues and examples that can help you learn how to use a function.
You have now seen enough of R to learn by doing, which is the best way to learn R. You can make your own R commands, run them, and get help when you need to understand something that I have not explained. I encourage you to experiment with your own ideas in R as you read through the next two projects.
3\.4 Project 1 Wrap\-up
-----------------------
You’ve done more in this project than enable fraud and gambling; you’ve also learned how to speak to your computer in the language of R. R is a language like English, Spanish, or German, except R helps you talk to computers, not humans.
You’ve met the nouns of the R language, objects. And hopefully you guessed that functions are the verbs (I suppose function arguments would be the adverbs). When you combine functions and objects, you express a complete thought. By stringing thoughts together in a logical sequence, you can build eloquent, even artistic statements. In that respect, R is not that different from any other language.
R shares another characteristic of human languages: you won’t feel very comfortable speaking R until you build up a vocabulary of R commands to use. Fortunately, you don’t have to be bashful. Your computer will be the only one to “hear” you speak R. Your computer is not very forgiving, but it also doesn’t judge. Not that you need to worry; you’ll broaden your R vocabulary tremendously between here and the end of the book.
Now that you can use R, it is time to become an expert at using R to do data science. The foundation of data science is the ability to store large amounts of data and recall values on demand. From this, all else follows—manipulating data, visualizing data, modeling data, and more. However, you cannot easily store a data set in your mind by memorizing it. Nor can you easily store a data set on paper by writing it down. The only efficient way to store large amounts of data is with a computer. In fact, computers are so efficient that their development over the last three decades has completely changed the type of data we can accumulate and the methods we can use to analyze it. In short, computer data storage has driven the revolution in science that we call data science.
[Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards) will make you part of this revolution by teaching you how to use R to store data sets in your computer’s memory and how to retrieve and manipulate data once it’s there.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/r-objects.html |
5 R Objects
===========
In this chapter, you’ll use R to assemble a deck of 52 playing cards.
You’ll start by building simple R objects that represent playing cards and then work your way up to a full\-blown table of data. In short, you’ll build the equivalent of an Excel spreadsheet from scratch. When you are finished, your deck of cards will look something like this:
```
face suit value
king spades 13
queen spades 12
jack spades 11
ten spades 10
nine spades 9
eight spades 8
...
```
Do you need to build a data set from scratch to use it in R? Not at all. You can load most data sets into R with one simple step; see [Loading Data](r-objects.html#loading). But this exercise will teach you how R stores data, and how you can assemble—or disassemble—your own data sets. You will also learn about the various types of objects available for you to use in R (not all R objects are the same!). Consider this exercise a rite of passage; by doing it, you will become an expert on storing data in R.
We’ll start with the very basics. The simplest type of object in R is an *atomic vector*. Atomic vectors are not nuclear powered, but they are very simple and they do show up everywhere. If you look closely enough, you’ll see that most structures in R are built from atomic vectors.
5\.1 Atomic Vectors
-------------------
An atomic vector is just a simple vector of data. In fact, you’ve already made an atomic vector, your `die` object from [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice). You can make an atomic vector by grouping some values of data together with `c`:
```
die <- c(1, 2, 3, 4, 5, 6)
die
## 1 2 3 4 5 6
is.vector(die)
## TRUE
```
**is.vector**
`is.vector` tests whether an object is an atomic vector. It returns `TRUE` if the object is an atomic vector and `FALSE` otherwise.
You can also make an atomic vector with just one value. R saves single values as an atomic vector of length 1:
```
five <- 5
five
## 5
is.vector(five)
## TRUE
length(five)
## 1
length(die)
## 6
```
**length**
`length` returns the length of an atomic vector.
Each atomic vector stores its values as a one\-dimensional vector, and each atomic vector can only store one type of data. You can save different types of data in R by using different types of atomic vectors. Altogether, R recognizes six basic types of atomic vectors: *doubles*, *integers*, *characters*, *logicals*, *complex*, and *raw*.
To create your card deck, you will need to use different types of atomic vectors to save different types of information (text and numbers). You can do this by using some simple conventions when you enter your data. For example, you can create an integer vector by including a capital `L` with your input. You can create a character vector by surrounding your input in quotation marks:
```
int <- 1L
text <- "ace"
```
Each type of atomic vector has its own convention (described below). R will recognize the convention and use it to create an atomic vector of the appropriate type. If you’d like to make atomic vectors that have more than one element in them, you can combine multiple elements with the `c` function from [Packages and Help Pages](packages.html#packages). Use the same convention with each element:
```
int <- c(1L, 5L)
text <- c("ace", "hearts")
```
You may wonder why R uses multiple types of vectors. Vector types help R behave as you would expect. For example, R will do math with atomic vectors that contain numbers, but not with atomic vectors that contain character strings:
```
sum(int)
## 6
sum(text)
## Error in sum(text) : invalid 'type' (character) of argument
```
But we’re getting ahead of ourselves! Get ready to say hello to the six types of atomic vectors in R.
### 5\.1\.1 Doubles
A double vector stores regular numbers. The numbers can be positive or negative, large or small, and have digits to the right of the decimal place or not. In general, R will save any number that you type in R as a double. So, for example, the die you made in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice) was a double object:
```
die <- c(1, 2, 3, 4, 5, 6)
die
## 1 2 3 4 5 6
```
You’ll usually know what type of object you are working with in R (it will be obvious), but you can also ask R what type of object an object is with `typeof`. For example:
```
typeof(die)
## "double"
```
Some R functions refer to doubles as “numerics,” and I will often do the same. Double is a computer science term. It refers to the specific number of bytes your computer uses to store a number, but I find “numeric” to be much more intuitive when doing data science.
### 5\.1\.2 Integers
Integer vectors store integers, numbers that can be written without a decimal component. As a data scientist, you won’t use the integer type very often because you can save integers as a double object.
You can specifically create an integer in R by typing a number followed by an uppercase `L`. For example:
```
int <- c(-1L, 2L, 4L)
int
## -1 2 4
typeof(int)
## "integer"
```
Note that R won’t save a number as an integer unless you include the `L`. Integer numbers without the `L` will be saved as doubles. The only difference between `4` and `4L` is how R saves the number in your computer’s memory. Integers are defined more precisely in your computer’s memory than doubles (unless the integer is *very* large or small).
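One caveat worth knowing (my aside, using the built\-in `.Machine` constant): R stores integers in 32 bits, so they overflow past roughly 2\.1 billion, while doubles keep going:

```
.Machine$integer.max
## 2147483647
.Machine$integer.max + 1L
## NA (R warns about integer overflow)
2147483647 + 1
## 2147483648
```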
Why would you save your data as an integer instead of a double? Sometimes a difference in precision can have surprising effects. Your computer allocates 64 bits of memory to store each double in an R program. This allows a lot of precision, but some numbers cannot be expressed exactly in 64 bits, the equivalent of a sequence of 64 ones and zeroes. For example, the number \(\pi\) contains an endless sequence of digits to the right of the decimal place. Your computer must round \(\pi\) to something close to, but not exactly equal to, \(\pi\) to store it in memory. Many decimal numbers share a similar fate.
As a result, each double is accurate to about 16 significant digits. This introduces a little bit of error. In most cases, this rounding error will go unnoticed. However, in some situations, the rounding error can cause surprising results. For example, you may expect the result of the expression below to be zero, but it is not:
```
sqrt(2)^2 - 2
## 4.440892e-16
```
The square root of two cannot be expressed exactly in 16 significant digits. As a result, R has to round the quantity, and the expression resolves to something very close to—but not quite—zero.
These errors are known as *floating\-point* errors, and doing arithmetic in these conditions is known as *floating\-point arithmetic*. Floating\-point arithmetic is not a feature of R; it is a feature of computer programming. Usually floating\-point errors won’t be enough to ruin your day. Just keep in mind that they may be the cause of surprising results.
You can avoid floating\-point errors by avoiding decimals and only using integers. However, this is not an option in most data\-science situations. You cannot do much math with integers before you need a noninteger to express the result. Luckily, the errors caused by floating\-point arithmetic are usually insignificant (and when they are not, they are easy to spot). As a result, you’ll generally use doubles instead of integers as a data scientist.
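One practical consequence (an aside, using base R’s `all.equal`): avoid testing doubles for exact equality with `==`. `all.equal` compares numbers up to a small tolerance instead:

```
sqrt(2)^2 == 2
## FALSE
all.equal(sqrt(2)^2, 2)
## TRUE
```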
### 5\.1\.3 Characters
A character vector stores small pieces of text. You can create a character vector in R by typing a character or string of characters surrounded by quotes:
```
text <- c("Hello", "World")
text
## "Hello" "World"
typeof(text)
## "character"
typeof("Hello")
## "character"
```
The individual elements of a character vector are known as *strings*. Note that a string can contain more than just letters. You can assemble a character string from numbers or symbols as well.
**Exercise 5\.1 (Character or Number?)** Can you spot the difference between a character string and a number? Here’s a test: Which of these are character strings and which are numbers? `1`, `"1"`, `"one"`.
*Solution.* `"1"` and `"one"` are both character strings.
Character strings can contain number characters, but that doesn’t make them numeric. They’re just strings that happen to have numbers in them. You can tell strings from real numbers because strings come surrounded by quotes. In fact, anything surrounded by quotes in R will be treated as a character string—no matter what appears between the quotes.
It is easy to confuse R objects with character strings. Why? Because both appear as pieces of text in R code. For example, `x` is the name of an R object named “x,” while `"x"` is a character string that contains the character “x.” One is an object that contains raw data; the other is a piece of raw data itself.
Expect an error whenever you forget your quotation marks; R will start looking for an object that probably does not exist.
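For example (a small sketch of my own; `hello` is just a hypothetical name with no object behind it):

```
"hello"   # quotes: R returns the string itself
## "hello"
hello     # no quotes: R searches for an object named hello
## Error: object 'hello' not found
```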
### 5\.1\.4 Logicals
Logical vectors store `TRUE`s and `FALSE`s, R’s form of Boolean data. Logicals are very helpful for doing things like comparisons:
```
3 > 4
## FALSE
```
Any time you type `TRUE` or `FALSE` in capital letters (without quotation marks), R will treat your input as logical data. R also assumes that `T` and `F` are shorthand for `TRUE` and `FALSE`, unless they are defined elsewhere (e.g. `T <- 500`). Since the meaning of `T` and `F` can change, it’s best to stick with `TRUE` and `FALSE`:
```
logic <- c(TRUE, FALSE, TRUE)
logic
## TRUE FALSE TRUE
typeof(logic)
## "logical"
typeof(F)
## "logical"
```
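To see why redefining `T` is risky (a short illustration of my own):

```
T <- 500    # now T is an object, not shorthand for TRUE
isTRUE(T)
## FALSE
rm(T)       # remove the object; T falls back to meaning TRUE
isTRUE(T)
## TRUE
```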
### 5\.1\.5 Complex and Raw
Doubles, integers, characters, and logicals are the most common types of atomic vectors in R, but R also recognizes two more types: complex and raw. It is doubtful that you will ever use these to analyze data, but here they are for the sake of thoroughness.
Complex vectors store complex numbers. To create a complex vector, add an imaginary term to a number with `i`:
```
comp <- c(1 + 1i, 1 + 2i, 1 + 3i)
comp
## 1+1i 1+2i 1+3i
typeof(comp)
## "complex"
```
Raw vectors store raw bytes of data. Making raw vectors gets complicated, but you can make an empty raw vector of length *n* with `raw(n)`. See the help page of `raw` for more options when working with this type of data:
```
raw(3)
## 00 00 00
typeof(raw(3))
## "raw"
```
**Exercise 5\.2 (Vector of Cards)** Create an atomic vector that stores just the face names of the cards in a royal flush, for example, the ace of spades, king of spades, queen of spades, jack of spades, and ten of spades. The face name of the ace of spades would be “ace,” and “spades” is the suit.
Which type of vector will you use to save the names?
*Solution.* A character vector is the most appropriate type of atomic vector in which to save card names. You can create one with the `c` function if you surround each name with quotation marks:
```
hand <- c("ace", "king", "queen", "jack", "ten")
hand
## "ace" "king" "queen" "jack" "ten"
typeof(hand)
## "character"
```
This creates a one\-dimensional group of card names—great job! Now let’s make a more sophisticated data structure, a two\-dimensional table of card names and suits. You can build a more sophisticated object from an atomic vector by giving it some attributes and assigning it a class.
5\.2 Attributes
---------------
An attribute is a piece of information that you can attach to an atomic vector (or any R object). The attribute won’t affect any of the values in the object, and it will not appear when you display your object. You can think of an attribute as “metadata”; it is just a convenient place to put information associated with an object. R will normally ignore this metadata, but some R functions will check for specific attributes. These functions may use the attributes to do special things with the data.
You can see which attributes an object has with `attributes`. `attributes` will return `NULL` if an object has no attributes. An atomic vector, like `die`, won’t have any attributes unless you give it some:
```
attributes(die)
## NULL
```
**NULL**
R uses `NULL` to represent the null set, an empty object. `NULL` is often returned by functions whose values are undefined. You can create a `NULL` object by typing `NULL` in capital letters.
### 5\.2\.1 Names
The most common attributes to give an atomic vector are names, dimensions (dim), and classes. Each of these attributes has its own helper function that you can use to give attributes to an object. You can also use the helper functions to look up the value of these attributes for objects that already have them. For example, you can look up the value of the names attribute of `die` with `names`:
```
names(die)
## NULL
```
`NULL` means that `die` does not have a names attribute. You can give one to `die` by assigning a character vector to the output of `names`. The vector should include one name for each element in `die`:
```
names(die) <- c("one", "two", "three", "four", "five", "six")
```
Now `die` has a names attribute:
```
names(die)
## "one" "two" "three" "four" "five" "six"
attributes(die)
## $names
## [1] "one" "two" "three" "four" "five" "six"
```
R will display the names above the elements of `die` whenever you look at the vector:
```
die
## one two three four five six
## 1 2 3 4 5 6
```
However, the names won’t affect the actual values of the vector, nor will the names be affected when you manipulate the values of the vector:
```
die + 1
## one two three four five six
## 2 3 4 5 6 7
```
You can also use `names` to change the names attribute or remove it altogether. To change the names, assign a new set of labels to `names`:
```
names(die) <- c("uno", "dos", "tres", "quatro", "cinco", "seis")
die
## uno dos tres quatro cinco seis
## 1 2 3 4 5 6
```
To remove the names attribute, set it to `NULL`:
```
names(die) <- NULL
die
## 1 2 3 4 5 6
```
### 5\.2\.2 Dim
You can transform an atomic vector into an *n*\-dimensional array by giving it a dimensions attribute with `dim`. To do this, set the `dim` attribute to a numeric vector of length *n*. R will reorganize the elements of the vector into *n* dimensions. Each dimension will have as many rows (or columns, etc.) as the *nth* value of the `dim` vector. For example, you can reorganize `die` into a 2 × 3 matrix (which has 2 rows and 3 columns):
```
dim(die) <- c(2, 3)
die
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
or a 3 × 2 matrix (which has 3 rows and 2 columns):
```
dim(die) <- c(3, 2)
die
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
or a 1 × 2 × 3 hypercube (which has 1 row, 2 columns, and 3 “slices”). This is a three\-dimensional structure, but R will need to show it slice by slice by slice on your two\-dimensional computer screen:
```
dim(die) <- c(1, 2, 3)
die
## , , 1
##
## [,1] [,2]
## [1,] 1 2
##
## , , 2
##
## [,1] [,2]
## [1,] 3 4
##
## , , 3
##
## [,1] [,2]
## [1,] 5 6
```
R will always use the first value in `dim` for the number of rows and the second value for the number of columns. In general, rows always come first in R operations that deal with both rows and columns.
You may notice that you don’t have much control over how R reorganizes the values into rows and columns. For example, R always fills up each matrix by columns, instead of by rows. If you’d like more control over this process, you can use one of R’s helper functions, `matrix` or `array`. They do the same thing as changing the `dim` attribute, but they provide extra arguments to customize the process.
5\.3 Matrices
-------------
Matrices store values in a two\-dimensional array, just like a matrix from linear algebra. To create one, first give `matrix` an atomic vector to reorganize into a matrix. Then, define how many rows should be in the matrix by setting the `nrow` argument to a number. `matrix` will organize your vector of values into a matrix with the specified number of rows. Alternatively, you can set the `ncol` argument, which tells R how many columns to include in the matrix:
```
m <- matrix(die, nrow = 2)
m
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
`matrix` will fill up the matrix column by column by default, but you can fill the matrix row by row if you include the argument `byrow = TRUE`:
```
m <- matrix(die, nrow = 2, byrow = TRUE)
m
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 4 5 6
```
`matrix` also has other default arguments that you can use to customize your matrix. You can read about them at `matrix`’s help page (accessible by `?matrix`).
5\.4 Arrays
-----------
The `array` function creates an n\-dimensional array. For example, you could use `array` to sort values into a cube of three dimensions or a hypercube in 4, 5, or *n* dimensions. `array` is not as customizable as `matrix` and basically does the same thing as setting the `dim` attribute. To use `array`, provide an atomic vector as the first argument, and a vector of dimensions as the second argument, now called `dim`:
```
ar <- array(c(11:14, 21:24, 31:34), dim = c(2, 2, 3))
ar
## , , 1
##
## [,1] [,2]
## [1,] 11 13
## [2,] 12 14
##
## , , 2
##
## [,1] [,2]
## [1,] 21 23
## [2,] 22 24
##
## , , 3
##
## [,1] [,2]
## [1,] 31 33
## [2,] 32 34
```
**Exercise 5\.3 (Make a Matrix)** Create the following matrix, which stores the name and suit of every card in a royal flush.
```
## [,1] [,2]
## [1,] "ace" "spades"
## [2,] "king" "spades"
## [3,] "queen" "spades"
## [4,] "jack" "spades"
## [5,] "ten" "spades"
```
*Solution.* There is more than one way to build this matrix, but in every case, you will need to start by making a character vector with 10 values. If you start with the following character vector, you can turn it into a matrix with any of the following three commands:
```
hand1 <- c("ace", "king", "queen", "jack", "ten", "spades", "spades",
"spades", "spades", "spades")
matrix(hand1, nrow = 5)
matrix(hand1, ncol = 2)
dim(hand1) <- c(5, 2)
```
You can also start with a character vector that lists the cards in a slightly different order. In this case, you will need to ask R to fill the matrix row by row instead of column by column:
```
hand2 <- c("ace", "spades", "king", "spades", "queen", "spades", "jack",
"spades", "ten", "spades")
matrix(hand2, nrow = 5, byrow = TRUE)
matrix(hand2, ncol = 2, byrow = TRUE)
```
5\.5 Class
----------
Notice that changing the dimensions of your object will not change the type of the object, but it *will* change the object’s `class` attribute:
```
dim(die) <- c(2, 3)
typeof(die)
## "double"
class(die)
## "matrix"
```
A matrix is a special case of an atomic vector. For example, the `die` matrix is a special case of a double vector. Every element in the matrix is still a double, but the elements have been arranged into a new structure. R added a `class` attribute to `die` when you changed its dimensions. This class describes `die`’s new format. Many R functions will specifically look for an object’s `class` attribute, and then handle the object in a predetermined way based on the attribute.
Note that an object’s `class` attribute will not always appear when you run `attributes`; you may need to specifically search for it with `class`:
```
attributes(die)
## $dim
## [1] 2 3
```
You can apply `class` to objects that do not have a `class` attribute. `class` will return a value based on the object’s atomic type. Notice that the “class” of a double is “numeric,” an odd deviation, but one I am thankful for. I think that the most important property of a double vector is that it contains numbers, a property that “numeric” makes obvious:
```
class("Hello")
## "character"
class(5)
## "numeric"
```
You can also use `class` to set an object’s `class` attribute, but this is usually a bad idea. R will expect objects of a class to share certain traits, such as attributes, that your object may not possess. You’ll learn how to make and use your own classes in [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine).
### 5\.5\.1 Dates and Times
The attribute system lets R represent more types of data than just doubles, integers, characters, logicals, complexes, and raws. For example, R uses attributes to represent dates and times. Consider the time returned by `Sys.time()`: it looks like a character string when you display it, but its data type is actually `"double"`, and its class is `"POSIXct"` `"POSIXt"` (it has two classes):
```
now <- Sys.time()
now
## "2014-03-17 12:00:00 UTC"
typeof(now)
## "double"
class(now)
## "POSIXct" "POSIXt"
```
POSIXct is a widely used framework for representing dates and times. In the POSIXct framework, each time is represented by the number of seconds that have passed between the time and 12:00 AM January 1st 1970 (in the Universal Time Coordinated (UTC) zone). For example, the time above occurs 1,395,057,600 seconds after then. So in the POSIXct system, the time would be saved as 1395057600\.
R creates the time object by building a double vector with one element, `1395057600`. You can see this vector by removing the `class` attribute of `now`, or by using the `unclass` function, which does the same thing:
```
unclass(now)
## 1395057600
```
R then gives the double vector a `class` attribute that contains two classes, `"POSIXct"` and `"POSIXt"`. This attribute alerts R functions that they are dealing with a POSIXct time, so they can treat it in a special way. For example, R functions will use the POSIXct standard to convert the time into a user\-friendly character string before displaying it.
You can take advantage of this system by giving the `POSIXct` class to random R objects. For example, have you ever wondered what day it was a million seconds after 12:00 a.m. Jan. 1, 1970?
```
mil <- 1000000
mil
## 1e+06
class(mil) <- c("POSIXct", "POSIXt")
mil
## "1970-01-12 13:46:40 UTC"
```
Jan. 12, 1970\. Yikes. A million seconds goes by faster than you would think. This conversion worked well because the `POSIXct` class does not rely on any additional attributes, but in general, forcing the class of an object is a bad idea.
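If you want this conversion without forcing a class by hand, base R’s `as.POSIXct` function does the same job more safely (an aside, not part of the book’s example):

```
as.POSIXct(1000000, origin = "1970-01-01", tz = "UTC")
## "1970-01-12 13:46:40 UTC"
```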
There are many different classes of data in R and its packages, and new classes are invented every day. It would be difficult to learn about every class, but you do not have to. Most classes are only useful in specific situations. Since each class comes with its own help page, you can wait to learn about a class until you encounter it. However, there is one class of data that is so ubiquitous in R that you should learn about it alongside the atomic data types. That class is `factors`.
### 5\.5\.2 Factors
Factors are R’s way of storing categorical information, like ethnicity or eye color. Think of a factor as something like a gender; it can only have certain values (male or female), and these values may have their own idiosyncratic order (ladies first). This arrangement makes factors very useful for recording the treatment levels of a study and other categorical variables.
To make a factor, pass an atomic vector into the `factor` function. R will recode the data in the vector as integers and store the results in an integer vector. R will also add a `levels` attribute to the integer vector, which contains a set of labels for displaying the factor values, and a `class` attribute, which contains the class `factor`:
```
gender <- factor(c("male", "female", "female", "male"))
typeof(gender)
## "integer"
attributes(gender)
## $levels
## [1] "female" "male"
##
## $class
## [1] "factor"
```
You can see exactly how R is storing your factor with `unclass`:
```
unclass(gender)
## [1] 2 1 1 2
## attr(,"levels")
## [1] "female" "male"
```
R uses the levels attribute when it displays the factor, as you will see. R will display each `1` as `female`, the first label in the levels vector, and each `2` as `male`, the second label. If the factor included `3`s, they would be displayed as the third label, and so on:
```
gender
## male female female male
## Levels: female male
```
Factors make it easy to put categorical variables into a statistical model because the variables are already coded as numbers. However, factors can be confusing since they look like character strings but behave like integers.
R will often try to convert character strings to factors when you load and create data. In general, you will have a smoother experience if you do not let R make factors until you ask for them. I’ll show you how to do this when we start reading in data.
You can convert a factor to a character string with the `as.character` function. R will retain the display version of the factor, not the integers stored in memory:
```
as.character(gender)
## "male" "female" "female" "male"
```
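This distinction matters when a factor’s labels look like numbers (a cautionary sketch of my own). `as.numeric` applied directly to a factor returns the underlying integer codes, so convert through `as.character` first:

```
f <- factor(c("10", "20", "10"))
as.numeric(f)                  # the integer codes, not the labels
## 1 2 1
as.numeric(as.character(f))   # the labels, converted properly
## 10 20 10
```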
Now that you understand the possibilities provided by R’s atomic vectors, let’s make a more complicated type of playing card.
**Exercise 5\.4 (Write a Card)** Many card games assign a numerical value to each card. For example, in blackjack, each face card is worth 10 points, each number card is worth between 2 and 10 points, and each ace is worth 1 or 11 points, depending on the final score.
Make a virtual playing card by combining “ace,” “hearts,” and 1 into a vector. What type of atomic vector will result? Check if you are right.
*Solution.* You may have guessed that this exercise would not go well. Each atomic vector can only store one type of data. As a result, R coerces all of your values to character strings:
```
card <- c("ace", "hearts", 1)
card
## "ace" "hearts" "1"
```
This will cause trouble if you want to do math with that point value, for example, to see who won your game of blackjack.
**Data types in vectors**
If you try to put multiple types of data into a vector, R will convert the elements to a single type of data.
Since matrices and arrays are special cases of atomic vectors, they suffer from the same behavior. Each can only store one type of data.
This creates a couple of problems. First, many data sets contain multiple types of data. Simple programs like Excel and Numbers can save multiple types of data in the same data set, and you should hope that R can too. Don’t worry, it can.
Second, coercion is a common behavior in R, so you’ll want to know how it works.
5\.6 Coercion
-------------
R’s coercion behavior may seem inconvenient, but it is not arbitrary. R always follows the same rules when it coerces data types. Once you are familiar with these rules, you can use R’s coercion behavior to do surprisingly useful things.
So how does R coerce data types? If a character string is present in an atomic vector, R will convert everything else in the vector to character strings. If a vector only contains logicals and numbers, R will convert the logicals to numbers; every `TRUE` becomes a 1, and every `FALSE` becomes a 0, as shown in Figure [5\.1](r-objects.html#fig:coercion).
Figure 5\.1: R always uses the same rules to coerce data to a single type. If character strings are present, everything will be coerced to a character string. Otherwise, logicals are coerced to numerics.
This arrangement preserves information. It is easy to look at a character string and tell what information it used to contain. For example, you can easily spot the origins of `"TRUE"` and `"5"`. You can also easily back\-transform a vector of 1s and 0s to `TRUE`s and `FALSE`s.
R uses the same coercion rules when you try to do math with logical values. So the following code:
```
sum(c(TRUE, TRUE, FALSE, FALSE))
```
will become:
```
sum(c(1, 1, 0, 0))
## 2
```
This means that `sum` will count the number of `TRUE`s in a logical vector (and `mean` will calculate the proportion of `TRUE`s). Neat, huh?
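For instance (a quick check of my own):

```
mean(c(TRUE, TRUE, FALSE, FALSE))
## 0.5
```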
You can explicitly ask R to convert data from one type to another with the `as` functions. R will convert the data whenever there is a sensible way to do so:
```
as.character(1)
## "1"
as.logical(1)
## TRUE
as.numeric(FALSE)
## 0
```
You now know how R coerces data types, but this won’t help you save a playing card. To do that, you will need to avoid coercion altogether. You can do this by using a new type of object, a *list*.
Before we look at lists, let’s address a question that might be on your mind.
Many data sets contain multiple types of information. The inability of vectors, matrices, and arrays to store multiple data types seems like a major limitation. So why bother with them?
In some cases, using only a single type of data is a huge advantage. Vectors, matrices, and arrays make it very easy to do math on large sets of numbers because R knows that it can manipulate each value the same way. Operations with vectors, matrices, and arrays also tend to be fast because the objects are so simple to store in memory.
In other cases, allowing only a single type of data is not a disadvantage. Vectors are the most common data structure in R because they store variables very well. Each value in a variable measures the same property, so there’s no need to use different types of data.
5\.7 Lists
----------
Lists are like atomic vectors because they group data into a one\-dimensional set. However, lists do not group together individual values; lists group together R objects, such as atomic vectors and other lists. For example, you can make a list that contains a numeric vector of length 31 in its first element, a character vector of length 1 in its second element, and a new list of length 2 in its third element. To do this, use the `list` function.
`list` creates a list the same way `c` creates a vector. Separate each element in the list with a comma:
```
list1 <- list(100:130, "R", list(TRUE, FALSE))
list1
## [[1]]
## [1] 100 101 102 103 104 105 106 107 108 109 110 111 112
## [14] 113 114 115 116 117 118 119 120 121 122 123 124 125
## [27] 126 127 128 129 130
##
## [[2]]
## [1] "R"
##
## [[3]]
## [[3]][[1]]
## [1] TRUE
##
## [[3]][[2]]
## [1] FALSE
```
I left the `[1]` notation in the output so you can see how it changes for lists. The double\-bracketed indexes tell you which element of the list is being displayed. The single\-bracket indexes tell you which subelement of an element is being displayed. For example, `100` is the first subelement of the first element in the list. `"R"` is the first subelement of the second element. This two\-system notation arises because each element of a list can be *any* R object, including a new vector (or list) with its own indexes.
Lists are a basic type of object in R, on par with atomic vectors. Like atomic vectors, they are used as building blocks to create many more sophisticated types of R objects.
As you can imagine, the structure of lists can become quite complicated, but this flexibility makes lists a useful all\-purpose storage tool in R: you can group together anything with a list.
However, not every list needs to be complicated. You can store a playing card in a very simple list.
**Exercise 5\.5 (Use a List to Make a Card)** Use a list to store a single playing card, like the ace of hearts, which has a point value of one. The list should save the face of the card, the suit, and the point value in separate elements.
*Solution.* You can create your card like this. In the following example, the first element of the list is a character vector (of length 1\). The second element is also a character vector, and the third element is a numeric vector:
```
card <- list("ace", "hearts", 1)
card
## [[1]]
## [1] "ace"
##
## [[2]]
## [1] "hearts"
##
## [[3]]
## [1] 1
```
You can also use a list to store a whole deck of playing cards. Since you can save a single playing card as a list, you can save a deck of playing cards as a list of 52 sublists (one for each card). But let’s not bother—there’s a much cleaner way to do the same thing. You can use a special class of list, known as a *data frame*.
5\.8 Data Frames
----------------
Data frames are the two\-dimensional version of a list. They are far and away the most useful storage structure for data analysis, and they provide an ideal way to store an entire deck of cards. You can think of a data frame as R’s equivalent to the Excel spreadsheet because it stores data in a similar format.
Data frames group vectors together into a two\-dimensional table. Each vector becomes a column in the table. As a result, each column of a data frame can contain a different type of data; but within a column, every cell must be the same type of data, as in Figure [5\.2](r-objects.html#fig:data-frame).
Figure 5\.2: Data frames store data as a sequence of columns. Each column can be a different data type. Every column in a data frame must be the same length.
Creating a data frame by hand takes a lot of typing, but you can do it (if you like) with the `data.frame` function. Give `data.frame` any number of vectors, each separated with a comma. Each vector should be set equal to a name that describes the vector. `data.frame` will turn each vector into a column of the new data frame:
```
df <- data.frame(face = c("ace", "two", "six"),
suit = c("clubs", "clubs", "clubs"), value = c(1, 2, 3))
df
## face suit value
## ace clubs 1
## two clubs 2
## six clubs 3
```
You’ll need to make sure that each vector is the same length (or can be made so with R’s recycling rules; see Figure [2\.4](basics.html#fig:recycle)), as data frames cannot combine columns of different lengths.
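For example (a sketch of my own, not the book’s), a length\-two vector recycles cleanly into four rows, but incompatible lengths raise an error:

```
data.frame(x = 1:4, y = c("a", "b"))
## x y
## 1 a
## 2 b
## 3 a
## 4 b
data.frame(x = 1:4, y = 1:3)
## Error: arguments imply differing number of rows: 4, 3
```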
In the `data.frame` call that created `df`, I named the arguments `face`, `suit`, and `value`, but you can name the arguments whatever you like. `data.frame` will use your argument names to label the columns of the data frame.
**Names**
You can also give names to a list or vector when you create one of these objects. Use the same syntax as with `data.frame`:
`list(face = "ace", suit = "hearts", value = 1)`
`c(face = "ace", suit = "hearts", value = "one")`
The names will be stored in the object’s `names` attribute.
If you look at the type of a data frame, you will see that it is a list. In fact, each data frame is a list with class `data.frame`. You can see what types of objects are grouped together by a list (or data frame) with the `str` function:
```
typeof(df)
## "list"
class(df)
## "data.frame"
str(df)
## 'data.frame': 3 obs. of 3 variables:
## $ face : Factor w/ 3 levels "ace","six","two": 1 3 2
## $ suit : Factor w/ 1 level "clubs": 1 1 1
## $ value: num 1 2 3
```
Notice that R saved your character strings as factors. I told you that R likes factors! It is not a very big deal here, but you can prevent this behavior by adding the argument `stringsAsFactors = FALSE` to `data.frame`:
```
df <- data.frame(face = c("ace", "two", "six"),
suit = c("clubs", "clubs", "clubs"), value = c(1, 2, 3),
stringsAsFactors = FALSE)
```
A data frame is a great way to build an entire deck of cards. You can make each row in the data frame a playing card, and each column a type of value—each with its own appropriate data type. The data frame would look something like this:
```
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
## king clubs 13
## queen clubs 12
## jack clubs 11
## ten clubs 10
## ... and so on.
```
You could create this data frame with `data.frame`, but look at the typing involved! You need to write three vectors, each with 52 elements:
```
deck <- data.frame(
face = c("king", "queen", "jack", "ten", "nine", "eight", "seven", "six",
"five", "four", "three", "two", "ace", "king", "queen", "jack", "ten",
"nine", "eight", "seven", "six", "five", "four", "three", "two", "ace",
"king", "queen", "jack", "ten", "nine", "eight", "seven", "six", "five",
"four", "three", "two", "ace", "king", "queen", "jack", "ten", "nine",
"eight", "seven", "six", "five", "four", "three", "two", "ace"),
suit = c("spades", "spades", "spades", "spades", "spades", "spades",
"spades", "spades", "spades", "spades", "spades", "spades", "spades",
"clubs", "clubs", "clubs", "clubs", "clubs", "clubs", "clubs", "clubs",
"clubs", "clubs", "clubs", "clubs", "clubs", "diamonds", "diamonds",
"diamonds", "diamonds", "diamonds", "diamonds", "diamonds", "diamonds",
"diamonds", "diamonds", "diamonds", "diamonds", "diamonds", "hearts",
"hearts", "hearts", "hearts", "hearts", "hearts", "hearts", "hearts",
"hearts", "hearts", "hearts", "hearts", "hearts"),
value = c(13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 13, 12, 11, 10, 9, 8,
7, 6, 5, 4, 3, 2, 1, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 13, 12, 11,
10, 9, 8, 7, 6, 5, 4, 3, 2, 1)
)
```
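As an aside (my own sketch, not the book’s approach), the deck follows such a regular pattern that `rep` can generate those vectors for you:

```
faces <- c("king", "queen", "jack", "ten", "nine", "eight", "seven",
  "six", "five", "four", "three", "two", "ace")
deck2 <- data.frame(
  face = rep(faces, times = 4),   # the 13 faces, once per suit
  suit = rep(c("spades", "clubs", "diamonds", "hearts"), each = 13),
  value = rep(13:1, times = 4)    # 13 down to 1, once per suit
)
```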
You should avoid typing large data sets in by hand whenever possible. Typing invites typos and errors, not to mention RSI. It is always better to acquire large data sets as a computer file. You can then ask R to read the file and store the contents as an object.
I’ve created a file for you to load that contains a data frame of playing\-card information, so don’t worry about typing in the code. Instead, turn your attention toward loading data into R.
5\.9 Loading Data
-----------------
You can load the `deck` data frame from the file [*deck.csv*](http://bit.ly/deck_CSV). Please take a moment to download the file before reading on. Visit the website, click “Download Zip,” and then unzip and open the folder that your web browser downloads. *deck.csv* will be inside.
*deck.csv* is a comma\-separated values file, or CSV for short. CSVs are plain\-text files, which means you can open them in a text editor (as well as many other programs). If you open *deck.csv*, you’ll notice that it contains a table of data that looks like the following table. Each row of the table is saved on its own line, and a comma is used to separate the cells within each row. Every CSV file shares this basic format:
```
"face","suit,"value"
"king","spades",13
"queen","spades,12
"jack","spades,11
"ten","spades,10
"nine","spades,9
... and so on.
```
Most data\-science applications can open plain\-text files and export data as plain\-text files. This makes plain\-text files a sort of lingua franca for data science.
To load a plain\-text file into R, click the Import Dataset icon in RStudio, shown in Figure [5\.3](r-objects.html#fig:import). Then select “From text file.”
Figure 5\.3: You can import data from plain\-text files with RStudio’s Import Dataset.
RStudio will ask you to select the file you want to import, then it will open a wizard to help you import the data, as in Figure [5\.4](r-objects.html#fig:wizard). Use the wizard to tell RStudio what name to give the data set. You can also use the wizard to tell RStudio which character the data set uses as a separator, which character it uses to represent decimals (usually a period in the United States and a comma in Europe), and whether or not the data set comes with a row of column names (known as a *header*). To help you out, the wizard shows you what the raw file looks like, as well as what your loaded data will look like based on the input settings.
You can also unclick the box “Strings as factors” in the wizard. I recommend doing this. If you do, R will load all of your character strings as character strings. If you do not, R will convert them to factors.
Figure 5\.4: RStudio’s import wizard.
Once everything looks right, click Import. RStudio will read in the data and save it to a data frame. RStudio will also open a data viewer, so you can see your new data in a spreadsheet format. This is a good way to check that everything came through as expected. If all worked well, your file should appear in a View tab of RStudio, like in Figure [5\.5](r-objects.html#fig:view). You can examine the data frame in the console with *`head(deck)`*.
**Online data**
You can load a plain\-text file straight from the Internet by clicking the “From Web URL…” option under Import Dataset. The file will need to have its own URL, and you will need to be connected to the Internet.
Figure 5\.5: When you import a data set, RStudio will save the data to a data frame and then display the data frame in a View tab. You can open any data frame in a View tab at any time with the View function.
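If you prefer typing commands to clicking buttons, you can skip the wizard. A minimal command\-line sketch, assuming *deck.csv* sits in your working directory:
```
# read.csv parses a CSV file into a data frame; stringsAsFactors = FALSE
# keeps character columns as character strings, like unchecking the
# "Strings as factors" box in the wizard
deck <- read.csv("deck.csv", stringsAsFactors = FALSE)
```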
Now it is your turn. Download *deck.csv* and import it into RStudio. Be sure to save the output to an R object called `deck`: you’ll use it in the next few chapters. If everything goes correctly, the first few lines of your data frame should look like this:
```
head(deck)
##    face   suit value
## 1  king spades    13
## 2 queen spades    12
## 3  jack spades    11
## 4   ten spades    10
## 5  nine spades     9
## 6 eight spades     8
```
`head` and `tail` are two functions that provide an easy way to peek at large data sets. `head` will return just the first six rows of the data set, and `tail` will return just the last six rows. To see a different number of rows, give `head` or `tail` a second argument, the number of rows you would like to view, for example, `head(deck, 10)`.
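For instance, given the 52\-card `deck`, asking `tail` for three rows should show the last three cards:
```
tail(deck, 3)
##     face   suit value
## 50 three hearts     3
## 51   two hearts     2
## 52   ace hearts     1
```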
R can open many types of files—not just CSVs. Visit [Loading and Saving Data in R](dataio.html#dataio) to learn how to open other common types of files in R.
5\.10 Saving Data
-----------------
Before we go any further, let’s save a copy of `deck` as a new *.csv* file. That way you can email it to a colleague, store it on a thumb drive, or open it in a different program. You can save any data frame in R to a *.csv* file with the command `write.csv`. To save `deck`, run:
```
write.csv(deck, file = "cards.csv", row.names = FALSE)
```
R will turn your data frame into a plain\-text file with the comma\-separated values format and save the file to your working directory. To see where your working directory is, run *`getwd()`*. To change the location of your working directory, visit Session \> Set Working Directory \> Choose Directory in the RStudio menu bar.
You can customize the save process with `write.csv`’s large set of optional arguments (see `?write.csv` for details). However, there are three arguments that you should use *every* time you run `write.csv`.
First, you should give `write.csv` the name of the data frame that you wish to save. Next, you should provide a file name to give your file. R will take this name quite literally, so be sure to provide an extension.
Finally, you should add the argument `row.names = FALSE`. This will prevent R from adding a column of numbers at the start of your data frame. These numbers will identify your rows from 1 to 52, but it is unlikely that whatever program you open *cards.csv* in will understand the row name system. More than likely, the program will assume that the row names are the first column of data in your data frame. In fact, this is exactly what R will assume if you reopen *cards.csv*. If you save and open *cards.csv* several times in R, you’ll notice duplicate columns of row numbers forming at the start of your data frame. I can’t explain why R does this, but I can explain how to avoid it: use `row.names = FALSE` whenever you save data with `write.csv`.
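Here is a short sketch of that round trip so you can watch the mystery column appear (`reloaded` is a throwaway name of mine; the first call deliberately leaves `row.names` at its default of `TRUE`):
```
write.csv(deck, file = "cards.csv")  # row.names defaults to TRUE
reloaded <- read.csv("cards.csv")    # the old row names come back as data
head(reloaded, 2)
##   X  face   suit value
## 1 1  king spades    13
## 2 2 queen spades    12
```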
For more details about saving files, including how to compress saved files and how to save files in other formats, see [Loading and Saving Data in R](dataio.html#dataio).
Good work. You now have a virtual deck of cards to work with. Take a breather, and when you come back, we’ll start writing some functions to use on your deck.
5\.11 Summary
-------------
You can save data in R with five different objects, which let you store different types of values in different types of relationships, as in Figure [5\.6](r-objects.html#fig:structures). Of these objects, data frames are by far the most useful for data science. Data frames store one of the most common forms of data used in data science, tabular data.
Figure 5\.6: R’s most common data structures are vectors, matrices, arrays, lists, and data frames.
You can load tabular data into a data frame with RStudio’s Import Dataset button—so long as the data is saved as a plain\-text file. This requirement is not as limiting as it sounds. Most software programs can export data as a plain\-text file. So if you have an Excel file (for example) you can open the file in Excel and export the data as a CSV to use with R. In fact, opening a file in its original program is good practice. Excel files use metadata, like sheets and formulas, that help Excel work with the file. R can try to extract raw data from the file, but it won’t be as good at doing this as Microsoft Excel is. No program is better at converting Excel files than Excel. Similarly, no program is better at converting SAS Xport files than SAS, and so on.
However, you may find yourself with a program\-specific file, but not the program that created it. You wouldn’t want to buy a multi\-thousand\-dollar SAS license just to open a SAS file. Thankfully R *can* open many types of files, including files from other programs and databases. R even has its own program\-specific formats that can help you save memory and time if you know that you will be working entirely in R. If you’d like to know more about all of your options for loading and saving data in R, see [Loading and Saving Data in R](dataio.html#dataio).
[R Notation](r-notation.html#r-notation) will build upon the skills you learned in this chapter. Here, you learned how to store data in R. In [R Notation](r-notation.html#r-notation), you will learn how to access values once they’ve been stored. You’ll also write two functions that will let you start using your deck, a shuffle function and a deal function.
RStudio will ask you to select the file you want to import, then it will open a wizard to help you import the data, as in Figure [5\.4](r-objects.html#fig:wizard). Use the wizard to tell RStudio what name to give the data set. You can also use the wizard to tell RStudio which character the data set uses as a separator, which character it uses to represent decimals (usually a period in the United States and a comma in Europe), and whether or not the data set comes with a row of column names (known as a *header*). To help you out, the wizard shows you what the raw file looks like, as well as what your loaded data will look like based on the input settings.
You can also unclick the box “Strings as factors” in the wizard. I recommend doing this. If you do, R will load all of your character strings as character strings. If you do not, R will convert them to factors.
Figure 5\.4: RStudio’s import wizard.
Once everything looks right, click Import. RStudio will read in the data and save it to a data frame. RStudio will also open a data viewer, so you can see your new data in a spreadsheet format. This is a good way to check that everything came through as expected. If all worked well, your file should appear in a View tab of RStudio, like in Figure [5\.5](r-objects.html#fig:view). You can examine the data frame in the console with *`head(deck)`*.
**Online data**
You can load a plain\-text file straight from the Internet by clicking the “From Web URL…” option under Import Dataset. The file will need to have its own URL, and you will need to be connected.
Figure 5\.5: When you import a data set, RStudio will save the data to a data frame and then display the data frame in a View tab. You can open any data frame in a View tab at any time with the View function.
Now it is your turn. Download *deck.csv* and import it into RStudio. Be sure to save the output to an R object called `deck`: you’ll use it in the next few chapters. If everything goes correctly, the first few lines of your data frame should look like this:
```
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
`head` and `tail` are two functions that provide an easy way to peek at large data sets. `head` will return just the first six rows of the data set, and `tail` will return just the last six rows. To see a different number of rows, give `head` or `tails` a second argument, the number of rows you would like to view, for example, `head(deck, 10)`.
R can open many types of files—not just CSVs. Visit [Loading and Saving Data in R](dataio.html#dataio) to learn how to open other common types of files in R.
5\.10 Saving Data
-----------------
Before we go any further, let’s save a copy of `deck` as a new *.csv* file. That way you can email it to a colleague, store it on a thumb drive, or open it in a different program. You can save any data frame in R to a *.csv* file with the command `write.csv`. To save `deck`, run:
```
write.csv(deck, file = "cards.csv", row.names = FALSE)
```
R will turn your data frame into a plain\-text file with the comma\-separated values format and save the file to your working directory. To see where your working directory is, run *`getwd()`*. To change the location of your working directory, visit Session \> Set Working Directory \> Choose Directory in the RStudio menu bar.
You can customize the save process with `write.csv`’s large set of optional arguments (see `?write.csv` for details). However, there are three arguments that you should use *every* time you run `write.csv`.
First, you should give `write.csv` the name of the data frame that you wish to save. Next, you should provide a file name to give your file. R will take this name quite literally, so be sure to provide an extension.
Finally, you should add the argument `row.names = FALSE`. This will prevent R from adding a column of numbers at the start of your data frame. These numbers will identify your rows from 1 to 52, but it is unlikely that whatever program you open *cards.csv* in will understand the row name system. More than likely, the program will assume that the row names are the first column of data in your data frame. In fact, this is exactly what R will assume if you reopen *cards.csv*. If you save and open *cards.csv* several times in R, you’ll notice duplicate columns of row numbers forming at the start of your data frame. I can’t explain why R does this, but I can explain how to avoid it: use `row.names = FALSE` whenever you save data with `write.csv`.
For more details about saving files, including how to compress saved files and how to save files in other formats, see [Loading and Saving Data in R](dataio.html#dataio).
Good work. You now have a virtual deck of cards to work with. Take a breather, and when you come back, we’ll start writing some functions to use on your deck.
5\.11 Summary
-------------
You can save data in R with five different objects, which let you store different types of values in different types of relationships, as in Figure [5\.6](r-objects.html#fig:structures). Of these objects, data frames are by far the most useful for data science. Data frames store one of the most common forms of data used in data science, tabular data.
Figure 5\.6: R’s most common data structures are vectors, matrices, arrays, lists, and data frames.
You can load tabular data into a data frame with RStudio’s Import Dataset button—so long as the data is saved as a plain\-text file. This requirement is not as limiting as it sounds. Most software programs can export data as a plain\-text file. So if you have an Excel file (for example) you can open the file in Excel and export the data as a CSV to use with R. In fact, opening a file in its original program is good practice. Excel files use metadata, like sheets and formulas, that help Excel work with the file. R can try to extract raw data from the file, but it won’t be as good at doing this as Microsoft Excel is. No program is better at converting Excel files than Excel. Similarly, no program is better at converting SAS Xport files than SAS, and so on.
However, you may find yourself with a program\-specific file, but not the program that created it. You wouldn’t want to buy a multi\-thousand\-dollar SAS license just to open a SAS file. Thankfully R *can* open many types of files, including files from other programs and databases. R even has its own program\-specific formats that can help you save memory and time if you know that you will be working entirely in R. If you’d like to know more about all of your options for loading and saving data in R, see [Loading and Saving Data in R](dataio.html#dataio).
[R Notation](r-notation.html#r-notation) will build upon the skills you learned in this chapter. Here, you learned how to store data in R. In [R Notation](r-notation.html#r-notation), you will learn how to access values once they’ve been stored. You’ll also write two functions that will let you start using your deck, a shuffle function and a deal function.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/r-objects.html |
5 R Objects
===========
In this chapter, you’ll use R to assemble a deck of 52 playing cards.
You’ll start by building simple R objects that represent playing cards and then work your way up to a full\-blown table of data. In short, you’ll build the equivalent of an Excel spreadsheet from scratch. When you are finished, your deck of cards will look something like this:
```
face suit value
king spades 13
queen spades 12
jack spades 11
ten spades 10
nine spades 9
eight spades 8
...
```
Do you need to build a data set from scratch to use it in R? Not at all. You can load most data sets into R with one simple step; see [Loading Data](r-objects.html#loading). But this exercise will teach you how R stores data, and how you can assemble—or disassemble—your own data sets. You will also learn about the various types of objects available for you to use in R (not all R objects are the same!). Consider this exercise a rite of passage; by doing it, you will become an expert on storing data in R.
We’ll start with the very basics. The simplest type of object in R is an *atomic vector*. Atomic vectors are not nuclear powered, but they are very simple and they do show up everywhere. If you look closely enough, you’ll see that most structures in R are built from atomic vectors.
5\.1 Atomic Vectors
-------------------
An atomic vector is just a simple vector of data. In fact, you’ve already made an atomic vector, your `die` object from [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice). You can make an atomic vector by grouping some values of data together with `c`:
```
die <- c(1, 2, 3, 4, 5, 6)
die
## 1 2 3 4 5 6
is.vector(die)
## TRUE
```
**is.vector**
`is.vector` tests whether an object is an atomic vector. It returns `TRUE` if the object is an atomic vector and `FALSE` otherwise.
You can also make an atomic vector with just one value. R saves single values as an atomic vector of length 1:
```
five <- 5
five
## 5
is.vector(five)
## TRUE
length(five)
## 1
length(die)
## 6
```
**length**
`length` returns the length of an atomic vector.
Each atomic vector stores its values as a one\-dimensional vector, and each atomic vector can only store one type of data. You can save different types of data in R by using different types of atomic vectors. Altogether, R recognizes six basic types of atomic vectors: *doubles*, *integers*, *characters*, *logicals*, *complex*, and *raw*.
To create your card deck, you will need to use different types of atomic vectors to save different types of information (text and numbers). You can do this by using some simple conventions when you enter your data. For example, you can create an integer vector by including a capital `L` with your input. You can create a character vector by surrounding your input in quotation marks:
```
int <- 1L
text <- "ace"
```
Each type of atomic vector has its own convention (described below). R will recognize the convention and use it to create an atomic vector of the appropriate type. If you’d like to make atomic vectors that have more than one element in them, you can combine multiple elements with the `c` function from [Packages and Help Pages](packages.html#packages). Use the same convention with each element:
```
int <- c(1L, 5L)
text <- c("ace", "hearts")
```
You may wonder why R uses multiple types of vectors. Vector types help R behave as you would expect. For example, R will do math with atomic vectors that contain numbers, but not with atomic vectors that contain character strings:
```
sum(int)
## 6
sum(text)
## Error in sum(text) : invalid 'type' (character) of argument
```
But we’re getting ahead of ourselves! Get ready to say hello to the six types of atomic vectors in R.
### 5\.1\.1 Doubles
A double vector stores regular numbers. The numbers can be positive or negative, large or small, and have digits to the right of the decimal place or not. In general, R will save any number that you type in R as a double. So, for example, the die you made in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice) was a double object:
```
die <- c(1, 2, 3, 4, 5, 6)
die
## 1 2 3 4 5 6
```
You’ll usually know what type of object you are working with in R (it will be obvious), but you can also ask R what type of object an object is with `typeof`. For example:
```
typeof(die)
## "double"
```
Some R functions refer to doubles as “numerics,” and I will often do the same. Double is a computer science term. It refers to the specific number of bytes your computer uses to store a number, but I find “numeric” to be much more intuitive when doing data science.
### 5\.1\.2 Integers
Integer vectors store integers, numbers that can be written without a decimal component. As a data scientist, you won’t use the integer type very often because you can save integers as a double object.
You can specifically create an integer in R by typing a number followed by an uppercase `L`. For example:
```
int <- c(-1L, 2L, 4L)
int
## -1 2 4
typeof(int)
## "integer"
```
Note that R won’t save a number as an integer unless you include the `L`. Integer numbers without the `L` will be saved as doubles. The only difference between `4` and `4L` is how R saves the number in your computer’s memory. Integers are defined more precisely in your computer’s memory than doubles (unless the integer is *very* large or small).
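One consequence of this scheme is worth a quick sketch: R stores each integer in 32 bits, which caps how large an integer can be. Push past the cap and R returns `NA` with a warning:
```
.Machine$integer.max
## 2147483647
.Machine$integer.max + 1L
## NA
## Warning message: NAs produced by integer overflow
```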
Why would you save your data as an integer instead of a double? Sometimes a difference in precision can have surprising effects. Your computer allocates 64 bits of memory to store each double in an R program. This allows a lot of precision, but some numbers cannot be expressed exactly in 64 bits, the equivalent of a sequence of 64 ones and zeroes. For example, the number \\(\\pi\\) contains an endless sequence of digits to the right of the decimal place. Your computer must round \\(\\pi\\) to something close to, but not exactly equal to, \\(\\pi\\) to store \\(\\pi\\) in its memory. Many decimal numbers share a similar fate.
As a result, each double is accurate to about 16 significant digits. This introduces a little bit of error. In most cases, this rounding error will go unnoticed. However, in some situations, the rounding error can cause surprising results. For example, you may expect the result of the expression below to be zero, but it is not:
```
sqrt(2)^2 - 2
## 4.440892e-16
```
The square root of two cannot be expressed exactly in 16 significant digits. As a result, R has to round the quantity, and the expression resolves to something very close to—but not quite—zero.
These errors are known as *floating\-point* errors, and doing arithmetic in these conditions is known as *floating\-point arithmetic*. Floating\-point arithmetic is not a feature of R; it is a feature of computer programming. Usually floating\-point errors won’t be enough to ruin your day. Just keep in mind that they may be the cause of surprising results.
You can avoid floating\-point errors by avoiding decimals and only using integers. However, this is not an option in most data\-science situations. You cannot do much math with integers before you need a noninteger to express the result. Luckily, the errors caused by floating\-point arithmetic are usually insignificant (and when they are not, they are easy to spot). As a result, you’ll generally use doubles instead of integers as a data scientist.
### 5\.1\.3 Characters
A character vector stores small pieces of text. You can create a character vector in R by typing a character or string of characters surrounded by quotes:
```
text <- c("Hello", "World")
text
## "Hello" "World"
typeof(text)
## "character"
typeof("Hello")
## "character"
```
The individual elements of a character vector are known as *strings*. Note that a string can contain more than just letters. You can assemble a character string from numbers or symbols as well.
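For instance, here is a quick sketch; every element below is a string, even the ones built from digits:
```
strs <- c("25.2", "#", "ten")
typeof(strs)
## "character"
```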
**Exercise 5\.1 (Character or Number?)** Can you spot the difference between a character string and a number? Here’s a test: Which of these are character strings and which are numbers? `1`, `"1"`, `"one"`.
*Solution.* `"1"` and `"one"` are both character strings.
Character strings can contain number characters, but that doesn’t make them numeric. They’re just strings that happen to have numbers in them. You can tell strings from real numbers because strings come surrounded by quotes. In fact, anything surrounded by quotes in R will be treated as a character string—no matter what appears between the quotes.
It is easy to confuse R objects with character strings. Why? Because both appear as pieces of text in R code. For example, `x` is the name of an R object named “x,” while `"x"` is a character string that contains the character “x.” One is an object that contains raw data; the other is a piece of raw data itself.
Expect an error whenever you forget your quotation marks; R will start looking for an object that probably does not exist.
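For example, here is a minimal demonstration, assuming you have not created an object named `ace`:
```
ace
## Error: object 'ace' not found
```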
### 5\.1\.4 Logicals
Logical vectors store `TRUE`s and `FALSE`s, R’s form of Boolean data. Logicals are very helpful for doing things like comparisons:
```
3 > 4
## FALSE
```
Any time you type `TRUE` or `FALSE` in capital letters (without quotation marks), R will treat your input as logical data. R also assumes that `T` and `F` are shorthand for `TRUE` and `FALSE`, unless they are defined elsewhere (e.g. `T <- 500`). Since the meaning of `T` and `F` can change, it’s best to stick with `TRUE` and `FALSE`:
```
logic <- c(TRUE, FALSE, TRUE)
logic
## TRUE FALSE TRUE
typeof(logic)
## "logical"
typeof(F)
## "logical"
```
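To see why relying on the shorthand is risky, here is a short sketch. Once `T` has been redefined, code that meant to work with `TRUE` silently computes something else:
```
T <- 500
sum(c(T, T))
## 1000
rm(T) # removes the object, so T once again means TRUE
```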
### 5\.1\.5 Complex and Raw
Doubles, integers, characters, and logicals are the most common types of atomic vectors in R, but R also recognizes two more types: complex and raw. It is doubtful that you will ever use these to analyze data, but here they are for the sake of thoroughness.
Complex vectors store complex numbers. To create a complex vector, add an imaginary term to a number with `i`:
```
comp <- c(1 + 1i, 1 + 2i, 1 + 3i)
comp
## 1+1i 1+2i 1+3i
typeof(comp)
## "complex"
```
Raw vectors store raw bytes of data. Making raw vectors gets complicated, but you can make an empty raw vector of length *n* with `raw(n)`. See the help page of `raw` for more options when working with this type of data:
```
raw(3)
## 00 00 00
typeof(raw(3))
## "raw"
```
**Exercise 5\.2 (Vector of Cards)** Create an atomic vector that stores just the face names of the cards in a royal flush, for example, the ace of spades, king of spades, queen of spades, jack of spades, and ten of spades. The face name of the ace of spades would be “ace,” and “spades” is the suit.
Which type of vector will you use to save the names?
*Solution.* A character vector is the most appropriate type of atomic vector in which to save card names. You can create one with the `c` function if you surround each name with quotation marks:
```
hand <- c("ace", "king", "queen", "jack", "ten")
hand
## "ace" "king" "queen" "jack" "ten"
typeof(hand)
## "character"
```
This creates a one\-dimensional group of card names—great job! Now let’s make a more sophisticated data structure, a two\-dimensional table of card names and suits. You can build a more sophisticated object from an atomic vector by giving it some attributes and assigning it a class.
5\.2 Attributes
---------------
An attribute is a piece of information that you can attach to an atomic vector (or any R object). The attribute won’t affect any of the values in the object, and it will not appear when you display your object. You can think of an attribute as “metadata”; it is just a convenient place to put information associated with an object. R will normally ignore this metadata, but some R functions will check for specific attributes. These functions may use the attributes to do special things with the data.
You can see which attributes an object has with `attributes`. `attributes` will return `NULL` if an object has no attributes. An atomic vector, like `die`, won’t have any attributes unless you give it some:
```
attributes(die)
## NULL
```
**NULL**
R uses `NULL` to represent the null set, an empty object. `NULL` is often returned by functions whose values are undefined. You can create a `NULL` object by typing `NULL` in capital letters.
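`NULL` even has its own type, and its length is zero:
```
typeof(NULL)
## "NULL"
length(NULL)
## 0
```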
### 5\.2\.1 Names
The most common attributes to give an atomic vector are names, dimensions (dim), and classes. Each of these attributes has its own helper function that you can use to give attributes to an object. You can also use the helper functions to look up the value of these attributes for objects that already have them. For example, you can look up the value of the names attribute of `die` with `names`:
```
names(die)
## NULL
```
`NULL` means that `die` does not have a names attribute. You can give one to `die` by assigning a character vector to the output of `names`. The vector should include one name for each element in `die`:
```
names(die) <- c("one", "two", "three", "four", "five", "six")
```
Now `die` has a names attribute:
```
names(die)
## "one" "two" "three" "four" "five" "six"
attributes(die)
## $names
## [1] "one" "two" "three" "four" "five" "six"
```
R will display the names above the elements of `die` whenever you look at the vector:
```
die
## one two three four five six
## 1 2 3 4 5 6
```
However, the names won’t affect the actual values of the vector, nor will the names be affected when you manipulate the values of the vector:
```
die + 1
## one two three four five six
## 2 3 4 5 6 7
```
You can also use `names` to change the names attribute or remove it all together. To change the names, assign a new set of labels to `names`:
```
names(die) <- c("uno", "dos", "tres", "quatro", "cinco", "seis")
die
## uno dos tres quatro cinco seis
## 1 2 3 4 5 6
```
To remove the names attribute, set it to `NULL`:
```
names(die) <- NULL
die
## 1 2 3 4 5 6
```
### 5\.2\.2 Dim
You can transform an atomic vector into an *n*\-dimensional array by giving it a dimensions attribute with `dim`. To do this, set the `dim` attribute to a numeric vector of length *n*. R will reorganize the elements of the vector into *n* dimensions. Each dimension will have as many rows (or columns, etc.) as the *nth* value of the `dim` vector. For example, you can reorganize `die` into a 2 × 3 matrix (which has 2 rows and 3 columns):
```
dim(die) <- c(2, 3)
die
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
or a 3 × 2 matrix (which has 3 rows and 2 columns):
```
dim(die) <- c(3, 2)
die
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
or a 1 × 2 × 3 hypercube (which has 1 row, 2 columns, and 3 “slices”). This is a three\-dimensional structure, but R will need to show it slice by slice on your two\-dimensional computer screen:
```
dim(die) <- c(1, 2, 3)
die
## , , 1
##
## [,1] [,2]
## [1,] 1 2
##
## , , 2
##
## [,1] [,2]
## [1,] 3 4
##
## , , 3
##
## [,1] [,2]
## [1,] 5 6
```
R will always use the first value in `dim` for the number of rows and the second value for the number of columns. In general, rows always come first in R operations that deal with both rows and columns.
You may notice that you don’t have much control over how R reorganizes the values into rows and columns. For example, R always fills up each matrix by columns, instead of by rows. If you’d like more control over this process, you can use one of R’s helper functions, `matrix` or `array`. They do the same thing as changing the `dim` attribute, but they provide extra arguments to customize the process.
5\.3 Matrices
-------------
Matrices store values in a two\-dimensional array, just like a matrix from linear algebra. To create one, first give `matrix` an atomic vector to reorganize into a matrix. Then, define how many rows should be in the matrix by setting the `nrow` argument to a number. `matrix` will organize your vector of values into a matrix with the specified number of rows. Alternatively, you can set the `ncol` argument, which tells R how many columns to include in the matrix:
```
m <- matrix(die, nrow = 2)
m
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
`matrix` will fill up the matrix column by column by default, but you can fill the matrix row by row if you include the argument `byrow = TRUE`:
```
m <- matrix(die, nrow = 2, byrow = TRUE)
m
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 4 5 6
```
`matrix` also has other default arguments that you can use to customize your matrix. You can read about them at `matrix`’s help page (accessible by `?matrix`).
5\.4 Arrays
-----------
The `array` function creates an n\-dimensional array. For example, you could use `array` to sort values into a cube of three dimensions or a hypercube in 4, 5, or *n* dimensions. `array` is not as customizable as `matrix` and basically does the same thing as setting the `dim` attribute. To use `array`, provide an atomic vector as the first argument, and a vector of dimensions as the second argument, now called `dim`:
```
ar <- array(c(11:14, 21:24, 31:34), dim = c(2, 2, 3))
ar
## , , 1
##
## [,1] [,2]
## [1,] 11 13
## [2,] 12 14
##
## , , 2
##
## [,1] [,2]
## [1,] 21 23
## [2,] 22 24
##
## , , 3
##
## [,1] [,2]
## [1,] 31 33
## [2,] 32 34
```
**Exercise 5\.3 (Make a Matrix)** Create the following matrix, which stores the name and suit of every card in a royal flush.
```
## [,1] [,2]
## [1,] "ace" "spades"
## [2,] "king" "spades"
## [3,] "queen" "spades"
## [4,] "jack" "spades"
## [5,] "ten" "spades"
```
*Solution.* There is more than one way to build this matrix, but in every case, you will need to start by making a character vector with 10 values. If you start with the following character vector, you can turn it into a matrix with any of the following three commands:
```
hand1 <- c("ace", "king", "queen", "jack", "ten", "spades", "spades",
"spades", "spades", "spades")
matrix(hand1, nrow = 5)
matrix(hand1, ncol = 2)
dim(hand1) <- c(5, 2)
```
You can also start with a character vector that lists the cards in a slightly different order. In this case, you will need to ask R to fill the matrix row by row instead of column by column:
```
hand2 <- c("ace", "spades", "king", "spades", "queen", "spades", "jack",
"spades", "ten", "spades")
matrix(hand2, nrow = 5, byrow = TRUE)
matrix(hand2, ncol = 2, byrow = TRUE)
```
5\.5 Class
----------
Notice that changing the dimensions of your object will not change the type of the object, but it *will* change the object’s `class` attribute:
```
dim(die) <- c(2, 3)
typeof(die)
## "double"
class(die)
## "matrix"
```
A matrix is a special case of an atomic vector. For example, the `die` matrix is a special case of a double vector. Every element in the matrix is still a double, but the elements have been arranged into a new structure. R added a `class` attribute to `die` when you changed its dimensions. This class describes `die`’s new format. Many R functions will specifically look for an object’s `class` attribute, and then handle the object in a predetermined way based on the attribute.
Note that an object’s `class` attribute will not always appear when you run `attributes`; you may need to specifically search for it with `class`:
```
attributes(die)
## $dim
## [1] 2 3
```
You can apply `class` to objects that do not have a `class` attribute. `class` will return a value based on the object’s atomic type. Notice that the “class” of a double is “numeric,” an odd deviation, but one I am thankful for. I think that the most important property of a double vector is that it contains numbers, a property that “numeric” makes obvious:
```
class("Hello")
## "character"
class(5)
## "numeric"
```
You can also use `class` to set an object’s `class` attribute, but this is usually a bad idea. R will expect objects of a class to share certain traits, such as attributes, that your object may not possess. You’ll learn how to make and use your own classes in [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine).
### 5\.5\.1 Dates and Times
The attribute system lets R represent more types of data than just doubles, integers, characters, logicals, complexes, and raws. For example, consider the current time, which you can retrieve with `Sys.time()`. The time looks like a character string when you display it, but its data type is actually `"double"`, and its class is `"POSIXct"` `"POSIXt"` (it has two classes):
```
now <- Sys.time()
now
## "2014-03-17 12:00:00 UTC"
typeof(now)
## "double"
class(now)
## "POSIXct" "POSIXt"
```
POSIXct is a widely used framework for representing dates and times. In the POSIXct framework, each time is represented by the number of seconds that have passed between the time and 12:00 AM January 1st 1970 (in the Universal Time Coordinated (UTC) zone). For example, the time above occurs 1,395,057,600 seconds after that moment. So in the POSIXct system, the time would be saved as 1395057600\.
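If you would like to check this arithmetic, `as.POSIXct` converts a count of seconds back into a time. A one\-line sketch using the number above:
```
as.POSIXct(1395057600, origin = "1970-01-01", tz = "UTC")
## "2014-03-17 12:00:00 UTC"
```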
R creates the time object by building a double vector with one element, `1395057600`. You can see this vector by removing the `class` attribute of `now`, or by using the `unclass` function, which does the same thing:
```
unclass(now)
## 1395057600
```
R then gives the double vector a `class` attribute that contains two classes, `"POSIXct"` and `"POSIXt"`. This attribute alerts R functions that they are dealing with a POSIXct time, so they can treat it in a special way. For example, R functions will use the POSIXct standard to convert the time into a user\-friendly character string before displaying it.
You can take advantage of this system by giving the `POSIXct` class to random R objects. For example, have you ever wondered what day it was a million seconds after 12:00 a.m. Jan. 1, 1970?
```
mil <- 1000000
mil
## 1e+06
class(mil) <- c("POSIXct", "POSIXt")
mil
## "1970-01-12 13:46:40 UTC"
```
Jan. 12, 1970\. Yikes. A million seconds goes by faster than you would think. This conversion worked well because the `POSIXct` class does not rely on any additional attributes, but in general, forcing the class of an object is a bad idea.
There are many different classes of data in R and its packages, and new classes are invented every day. It would be difficult to learn about every class, but you do not have to. Most classes are only useful in specific situations. Since each class comes with its own help page, you can wait to learn about a class until you encounter it. However, there is one class of data that is so ubiquitous in R that you should learn about it alongside the atomic data types. That class is `factors`.
### 5\.5\.2 Factors
Factors are R’s way of storing categorical information, like ethnicity or eye color. Think of a factor as something like a gender; it can only have certain values (male or female), and these values may have their own idiosyncratic order (ladies first). This arrangement makes factors very useful for recording the treatment levels of a study and other categorical variables.
To make a factor, pass an atomic vector into the `factor` function. R will recode the data in the vector as integers and store the results in an integer vector. R will also add a `levels` attribute to the integer, which contains a set of labels for displaying the factor values, and a `class` attribute, which contains the class `factor`:
```
gender <- factor(c("male", "female", "female", "male"))
typeof(gender)
## "integer"
attributes(gender)
## $levels
## [1] "female" "male"
##
## $class
## [1] "factor"
```
You can see exactly how R is storing your factor with `unclass`:
```
unclass(gender)
## [1] 2 1 1 2
## attr(,"levels")
## [1] "female" "male"
```
R uses the levels attribute when it displays the factor, as you will see. R will display each `1` as `female`, the first label in the levels vector, and each `2` as `male`, the second label. If the factor included `3`s, they would be displayed as the third label, and so on:
```
gender
## male female female male
## Levels: female male
```
Factors make it easy to put categorical variables into a statistical model because the variables are already coded as numbers. However, factors can be confusing since they look like character strings but behave like integers.
R will often try to convert character strings to factors when you load and create data. In general, you will have a smoother experience if you do not let R make factors until you ask for them. I’ll show you how to do this when we start reading in data.
You can convert a factor to a character string with the `as.character` function. R will retain the display version of the factor, not the integers stored in memory:
```
as.character(gender)
## "male" "female" "female" "male"
```
Now that you understand the possibilities provided by R’s atomic vectors, let’s make a more complicated type of playing card.
**Exercise 5\.4 (Write a Card)** Many card games assign a numerical value to each card. For example, in blackjack, each face card is worth 10 points, each number card is worth between 2 and 10 points, and each ace is worth 1 or 11 points, depending on the final score.
Make a virtual playing card by combining “ace,” “heart,” and 1 into a vector. What type of atomic vector will result? Check if you are right.
*Solution.* You may have guessed that this exercise would not go well. Each atomic vector can only store one type of data. As a result, R coerces all of your values to character strings:
```
card <- c("ace", "hearts", 1)
card
## "ace" "hearts" "1"
```
This will cause trouble if you want to do math with that point value, for example, to see who won your game of blackjack.
**Data types in vectors**
If you try to put multiple types of data into a vector, R will convert the elements to a single type of data.
Since matrices and arrays are special cases of atomic vectors, they suffer from the same behavior. Each can only store one type of data.
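For example, here is a quick sketch of the same coercion happening inside a matrix:
```
matrix(c("ace", 1), nrow = 1)
## [,1] [,2]
## [1,] "ace" "1"
```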
This creates a couple of problems. First, many data sets contain multiple types of data. Simple programs like Excel and Numbers can save multiple types of data in the same data set, and you should hope that R can too. Don’t worry, it can.
Second, coercion is a common behavior in R, so you’ll want to know how it works.
5\.6 Coercion
-------------
R’s coercion behavior may seem inconvenient, but it is not arbitrary. R always follows the same rules when it coerces data types. Once you are familiar with these rules, you can use R’s coercion behavior to do surprisingly useful things.
So how does R coerce data types? If a character string is present in an atomic vector, R will convert everything else in the vector to character strings. If a vector only contains logicals and numbers, R will convert the logicals to numbers; every `TRUE` becomes a 1, and every `FALSE` becomes a 0, as shown in Figure [5\.1](r-objects.html#fig:coercion).
Figure 5\.1: R always uses the same rules to coerce data to a single type. If character strings are present, everything will be coerced to a character string. Otherwise, logicals are coerced to numerics.
This arrangement preserves information. It is easy to look at a character string and tell what information it used to contain. For example, you can easily spot the origins of `"TRUE"` and `"5"`. You can also easily back\-transform a vector of 1s and 0s to `TRUE`s and `FALSE`s.
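Here are both rules in action, along with the back\-transformation:
```
c("ace", 1)
## "ace" "1"
c(TRUE, 1, FALSE)
## 1 1 0
as.logical(c(1, 0, 1))
## TRUE FALSE TRUE
```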
R uses the same coercion rules when you try to do math with logical values. So the following code:
```
sum(c(TRUE, TRUE, FALSE, FALSE))
```
will become:
```
sum(c(1, 1, 0, 0))
## 2
```
This means that `sum` will count the number of `TRUE`s in a logical vector (and `mean` will calculate the proportion of `TRUE`s). Neat, huh?
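And the `mean` trick looks like this:
```
mean(c(TRUE, TRUE, FALSE, FALSE))
## 0.5
```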
You can explicitly ask R to convert data from one type to another with the `as` functions. R will convert the data whenever there is a sensible way to do so:
```
as.character(1)
## "1"
as.logical(1)
## TRUE
as.numeric(FALSE)
## 0
```
You now know how R coerces data types, but this won’t help you save a playing card. To do that, you will need to avoid coercion altogether. You can do this by using a new type of object, a *list*.
Before we look at lists, let’s address a question that might be on your mind.
Many data sets contain multiple types of information. The inability of vectors, matrices, and arrays to store multiple data types seems like a major limitation. So why bother with them?
In some cases, using only a single type of data is a huge advantage. Vectors, matrices, and arrays make it very easy to do math on large sets of numbers because R knows that it can manipulate each value the same way. Operations with vectors, matrices, and arrays also tend to be fast because the objects are so simple to store in memory.
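For example, arithmetic applies to every element at once, with no loop required:
```
nums <- c(2, 4, 6, 8)
nums / 2
## 1 2 3 4
```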
In other cases, allowing only a single type of data is not a disadvantage. Vectors are the most common data structure in R because they store variables very well. Each value in a variable measures the same property, so there’s no need to use different types of data.
5\.7 Lists
----------
Lists are like atomic vectors because they group data into a one\-dimensional set. However, lists do not group together individual values; lists group together R objects, such as atomic vectors and other lists. For example, you can make a list that contains a numeric vector of length 31 in its first element, a character vector of length 1 in its second element, and a new list of length 2 in its third element. To do this, use the `list` function.
`list` creates a list the same way `c` creates a vector. Separate each element in the list with a comma:
```
list1 <- list(100:130, "R", list(TRUE, FALSE))
list1
## [[1]]
## [1] 100 101 102 103 104 105 106 107 108 109 110 111 112
## [14] 113 114 115 116 117 118 119 120 121 122 123 124 125
## [27] 126 127 128 129 130
##
## [[2]]
## [1] "R"
##
## [[3]]
## [[3]][[1]]
## [1] TRUE
##
## [[3]][[2]]
## [1] FALSE
```
I left the `[1]` notation in the output so you can see how it changes for lists. The double\-bracketed indexes tell you which element of the list is being displayed. The single\-bracket indexes tell you which subelement of an element is being displayed. For example, `100` is the first subelement of the first element in the list. `"R"` is the first subelement of the second element. This two\-system notation arises because each element of a list can be *any* R object, including a new vector (or list) with its own indexes.
Lists are a basic type of object in R, on par with atomic vectors. Like atomic vectors, they are used as building blocks to create many more sophisticated types of R objects.
As you can imagine, the structure of lists can become quite complicated, but this flexibility makes lists a useful all\-purpose storage tool in R: you can group together anything with a list.
However, not every list needs to be complicated. You can store a playing card in a very simple list.
**Exercise 5\.5 (Use a List to Make a Card)** Use a list to store a single playing card, like the ace of hearts, which has a point value of one. The list should save the face of the card, the suit, and the point value in separate elements.
*Solution.* You can create your card like this. In the following example, the first element of the list is a character vector (of length 1\). The second element is also a character vector, and the third element is a numeric vector:
```
card <- list("ace", "hearts", 1)
card
## [[1]]
## [1] "ace"
##
## [[2]]
## [1] "hearts"
##
## [[3]]
## [1] 1
```
You can also use a list to store a whole deck of playing cards. Since you can save a single playing card as a list, you can save a deck of playing cards as a list of 52 sublists (one for each card). But let’s not bother—there’s a much cleaner way to do the same thing. You can use a special class of list, known as a *data frame*.
5\.8 Data Frames
----------------
Data frames are the two\-dimensional version of a list. They are far and away the most useful storage structure for data analysis, and they provide an ideal way to store an entire deck of cards. You can think of a data frame as R’s equivalent to the Excel spreadsheet because it stores data in a similar format.
Data frames group vectors together into a two\-dimensional table. Each vector becomes a column in the table. As a result, each column of a data frame can contain a different type of data; but within a column, every cell must be the same type of data, as in Figure [5\.2](r-objects.html#fig:data-frame).
Figure 5\.2: Data frames store data as a sequence of columns. Each column can be a different data type. Every column in a data frame must be the same length.
Creating a data frame by hand takes a lot of typing, but you can do it (if you like) with the `data.frame` function. Give `data.frame` any number of vectors, each separated with a comma. Each vector should be set equal to a name that describes the vector. `data.frame` will turn each vector into a column of the new data frame:
```
df <- data.frame(face = c("ace", "two", "six"),
suit = c("clubs", "clubs", "clubs"), value = c(1, 2, 3))
df
## face suit value
## ace clubs 1
## two clubs 2
## six clubs 3
```
You’ll need to make sure that each vector is the same length (or can be made so with R’s recycling rules; see Figure [2\.4](basics.html#fig:recycle)), as data frames cannot combine columns of different lengths.
In the previous code, I named the arguments in `data.frame` `face`, `suit`, and `value`, but you can name the arguments whatever you like. `data.frame` will use your argument names to label the columns of the data frame.
**Names**
You can also give names to a list or vector when you create one of these objects. Use the same syntax as with `data.frame`:
`list(face = "ace", suit = "hearts", value = 1)`
`c(face = "ace", suit = "hearts", value = "one")`
The names will be stored in the object’s `names` attribute.
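A short sketch to confirm where the names end up:
```
card <- list(face = "ace", suit = "hearts", value = 1)
names(card)
## "face" "suit" "value"
```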
If you look at the type of a data frame, you will see that it is a list. In fact, each data frame is a list with class `data.frame`. You can see what types of objects are grouped together by a list (or data frame) with the `str` function:
```
typeof(df)
## "list"
class(df)
## "data.frame"
str(df)
## 'data.frame': 3 obs. of 3 variables:
## $ face : Factor w/ 3 levels "ace","six","two": 1 3 2
## $ suit : Factor w/ 1 level "clubs": 1 1 1
## $ value: num 1 2 3
```
Notice that R saved your character strings as factors. I told you that R likes factors! It is not a very big deal here, but you can prevent this behavior by adding the argument `stringsAsFactors = FALSE` to `data.frame`:
```
df <- data.frame(face = c("ace", "two", "six"),
suit = c("clubs", "clubs", "clubs"), value = c(1, 2, 3),
stringsAsFactors = FALSE)
```
A data frame is a great way to build an entire deck of cards. You can make each row in the data frame a playing card, and each column a type of value—each with its own appropriate data type. The data frame would look something like this:
```
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
## king clubs 13
## queen clubs 12
## jack clubs 11
## ten clubs 10
## ... and so on.
```
You could create this data frame with `data.frame`, but look at the typing involved! You need to write three vectors, each with 52 elements:
```
deck <- data.frame(
face = c("king", "queen", "jack", "ten", "nine", "eight", "seven", "six",
"five", "four", "three", "two", "ace", "king", "queen", "jack", "ten",
"nine", "eight", "seven", "six", "five", "four", "three", "two", "ace",
"king", "queen", "jack", "ten", "nine", "eight", "seven", "six", "five",
"four", "three", "two", "ace", "king", "queen", "jack", "ten", "nine",
"eight", "seven", "six", "five", "four", "three", "two", "ace"),
suit = c("spades", "spades", "spades", "spades", "spades", "spades",
"spades", "spades", "spades", "spades", "spades", "spades", "spades",
"clubs", "clubs", "clubs", "clubs", "clubs", "clubs", "clubs", "clubs",
"clubs", "clubs", "clubs", "clubs", "clubs", "diamonds", "diamonds",
"diamonds", "diamonds", "diamonds", "diamonds", "diamonds", "diamonds",
"diamonds", "diamonds", "diamonds", "diamonds", "diamonds", "hearts",
"hearts", "hearts", "hearts", "hearts", "hearts", "hearts", "hearts",
"hearts", "hearts", "hearts", "hearts", "hearts"),
value = c(13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 13, 12, 11, 10, 9, 8,
7, 6, 5, 4, 3, 2, 1, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 13, 12, 11,
10, 9, 8, 7, 6, 5, 4, 3, 2, 1)
)
```
You should avoid typing large data sets in by hand whenever possible. Typing invites typos and errors, not to mention RSI. It is always better to acquire large data sets as a computer file. You can then ask R to read the file and store the contents as an object.
I’ve created a file for you to load that contains a data frame of playing\-card information, so don’t worry about typing in the code. Instead, turn your attention toward loading data into R.
5\.9 Loading Data
-----------------
You can load the `deck` data frame from the file [*deck.csv*](http://bit.ly/deck_CSV). Please take a moment to download the file before reading on. Visit the website, click “Download Zip,” and then unzip and open the folder that your web browser downloads. *deck.csv* will be inside.
*deck.csv* is a comma\-separated values file, or CSV for short. CSVs are plain\-text files, which means you can open them in a text editor (as well as many other programs). If you open *deck.csv*, you’ll notice that it contains a table of data that looks like the following table. Each row of the table is saved on its own line, and a comma is used to separate the cells within each row. Every CSV file shares this basic format:
```
"face","suit,"value"
"king","spades",13
"queen","spades,12
"jack","spades,11
"ten","spades,10
"nine","spades,9
... and so on.
```
Most data\-science applications can open plain\-text files and export data as plain\-text files. This makes plain\-text files a sort of lingua franca for data science.
To load a plain\-text file into R, click the Import Dataset icon in RStudio, shown in Figure [5\.3](r-objects.html#fig:import). Then select “From text file.”
Figure 5\.3: You can import data from plain\-text files with RStudio’s Import Dataset.
RStudio will ask you to select the file you want to import, then it will open a wizard to help you import the data, as in Figure [5\.4](r-objects.html#fig:wizard). Use the wizard to tell RStudio what name to give the data set. You can also use the wizard to tell RStudio which character the data set uses as a separator, which character it uses to represent decimals (usually a period in the United States and a comma in Europe), and whether or not the data set comes with a row of column names (known as a *header*). To help you out, the wizard shows you what the raw file looks like, as well as what your loaded data will look like based on the input settings.
You can also uncheck the box “Strings as factors” in the wizard. I recommend doing this. If you do, R will load all of your character strings as character strings. If you do not, R will convert them to factors.
Figure 5\.4: RStudio’s import wizard.
Once everything looks right, click Import. RStudio will read in the data and save it to a data frame. RStudio will also open a data viewer, so you can see your new data in a spreadsheet format. This is a good way to check that everything came through as expected. If all worked well, your file should appear in a View tab of RStudio, like in Figure [5\.5](r-objects.html#fig:view). You can examine the data frame in the console with *`head(deck)`*.
**Online data**
You can load a plain\-text file straight from the Internet by clicking the “From Web URL…” option under Import Dataset. The file will need to have its own URL, and you will need to be connected.
Figure 5\.5: When you import a data set, RStudio will save the data to a data frame and then display the data frame in a View tab. You can open any data frame in a View tab at any time with the View function.
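If you prefer typing to clicking, you can do the same import from the console. Here is a minimal sketch, assuming *deck.csv* sits in your working directory:
```
deck <- read.csv("deck.csv", stringsAsFactors = FALSE)
```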
Now it is your turn. Download *deck.csv* and import it into RStudio. Be sure to save the output to an R object called `deck`: you’ll use it in the next few chapters. If everything goes correctly, the first few lines of your data frame should look like this:
```
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
`head` and `tail` are two functions that provide an easy way to peek at large data sets. `head` will return just the first six rows of the data set, and `tail` will return just the last six rows. To see a different number of rows, give `head` or `tail` a second argument, the number of rows you would like to view, for example, `head(deck, 10)`.
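For example, assuming your `deck` keeps the order shown earlier (hearts last), `tail` would show something like:
```
tail(deck, 3)
## face suit value
## three hearts 3
## two hearts 2
## ace hearts 1
```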
R can open many types of files—not just CSVs. Visit [Loading and Saving Data in R](dataio.html#dataio) to learn how to open other common types of files in R.
5\.10 Saving Data
-----------------
Before we go any further, let’s save a copy of `deck` as a new *.csv* file. That way you can email it to a colleague, store it on a thumb drive, or open it in a different program. You can save any data frame in R to a *.csv* file with the command `write.csv`. To save `deck`, run:
```
write.csv(deck, file = "cards.csv", row.names = FALSE)
```
R will turn your data frame into a plain\-text file with the comma\-separated values format and save the file to your working directory. To see where your working directory is, run *`getwd()`*. To change the location of your working directory, visit Session \> Set Working Directory \> Choose Directory in the RStudio menu bar.
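For example (the paths below are placeholders; yours will differ):
```
getwd()
## "/Users/you/Documents"
setwd("~/Documents/cards") # a hypothetical new working directory
```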
You can customize the save process with `write.csv`’s large set of optional arguments (see `?write.csv` for details). However, there are three arguments that you should use *every* time you run `write.csv`.
First, you should give `write.csv` the name of the data frame that you wish to save. Next, you should provide a file name to give your file. R will take this name quite literally, so be sure to provide an extension.
Finally, you should add the argument `row.names = FALSE`. This will prevent R from adding a column of numbers at the start of your data frame. These numbers will identify your rows from 1 to 52, but it is unlikely that whatever program you open *cards.csv* in will understand the row name system. More than likely, the program will assume that the row names are the first column of data in your data frame. In fact, this is exactly what R will assume if you reopen *cards.csv*. If you save and open *cards.csv* several times in R, you’ll notice duplicate columns of row numbers forming at the start of your data frame. I can’t explain why R does this, but I can explain how to avoid it: use `row.names = FALSE` whenever you save data with `write.csv`.
For more details about saving files, including how to compress saved files and how to save files in other formats, see [Loading and Saving Data in R](dataio.html#dataio).
Good work. You now have a virtual deck of cards to work with. Take a breather, and when you come back, we’ll start writing some functions to use on your deck.
5\.11 Summary
-------------
You can save data in R with five different objects, which let you store different types of values in different types of relationships, as in Figure [5\.6](r-objects.html#fig:structures). Of these objects, data frames are by far the most useful for data science. Data frames store one of the most common forms of data used in data science, tabular data.
Figure 5\.6: R’s most common data structures are vectors, matrices, arrays, lists, and data frames.
You can load tabular data into a data frame with RStudio’s Import Dataset button—so long as the data is saved as a plain\-text file. This requirement is not as limiting as it sounds. Most software programs can export data as a plain\-text file. So if you have an Excel file (for example) you can open the file in Excel and export the data as a CSV to use with R. In fact, opening a file in its original program is good practice. Excel files use metadata, like sheets and formulas, that help Excel work with the file. R can try to extract raw data from the file, but it won’t be as good at doing this as Microsoft Excel is. No program is better at converting Excel files than Excel. Similarly, no program is better at converting SAS Xport files than SAS, and so on.
However, you may find yourself with a program\-specific file, but not the program that created it. You wouldn’t want to buy a multi\-thousand\-dollar SAS license just to open a SAS file. Thankfully R *can* open many types of files, including files from other programs and databases. R even has its own program\-specific formats that can help you save memory and time if you know that you will be working entirely in R. If you’d like to know more about all of your options for loading and saving data in R, see [Loading and Saving Data in R](dataio.html#dataio).
[R Notation](r-notation.html#r-notation) will build upon the skills you learned in this chapter. Here, you learned how to store data in R. In [R Notation](r-notation.html#r-notation), you will learn how to access values once they’ve been stored. You’ll also write two functions that will let you start using your deck, a shuffle function and a deal function.
5\.1 Atomic Vectors
-------------------
An atomic vector is just a simple vector of data. In fact, you’ve already made an atomic vector, your `die` object from [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice). You can make an atomic vector by grouping some values of data together with `c`:
```
die <- c(1, 2, 3, 4, 5, 6)
die
## 1 2 3 4 5 6
is.vector(die)
## TRUE
```
**is.vector**
`is.vector` tests whether an object is an atomic vector. It returns `TRUE` if the object is an atomic vector and `FALSE` otherwise.
You can also make an atomic vector with just one value. R saves single values as an atomic vector of length 1:
```
five <- 5
five
## 5
is.vector(five)
## TRUE
length(five)
## 1
length(die)
## 6
```
**length**
`length` returns the length of an atomic vector.
Each atomic vector stores its values as a one\-dimensional vector, and each atomic vector can only store one type of data. You can save different types of data in R by using different types of atomic vectors. Altogether, R recognizes six basic types of atomic vectors: *doubles*, *integers*, *characters*, *logicals*, *complex*, and *raw*.
To create your card deck, you will need to use different types of atomic vectors to save different types of information (text and numbers). You can do this by using some simple conventions when you enter your data. For example, you can create an integer vector by including a capital `L` with your input. You can create a character vector by surrounding your input in quotation marks:
```
int <- 1L
text <- "ace"
```
Each type of atomic vector has its own convention (described below). R will recognize the convention and use it to create an atomic vector of the appropriate type. If you’d like to make atomic vectors that have more than one element in them, you can combine an element with the `c` function from [Packages and Help Pages](packages.html#packages). Use the same convention with each element:
```
int <- c(1L, 5L)
text <- c("ace", "hearts")
```
You may wonder why R uses multiple types of vectors. Vector types help R behave as you would expect. For example, R will do math with atomic vectors that contain numbers, but not with atomic vectors that contain character strings:
```
sum(int)
## 6
sum(text)
## Error in sum(text) : invalid 'type' (character) of argument
```
But we’re getting ahead of ourselves! Get ready to say hello to the six types of atomic vectors in R.
### 5\.1\.1 Doubles
A double vector stores regular numbers. The numbers can be positive or negative, large or small, and have digits to the right of the decimal place or not. In general, R will save any number that you type in R as a double. So, for example, the die you made in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice) was a double object:
```
die <- c(1, 2, 3, 4, 5, 6)
die
## 1 2 3 4 5 6
```
You’ll usually know what type of object you are working with in R (it will be obvious), but you can also ask R what type an object is with `typeof`. For example:
```
typeof(die)
## "double"
```
Some R functions refer to doubles as “numerics,” and I will often do the same. Double is a computer science term. It refers to the specific number of bytes your computer uses to store a number, but I find “numeric” to be much more intuitive when doing data science.
### 5\.1\.2 Integers
Integer vectors store integers, numbers that can be written without a decimal component. As a data scientist, you won’t use the integer type very often because you can save integers as a double object.
You can specifically create an integer in R by typing a number followed by an uppercase `L`. For example:
```
int <- c(-1L, 2L, 4L)
int
## -1 2 4
typeof(int)
## "integer"
```
Note that R won’t save a number as an integer unless you include the `L`. Integer numbers without the `L` will be saved as doubles. The only difference between `4` and `4L` is how R saves the number in your computer’s memory. Integers are defined more precisely in your computer’s memory than doubles (unless the integer is *very* large or small).
Why would you save your data as an integer instead of a double? Sometimes a difference in precision can have surprising effects. Your computer allocates 64 bits of memory to store each double in an R program. This allows a lot of precision, but some numbers cannot be expressed exactly in 64 bits, the equivalent of a sequence of 64 ones and zeroes. For example, the number \\(\\pi\\) contains an endless sequence of digits to the right of the decimal place. Your computer must round \\(\\pi\\) to something close to, but not exactly equal to, \\(\\pi\\) to store \\(\\pi\\) in its memory. Many decimal numbers share a similar fate.
As a result, each double is accurate to about 16 significant digits. This introduces a little bit of error. In most cases, this rounding error will go unnoticed. However, in some situations, the rounding error can cause surprising results. For example, you may expect the result of the expression below to be zero, but it is not:
```
sqrt(2)^2 - 2
## 4.440892e-16
```
The square root of two cannot be expressed exactly in 16 significant digits. As a result, R has to round the quantity, and the expression resolves to something very close to—but not quite—zero.
These errors are known as *floating\-point* errors, and doing arithmetic in these conditions is known as *floating\-point arithmetic*. Floating\-point arithmetic is not a feature of R; it is a feature of computer programming. Usually floating\-point errors won’t be enough to ruin your day. Just keep in mind that they may be the cause of surprising results.
You can avoid floating\-point errors by avoiding decimals and only using integers. However, this is not an option in most data\-science situations. You cannot do much math with integers before you need a noninteger to express the result. Luckily, the errors caused by floating\-point arithmetic are usually insignificant (and when they are not, they are easy to spot). As a result, you’ll generally use doubles instead of integers as a data scientist.
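If you ever do need to compare two doubles, a common safeguard against floating\-point error is base R’s `all.equal`, which tests for equality within a small tolerance rather than exact equality. A brief sketch:
```
sqrt(2)^2 == 2
## FALSE
all.equal(sqrt(2)^2, 2)
## TRUE
```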
### 5\.1\.3 Characters
A character vector stores small pieces of text. You can create a character vector in R by typing a character or string of characters surrounded by quotes:
```
text <- c("Hello", "World")
text
## "Hello" "World"
typeof(text)
## "character"
typeof("Hello")
## "character"
```
The individual elements of a character vector are known as *strings*. Note that a string can contain more than just letters. You can assemble a character string from numbers or symbols as well.
**Exercise 5\.1 (Character or Number?)** Can you spot the difference between a character string and a number? Here’s a test: Which of these are character strings and which are numbers? `1`, `"1"`, `"one"`.
*Solution.* `"1"` and `"one"` are both character strings.
Character strings can contain number characters, but that doesn’t make them numeric. They’re just strings that happen to have numbers in them. You can tell strings from real numbers because strings come surrounded by quotes. In fact, anything surrounded by quotes in R will be treated as a character string—no matter what appears between the quotes.
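A quick check with `typeof` confirms that the quotes change the type:
```
typeof(1)
## "double"
typeof("1")
## "character"
```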
It is easy to confuse R objects with character strings. Why? Because both appear as pieces of text in R code. For example, `x` is the name of an R object named “x,” while `"x"` is a character string that contains the character “x.” One is an object that contains raw data; the other is a piece of raw data itself.
Expect an error whenever you forget your quotation marks; R will start looking for an object that probably does not exist.
### 5\.1\.4 Logicals
Logical vectors store `TRUE`s and `FALSE`s, R’s form of Boolean data. Logicals are very helpful for doing things like comparisons:
```
3 > 4
## FALSE
```
Any time you type `TRUE` or `FALSE` in capital letters (without quotation marks), R will treat your input as logical data. R also assumes that `T` and `F` are shorthand for `TRUE` and `FALSE`, unless they are defined elsewhere (e.g., `T <- 500`). Since the meaning of `T` and `F` can change, it’s best to stick with `TRUE` and `FALSE`:
```
logic <- c(TRUE, FALSE, TRUE)
logic
## TRUE FALSE TRUE
typeof(logic)
## "logical"
typeof(F)
## "logical"
```
### 5\.1\.5 Complex and Raw
Doubles, integers, characters, and logicals are the most common types of atomic vectors in R, but R also recognizes two more types: complex and raw. It is doubtful that you will ever use these to analyze data, but here they are for the sake of thoroughness.
Complex vectors store complex numbers. To create a complex vector, add an imaginary term to a number with `i`:
```
comp <- c(1 + 1i, 1 + 2i, 1 + 3i)
comp
## 1+1i 1+2i 1+3i
typeof(comp)
## "complex"
```
Raw vectors store raw bytes of data. Making raw vectors gets complicated, but you can make an empty raw vector of length *n* with `raw(n)`. See the help page of `raw` for more options when working with this type of data:
```
raw(3)
## 00 00 00
typeof(raw(3))
## "raw"
```
**Exercise 5\.2 (Vector of Cards)** Create an atomic vector that stores just the face names of the cards in a royal flush, for example, the ace of spades, king of spades, queen of spades, jack of spades, and ten of spades. The face name of the ace of spades would be “ace,” and “spades” is the suit.
Which type of vector will you use to save the names?
*Solution.* A character vector is the most appropriate type of atomic vector in which to save card names. You can create one with the `c` function if you surround each name with quotation marks:
```
hand <- c("ace", "king", "queen", "jack", "ten")
hand
## "ace" "king" "queen" "jack" "ten"
typeof(hand)
## "character"
```
This creates a one\-dimensional group of card names—great job! Now let’s make a more sophisticated data structure, a two\-dimensional table of card names and suits. You can build a more sophisticated object from an atomic vector by giving it some attributes and assigning it a class.
5\.2 Attributes
---------------
An attribute is a piece of information that you can attach to an atomic vector (or any R object). The attribute won’t affect any of the values in the object, and it will not appear when you display your object. You can think of an attribute as “metadata”; it is just a convenient place to put information associated with an object. R will normally ignore this metadata, but some R functions will check for specific attributes. These functions may use the attributes to do special things with the data.
You can see which attributes an object has with `attributes`. `attributes` will return `NULL` if an object has no attributes. An atomic vector, like `die`, won’t have any attributes unless you give it some:
```
attributes(die)
## NULL
```
**NULL**
R uses `NULL` to represent the null set, an empty object. `NULL` is often returned by functions whose values are undefined. You can create a `NULL` object by typing `NULL` in capital letters.
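As an aside, the general\-purpose `attr` function lets you attach any attribute you like. Here is a minimal sketch that uses a copy, `die2`, so `die` stays untouched for the examples that follow (the `"symbol"` attribute name is invented for illustration):
```
die2 <- die
attr(die2, "symbol") <- "d6"
attributes(die2)
## $symbol
## [1] "d6"
```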
### 5\.2\.1 Names
The most common attributes to give an atomic vector are names, dimensions (dim), and classes. Each of these attributes has its own helper function that you can use to give attributes to an object. You can also use the helper functions to look up the value of these attributes for objects that already have them. For example, you can look up the value of the names attribute of `die` with `names`:
```
names(die)
## NULL
```
`NULL` means that `die` does not have a names attribute. You can give one to `die` by assigning a character vector to the output of `names`. The vector should include one name for each element in `die`:
```
names(die) <- c("one", "two", "three", "four", "five", "six")
```
Now `die` has a names attribute:
```
names(die)
## "one" "two" "three" "four" "five" "six"
attributes(die)
## $names
## [1] "one" "two" "three" "four" "five" "six"
```
R will display the names above the elements of `die` whenever you look at the vector:
```
die
## one two three four five six
## 1 2 3 4 5 6
```
However, the names won’t affect the actual values of the vector, nor will the names be affected when you manipulate the values of the vector:
```
die + 1
## one two three four five six
## 2 3 4 5 6 7
```
You can also use `names` to change the names attribute or remove it altogether. To change the names, assign a new set of labels to `names`:
```
names(die) <- c("uno", "dos", "tres", "quatro", "cinco", "seis")
die
## uno dos tres quatro cinco seis
## 1 2 3 4 5 6
```
To remove the names attribute, set it to `NULL`:
```
names(die) <- NULL
die
## 1 2 3 4 5 6
```
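You can also create a vector with names in a single step by naming each value inside `c`. A quick sketch, using a throwaway `die2` object so `die` stays unnamed:
```
die2 <- c(one = 1, two = 2, three = 3, four = 4, five = 5, six = 6)
die2
## one two three four five six
## 1 2 3 4 5 6
```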
### 5\.2\.2 Dim
You can transform an atomic vector into an *n*\-dimensional array by giving it a dimensions attribute with `dim`. To do this, set the `dim` attribute to a numeric vector of length *n*. R will reorganize the elements of the vector into *n* dimensions. Each dimension will have as many rows (or columns, etc.) as the *nth* value of the `dim` vector. For example, you can reorganize `die` into a 2 × 3 matrix (which has 2 rows and 3 columns):
```
dim(die) <- c(2, 3)
die
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
or a 3 × 2 matrix (which has 3 rows and 2 columns):
```
dim(die) <- c(3, 2)
die
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
or a 1 × 2 × 3 hypercube (which has 1 row, 2 columns, and 3 “slices”). This is a three\-dimensional structure, but R will need to show it slice by slice by slice on your two\-dimensional computer screen:
```
dim(die) <- c(1, 2, 3)
die
## , , 1
##
## [,1] [,2]
## [1,] 1 2
##
## , , 2
##
## [,1] [,2]
## [1,] 3 4
##
## , , 3
##
## [,1] [,2]
## [1,] 5 6
```
R will always use the first value in `dim` for the number of rows and the second value for the number of columns. In general, rows always come first in R operations that deal with both rows and columns.
You may notice that you don’t have much control over how R reorganizes the values into rows and columns. For example, R always fills up each matrix by columns, instead of by rows. If you’d like more control over this process, you can use one of R’s helper functions, `matrix` or `array`. They do the same thing as changing the `dim` attribute, but they provide extra arguments to customize the process.
5\.3 Matrices
-------------
Matrices store values in a two\-dimensional array, just like a matrix from linear algebra. To create one, first give `matrix` an atomic vector to reorganize into a matrix. Then, define how many rows should be in the matrix by setting the `nrow` argument to a number. `matrix` will organize your vector of values into a matrix with the specified number of rows. Alternatively, you can set the `ncol` argument, which tells R how many columns to include in the matrix:
```
m <- matrix(die, nrow = 2)
m
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
`matrix` will fill up the matrix column by column by default, but you can fill the matrix row by row if you include the argument `byrow = TRUE`:
```
m <- matrix(die, nrow = 2, byrow = TRUE)
m
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 4 5 6
```
`matrix` also has other default arguments that you can use to customize your matrix. You can read about them at `matrix`’s help page (accessible by `?matrix`).
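As one example of those extra arguments, `dimnames` lets you label the rows and columns as you build the matrix. A short sketch (the row and column labels here are invented):
```
m <- matrix(die, nrow = 2,
dimnames = list(c("r1", "r2"), c("c1", "c2", "c3")))
m
## c1 c2 c3
## r1 1 3 5
## r2 2 4 6
```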
5\.4 Arrays
-----------
The `array` function creates an n\-dimensional array. For example, you could use `array` to sort values into a cube of three dimensions or a hypercube in 4, 5, or *n* dimensions. `array` is not as customizable as `matrix` and basically does the same thing as setting the `dim` attribute. To use `array`, provide an atomic vector as the first argument, and a vector of dimensions as the second argument, now called `dim`:
```
ar <- array(c(11:14, 21:24, 31:34), dim = c(2, 2, 3))
ar
## , , 1
##
## [,1] [,2]
## [1,] 11 13
## [2,] 12 14
##
## , , 2
##
## [,1] [,2]
## [1,] 21 23
## [2,] 22 24
##
## , , 3
##
## [,1] [,2]
## [1,] 31 33
## [2,] 32 34
```
**Exercise 5\.3 (Make a Matrix)** Create the following matrix, which stores the name and suit of every card in a royal flush.
```
## [,1] [,2]
## [1,] "ace" "spades"
## [2,] "king" "spades"
## [3,] "queen" "spades"
## [4,] "jack" "spades"
## [5,] "ten" "spades"
```
*Solution.* There is more than one way to build this matrix, but in every case, you will need to start by making a character vector with 10 values. If you start with the following character vector, you can turn it into a matrix with any of the following three commands:
```
hand1 <- c("ace", "king", "queen", "jack", "ten", "spades", "spades",
"spades", "spades", "spades")
matrix(hand1, nrow = 5)
matrix(hand1, ncol = 2)
dim(hand1) <- c(5, 2)
```
You can also start with a character vector that lists the cards in a slightly different order. In this case, you will need to ask R to fill the matrix row by row instead of column by column:
```
hand2 <- c("ace", "spades", "king", "spades", "queen", "spades", "jack",
"spades", "ten", "spades")
matrix(hand2, nrow = 5, byrow = TRUE)
matrix(hand2, ncol = 2, byrow = TRUE)
```
5\.5 Class
----------
Notice that changing the dimensions of your object will not change the type of the object, but it *will* change the object’s `class` attribute:
```
dim(die) <- c(2, 3)
typeof(die)
## "double"
class(die)
## "matrix"
```
A matrix is a special case of an atomic vector. For example, the `die` matrix is a special case of a double vector. Every element in the matrix is still a double, but the elements have been arranged into a new structure. R added a `class` attribute to `die` when you changed its dimensions. This class describes `die`’s new format. Many R functions will specifically look for an object’s `class` attribute, and then handle the object in a predetermined way based on the attribute.
Note that an object’s `class` attribute will not always appear when you run `attributes`; you may need to specifically search for it with `class`:
```
attributes(die)
## $dim
## [1] 2 3
```
You can apply `class` to objects that do not have a `class` attribute. `class` will return a value based on the object’s atomic type. Notice that the “class” of a double is “numeric,” an odd deviation, but one I am thankful for. I think that the most important property of a double vector is that it contains numbers, a property that “numeric” makes obvious:
```
class("Hello")
## "character"
class(5)
## "numeric"
```
You can also use `class` to set an object’s `class` attribute, but this is usually a bad idea. R will expect objects of a class to share certain traits, such as attributes, that your object may not possess. You’ll learn how to make and use your own classes in [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine).
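If you just want to test whether an object has a particular class, base R’s `inherits` function does so directly, without making you inspect the whole class vector yourself. A brief aside:
```
inherits(die, "matrix")
## TRUE
inherits(die, "data.frame")
## FALSE
```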
### 5\.5\.1 Dates and Times
The attribute system lets R represent more types of data than just doubles, integers, characters, logicals, complexes, and raws. For example, R uses attributes to represent dates and times; `Sys.time()` returns the current date and time as such an object. The time looks like a character string when you display it, but its data type is actually `"double"`, and its class is `"POSIXct"` `"POSIXt"` (it has two classes):
```
now <- Sys.time()
now
## "2014-03-17 12:00:00 UTC"
typeof(now)
## "double"
class(now)
## "POSIXct" "POSIXt"
```
POSIXct is a widely used framework for representing dates and times. In the POSIXct framework, each time is represented by the number of seconds that have passed between the time and 12:00 AM January 1st 1970 (in the Universal Time Coordinated (UTC) zone). For example, the time above occurs 1,395,057,600 seconds after then. So in the POSIXct system, the time would be saved as 1395057600\.
R creates the time object by building a double vector with one element, `1395057600`. You can see this vector by removing the `class` attribute of `now`, or by using the `unclass` function, which does the same thing:
```
unclass(now)
## 1395057600
```
R then gives the double vector a `class` attribute that contains two classes, `"POSIXct"` and `"POSIXt"`. This attribute alerts R functions that they are dealing with a POSIXct time, so they can treat it in a special way. For example, R functions will use the POSIXct standard to convert the time into a user\-friendly character string before displaying it.
You can take advantage of this system by giving the `POSIXct` class to random R objects. For example, have you ever wondered what day it was a million seconds after 12:00 a.m. Jan. 1, 1970?
```
mil <- 1000000
mil
## 1e+06
class(mil) <- c("POSIXct", "POSIXt")
mil
## "1970-01-12 13:46:40 UTC"
```
Jan. 12, 1970\. Yikes. A million seconds goes by faster than you would think. This conversion worked well because the `POSIXct` class does not rely on any additional attributes, but in general, forcing the class of an object is a bad idea.
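A safer way to get the same answer is `as.POSIXct`, which performs the conversion explicitly instead of overwriting the class attribute. A sketch (supplying `origin` makes the epoch explicit):
```
as.POSIXct(1000000, origin = "1970-01-01", tz = "UTC")
## "1970-01-12 13:46:40 UTC"
```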
There are many different classes of data in R and its packages, and new classes are invented every day. It would be difficult to learn about every class, but you do not have to. Most classes are only useful in specific situations. Since each class comes with its own help page, you can wait to learn about a class until you encounter it. However, there is one class of data that is so ubiquitous in R that you should learn about it alongside the atomic data types. That class is `factors`.
### 5\.5\.2 Factors
Factors are R’s way of storing categorical information, like ethnicity or eye color. Think of a factor as something like a gender; it can only have certain values (male or female), and these values may have their own idiosyncratic order (ladies first). This arrangement makes factors very useful for recording the treatment levels of a study and other categorical variables.
To make a factor, pass an atomic vector into the `factor` function. R will recode the data in the vector as integers and store the results in an integer vector. R will also add a `levels` attribute to the integer, which contains a set of labels for displaying the factor values, and a `class` attribute, which contains the class `factor`:
```
gender <- factor(c("male", "female", "female", "male"))
typeof(gender)
## "integer"
attributes(gender)
## $levels
## [1] "female" "male"
##
## $class
## [1] "factor"
```
You can see exactly how R is storing your factor with `unclass`:
```
unclass(gender)
## [1] 2 1 1 2
## attr(,"levels")
## [1] "female" "male"
```
R uses the levels attribute when it displays the factor, as you will see. R will display each `1` as `female`, the first label in the levels vector, and each `2` as `male`, the second label. If the factor included `3`s, they would be displayed as the third label, and so on:
```
gender
## male female female male
## Levels: female male
```
Factors make it easy to put categorical variables into a statistical model because the variables are already coded as numbers. However, factors can be confusing since they look like character strings but behave like integers.
R will often try to convert character strings to factors when you load and create data. In general, you will have a smoother experience if you do not let R make factors until you ask for them. I’ll show you how to do this when we start reading in data.
You can convert a factor to a character string with the `as.character` function. R will retain the display version of the factor, not the integers stored in memory:
```
as.character(gender)
## "male" "female" "female" "male"
```
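Relatedly, `levels` returns the label set on its own, and `as.integer` extracts the underlying codes if you ever need them:
```
levels(gender)
## "female" "male"
as.integer(gender)
## 2 1 1 2
```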
Now that you understand the possibilities provided by R’s atomic vectors, let’s make a more complicated type of playing card.
**Exercise 5\.4 (Write a Card)** Many card games assign a numerical value to each card. For example, in blackjack, each face card is worth 10 points, each number card is worth between 2 and 10 points, and each ace is worth 1 or 11 points, depending on the final score.
Make a virtual playing card by combining “ace,” “heart,” and 1 into a vector. What type of atomic vector will result? Check if you are right.
*Solution.* You may have guessed that this exercise would not go well. Each atomic vector can only store one type of data. As a result, R coerces all of your values to character strings:
```
card <- c("ace", "hearts", 1)
card
## "ace" "hearts" "1"
```
This will cause trouble if you want to do math with that point value, for example, to see who won your game of blackjack.
**Data types in vectors**
If you try to put multiple types of data into a vector, R will convert the elements to a single type of data.
Since matrices and arrays are special cases of atomic vectors, they suffer from the same behavior. Each can only store one type of data.
This creates a couple of problems. First, many data sets contain multiple types of data. Simple programs like Excel and Numbers can save multiple types of data in the same data set, and you should hope that R can too. Don’t worry, it can.
Second, coercion is a common behavior in R, so you’ll want to know how it works.
5\.6 Coercion
-------------
R’s coercion behavior may seem inconvenient, but it is not arbitrary. R always follows the same rules when it coerces data types. Once you are familiar with these rules, you can use R’s coercion behavior to do surprisingly useful things.
So how does R coerce data types? If a character string is present in an atomic vector, R will convert everything else in the vector to character strings. If a vector only contains logicals and numbers, R will convert the logicals to numbers; every `TRUE` becomes a 1, and every `FALSE` becomes a 0, as shown in Figure [5\.1](r-objects.html#fig:coercion).
Figure 5\.1: R always uses the same rules to coerce data to a single type. If character strings are present, everything will be coerced to a character string. Otherwise, logicals are coerced to numerics.
This arrangement preserves information. It is easy to look at a character string and tell what information it used to contain. For example, you can easily spot the origins of `"TRUE"` and `"5"`. You can also easily back\-transform a vector of 1s and 0s to `TRUE`s and `FALSE`s.
R uses the same coercion rules when you try to do math with logical values. So the following code:
```
sum(c(TRUE, TRUE, FALSE, FALSE))
```
will become:
```
sum(c(1, 1, 0, 0))
## 2
```
This means that `sum` will count the number of `TRUE`s in a logical vector (and `mean` will calculate the proportion of `TRUE`s). Neat, huh?
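For example, `mean` gives the proportion of `TRUE`s directly:
```
mean(c(TRUE, TRUE, FALSE, FALSE))
## 0.5
```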
You can explicitly ask R to convert data from one type to another with the `as` functions. R will convert the data whenever there is a sensible way to do so:
```
as.character(1)
## "1"
as.logical(1)
## TRUE
as.numeric(FALSE)
## 0
```
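When there is no sensible conversion, R returns `NA` (a missing value) and warns you rather than guessing:
```
as.numeric("hello")
## NA
## Warning message:
## NAs introduced by coercion
```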
You now know how R coerces data types, but this won’t help you save a playing card. To do that, you will need to avoid coercion altogether. You can do this by using a new type of object, a *list*.
Before we look at lists, let’s address a question that might be on your mind.
Many data sets contain multiple types of information. The inability of vectors, matrices, and arrays to store multiple data types seems like a major limitation. So why bother with them?
In some cases, using only a single type of data is a huge advantage. Vectors, matrices, and arrays make it very easy to do math on large sets of numbers because R knows that it can manipulate each value the same way. Operations with vectors, matrices, and arrays also tend to be fast because the objects are so simple to store in memory.
In other cases, allowing only a single type of data is not a disadvantage. Vectors are the most common data structure in R because they store variables very well. Each value in a variable measures the same property, so there’s no need to use different types of data.
5\.7 Lists
----------
Lists are like atomic vectors because they group data into a one\-dimensional set. However, lists do not group together individual values; lists group together R objects, such as atomic vectors and other lists. For example, you can make a list that contains a numeric vector of length 31 in its first element, a character vector of length 1 in its second element, and a new list of length 2 in its third element. To do this, use the `list` function.
`list` creates a list the same way `c` creates a vector. Separate each element in the list with a comma:
```
list1 <- list(100:130, "R", list(TRUE, FALSE))
list1
## [[1]]
## [1] 100 101 102 103 104 105 106 107 108 109 110 111 112
## [14] 113 114 115 116 117 118 119 120 121 122 123 124 125
## [27] 126 127 128 129 130
##
## [[2]]
## [1] "R"
##
## [[3]]
## [[3]][[1]]
## [1] TRUE
##
## [[3]][[2]]
## [1] FALSE
```
I left the `[1]` notation in the output so you can see how it changes for lists. The double\-bracketed indexes tell you which element of the list is being displayed. The single\-bracket indexes tell you which subelement of an element is being displayed. For example, `100` is the first subelement of the first element in the list. `"R"` is the first subelement of the second element. This two\-system notation arises because each element of a list can be *any* R object, including a new vector (or list) with its own indexes.
Lists are a basic type of object in R, on par with atomic vectors. Like atomic vectors, they are used as building blocks to create many more sophisticated types of R objects.
As you can imagine, the structure of lists can become quite complicated, but this flexibility makes lists a useful all\-purpose storage tool in R: you can group together anything with a list.
However, not every list needs to be complicated. You can store a playing card in a very simple list.
**Exercise 5\.5 (Use a List to Make a Card)** Use a list to store a single playing card, like the ace of hearts, which has a point value of one. The list should save the face of the card, the suit, and the point value in separate elements.
*Solution.* You can create your card like this. In the following example, the first element of the list is a character vector (of length 1\). The second element is also a character vector, and the third element is a numeric vector:
```
card <- list("ace", "hearts", 1)
card
## [[1]]
## [1] "ace"
##
## [[2]]
## [1] "hearts"
##
## [[3]]
## [1] 1
```
You can also use a list to store a whole deck of playing cards. Since you can save a single playing card as a list, you can save a deck of playing cards as a list of 52 sublists (one for each card). But let’s not bother—there’s a much cleaner way to do the same thing. You can use a special class of list, known as a *data frame*.
5\.8 Data Frames
----------------
Data frames are the two\-dimensional version of a list. They are far and away the most useful storage structure for data analysis, and they provide an ideal way to store an entire deck of cards. You can think of a data frame as R’s equivalent to the Excel spreadsheet because it stores data in a similar format.
Data frames group vectors together into a two\-dimensional table. Each vector becomes a column in the table. As a result, each column of a data frame can contain a different type of data; but within a column, every cell must be the same type of data, as in Figure [5\.2](r-objects.html#fig:data-frame).
Figure 5\.2: Data frames store data as a sequence of columns. Each column can be a different data type. Every column in a data frame must be the same length.
Creating a data frame by hand takes a lot of typing, but you can do it (if you like) with the `data.frame` function. Give `data.frame` any number of vectors, each separated with a comma. Each vector should be set equal to a name that describes the vector. `data.frame` will turn each vector into a column of the new data frame:
```
df <- data.frame(face = c("ace", "two", "six"),
suit = c("clubs", "clubs", "clubs"), value = c(1, 2, 3))
df
## face suit value
## ace clubs 1
## two clubs 2
## six clubs 3
```
You’ll need to make sure that each vector is the same length (or can be made so with R’s recycling rules; see Figure [2\.4](basics.html#fig:recycle)), as data frames cannot combine columns of different lengths.
In the previous code, I named the arguments in `data.frame` `face`, `suit`, and `value`, but you can name the arguments whatever you like. `data.frame` will use your argument names to label the columns of the data frame.
**Names**
You can also give names to a list or vector when you create one of these objects. Use the same syntax as with `data.frame`:
`list(face = "ace", suit = "hearts", value = 1)`
`c(face = "ace", suit = "hearts", value = "one")`
The names will be stored in the object’s `names` attribute.
If you look at the type of a data frame, you will see that it is a list. In fact, each data frame is a list with class `data.frame`. You can see what types of objects are grouped together by a list (or data frame) with the `str` function:
```
typeof(df)
## "list"
class(df)
## "data.frame"
str(df)
## 'data.frame': 3 obs. of 3 variables:
## $ face : Factor w/ 3 levels "ace","six","two": 1 3 2
## $ suit : Factor w/ 1 level "clubs": 1 1 1
## $ value: num 1 2 3
```
Notice that R saved your character strings as factors. I told you that R likes factors! It is not a very big deal here, but you can prevent this behavior by adding the argument `stringsAsFactors = FALSE` to `data.frame`:
```
df <- data.frame(face = c("ace", "two", "six"),
suit = c("clubs", "clubs", "clubs"), value = c(1, 2, 3),
stringsAsFactors = FALSE)
```
A data frame is a great way to build an entire deck of cards. You can make each row in the data frame a playing card, and each column a type of value—each with its own appropriate data type. The data frame would look something like this:
```
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
## king clubs 13
## queen clubs 12
## jack clubs 11
## ten clubs 10
## ... and so on.
```
You could create this data frame with `data.frame`, but look at the typing involved! You need to write three vectors, each with 52 elements:
```
deck <- data.frame(
face = c("king", "queen", "jack", "ten", "nine", "eight", "seven", "six",
"five", "four", "three", "two", "ace", "king", "queen", "jack", "ten",
"nine", "eight", "seven", "six", "five", "four", "three", "two", "ace",
"king", "queen", "jack", "ten", "nine", "eight", "seven", "six", "five",
"four", "three", "two", "ace", "king", "queen", "jack", "ten", "nine",
"eight", "seven", "six", "five", "four", "three", "two", "ace"),
suit = c("spades", "spades", "spades", "spades", "spades", "spades",
"spades", "spades", "spades", "spades", "spades", "spades", "spades",
"clubs", "clubs", "clubs", "clubs", "clubs", "clubs", "clubs", "clubs",
"clubs", "clubs", "clubs", "clubs", "clubs", "diamonds", "diamonds",
"diamonds", "diamonds", "diamonds", "diamonds", "diamonds", "diamonds",
"diamonds", "diamonds", "diamonds", "diamonds", "diamonds", "hearts",
"hearts", "hearts", "hearts", "hearts", "hearts", "hearts", "hearts",
"hearts", "hearts", "hearts", "hearts", "hearts"),
value = c(13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 13, 12, 11, 10, 9, 8,
7, 6, 5, 4, 3, 2, 1, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 13, 12, 11,
10, 9, 8, 7, 6, 5, 4, 3, 2, 1)
)
```
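As an aside, you could trim the repetition with `rep`, which repeats a vector a given number of times. This is a sketch, not the book’s code; it exploits the fact that every suit holds the same thirteen faces in the same order:
```
faces <- c("king", "queen", "jack", "ten", "nine", "eight", "seven",
"six", "five", "four", "three", "two", "ace")
suits <- c("spades", "clubs", "diamonds", "hearts")
deck <- data.frame(
face = rep(faces, times = 4), # king through ace, once per suit
suit = rep(suits, each = 13), # thirteen copies of each suit
value = rep(13:1, times = 4)
)
```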
You should avoid typing large data sets in by hand whenever possible. Typing invites typos and errors, not to mention RSI. It is always better to acquire large data sets as a computer file. You can then ask R to read the file and store the contents as an object.
I’ve created a file for you to load that contains a data frame of playing\-card information, so don’t worry about typing in the code. Instead, turn your attention toward loading data into R.
5\.9 Loading Data
-----------------
You can load the `deck` data frame from the file [*deck.csv*](http://bit.ly/deck_CSV). Please take a moment to download the file before reading on. Visit the website, click “Download Zip,” and then unzip and open the folder that your web browser downloads. *deck.csv* will be inside.
*deck.csv* is a comma\-separated values file, or CSV for short. CSVs are plain\-text files, which means you can open them in a text editor (as well as many other programs). If you open *deck.csv*, you’ll notice that it contains a table of data that looks like the following table. Each row of the table is saved on its own line, and a comma is used to separate the cells within each row. Every CSV file shares this basic format:
```
"face","suit,"value"
"king","spades",13
"queen","spades,12
"jack","spades,11
"ten","spades,10
"nine","spades,9
... and so on.
```
Most data\-science applications can open plain\-text files and export data as plain\-text files. This makes plain\-text files a sort of lingua franca for data science.
To load a plain\-text file into R, click the Import Dataset icon in RStudio, shown in Figure [5\.3](r-objects.html#fig:import). Then select “From text file.”
Figure 5\.3: You can import data from plain\-text files with RStudio’s Import Dataset.
RStudio will ask you to select the file you want to import, then it will open a wizard to help you import the data, as in Figure [5\.4](r-objects.html#fig:wizard). Use the wizard to tell RStudio what name to give the data set. You can also use the wizard to tell RStudio which character the data set uses as a separator, which character it uses to represent decimals (usually a period in the United States and a comma in Europe), and whether or not the data set comes with a row of column names (known as a *header*). To help you out, the wizard shows you what the raw file looks like, as well as what your loaded data will look like based on the input settings.
You can also unclick the box “Strings as factors” in the wizard. I recommend doing this. If you do, R will load all of your character strings as character strings. If you do not, R will convert them to factors.
Figure 5\.4: RStudio’s import wizard.
Once everything looks right, click Import. RStudio will read in the data and save it to a data frame. RStudio will also open a data viewer, so you can see your new data in a spreadsheet format. This is a good way to check that everything came through as expected. If all worked well, your file should appear in a View tab of RStudio, like in Figure [5\.5](r-objects.html#fig:view). You can examine the data frame in the console with *`head(deck)`*.
**Online data**
You can load a plain\-text file straight from the Internet by clicking the “From Web URL…” option under Import Dataset. The file will need to have its own URL, and you will need to be connected.
Figure 5\.5: When you import a data set, RStudio will save the data to a data frame and then display the data frame in a View tab. You can open any data frame in a View tab at any time with the View function.
Now it is your turn. Download *deck.csv* and import it into RStudio. Be sure to save the output to an R object called `deck`: you’ll use it in the next few chapters. If everything goes correctly, the first few lines of your data frame should look like this:
```
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
`head` and `tail` are two functions that provide an easy way to peek at large data sets. `head` will return just the first six rows of the data set, and `tail` will return just the last six rows. To see a different number of rows, give `head` or `tail` a second argument, the number of rows you would like to view, for example, `head(deck, 10)`.
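You can also skip the wizard entirely and load the file from the console with `read.csv`. A sketch, assuming *deck.csv* sits in your working directory:
```
deck <- read.csv("deck.csv", stringsAsFactors = FALSE)
head(deck, 3)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
```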
R can open many types of files—not just CSVs. Visit [Loading and Saving Data in R](dataio.html#dataio) to learn how to open other common types of files in R.
5\.10 Saving Data
-----------------
Before we go any further, let’s save a copy of `deck` as a new *.csv* file. That way you can email it to a colleague, store it on a thumb drive, or open it in a different program. You can save any data frame in R to a *.csv* file with the command `write.csv`. To save `deck`, run:
```
write.csv(deck, file = "cards.csv", row.names = FALSE)
```
R will turn your data frame into a plain\-text file with the comma\-separated values format and save the file to your working directory. To see where your working directory is, run *`getwd()`*. To change the location of your working directory, visit Session \> Set Working Directory \> Choose Directory in the RStudio menu bar.
You can customize the save process with `write.csv`’s large set of optional arguments (see `?write.csv` for details). However, there are three arguments that you should use *every* time you run `write.csv`.
First, you should give `write.csv` the name of the data frame that you wish to save. Next, you should provide a file name to give your file. R will take this name quite literally, so be sure to provide an extension.
Finally, you should add the argument `row.names = FALSE`. This will prevent R from adding a column of numbers at the start of your data frame. These numbers will identify your rows from 1 to 52, but it is unlikely that whatever program you open *cards.csv* in will understand the row name system. More than likely, the program will assume that the row names are the first column of data in your data frame. In fact, this is exactly what R will assume if you reopen *cards.csv*. If you save and open *cards.csv* several times in R, you’ll notice duplicate columns of row numbers forming at the start of your data frame. I can’t explain why R does this, but I can explain how to avoid it: use `row.names = FALSE` whenever you save data with `write.csv`.
For more details about saving files, including how to compress saved files and how to save files in other formats, see [Loading and Saving Data in R](dataio.html#dataio).
Good work. You now have a virtual deck of cards to work with. Take a breather, and when you come back, we’ll start writing some functions to use on your deck.
5\.11 Summary
-------------
You can save data in R with five different objects, which let you store different types of values in different types of relationships, as in Figure [5\.6](r-objects.html#fig:structures). Of these objects, data frames are by far the most useful for data science. Data frames store one of the most common forms of data used in data science, tabular data.
Figure 5\.6: R’s most common data structures are vectors, matrices, arrays, lists, and data frames.
You can load tabular data into a data frame with RStudio’s Import Dataset button—so long as the data is saved as a plain\-text file. This requirement is not as limiting as it sounds. Most software programs can export data as a plain\-text file. So if you have an Excel file (for example) you can open the file in Excel and export the data as a CSV to use with R. In fact, opening a file in its original program is good practice. Excel files use metadata, like sheets and formulas, that help Excel work with the file. R can try to extract raw data from the file, but it won’t be as good at doing this as Microsoft Excel is. No program is better at converting Excel files than Excel. Similarly, no program is better at converting SAS Xport files than SAS, and so on.
However, you may find yourself with a program\-specific file, but not the program that created it. You wouldn’t want to buy a multi\-thousand\-dollar SAS license just to open a SAS file. Thankfully R *can* open many types of files, including files from other programs and databases. R even has its own program\-specific formats that can help you save memory and time if you know that you will be working entirely in R. If you’d like to know more about all of your options for loading and saving data in R, see [Loading and Saving Data in R](dataio.html#dataio).
[R Notation](r-notation.html#r-notation) will build upon the skills you learned in this chapter. Here, you learned how to store data in R. In [R Notation](r-notation.html#r-notation), you will learn how to access values once they’ve been stored. You’ll also write two functions that will let you start using your deck, a shuffle function and a deal function.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/r-notation.html |
6 R Notation
============
Now that you have a deck of cards, you need a way to do card\-like things with it. First, you’ll want to reshuffle the deck from time to time. And next, you’ll want to deal cards from the deck (one card at a time, whatever card is on top—we’re not cheaters).
To do these things, you’ll need to work with the individual values inside your data frame, a task essential to data science. For example, to deal a card from the top of your deck, you’ll need to write a function that selects the first row of values in your data frame, like this:
```
deal(deck)
## face suit value
## king spades 13
```
You can select values within an R object with R’s notation system.
6\.1 Selecting Values
---------------------
R has a notation system that lets you extract values from R objects. To extract a value or set of values from a data frame, write the data frame’s name followed by a pair of hard brackets:
```
deck[ , ]
```
Between the brackets will go two indexes separated by a comma. The indexes tell R which values to return. R will use the first index to subset the rows of the data frame and the second index to subset the columns.
You have a choice when it comes to writing indexes. There are six different ways to write an index for R, and each does something slightly different. They are all very simple and quite handy, so let’s take a look at each of them. You can create indexes with:
* Positive integers
* Negative integers
* Zero
* Blank spaces
* Logical values
* Names
The simplest of these to use is positive integers.
### 6\.1\.1 Positive Integers
R treats positive integers just like *ij* notation in linear algebra: `deck[i,j]` will return the value of `deck` that is in the *ith* row and the *jth* column, Figure [6\.1](r-notation.html#fig:positive). Notice that *i* and *j* only need to be integers in the mathematical sense. They can be saved as numerics in R.
```
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
deck[1, 1]
## "king"
```
To extract more than one value, use a vector of positive integers. For example, you can return the first row of `deck` with `deck[1, c(1, 2, 3)]` or `deck[1, 1:3]`:
```
deck[1, c(1, 2, 3)]
## face suit value
## king spades 13
```
R will return the values of `deck` that are in both the first row and the first, second, and third columns. Note that R won’t actually remove these values from `deck`. R will give you a new set of values which are copies of the original values. You can then save this new set to an R object with R’s assignment operator:
```
new <- deck[1, c(1, 2, 3)]
new
## face suit value
## king spades 13
```
**Repetition**
If you repeat a number in your index, R will return the corresponding value(s) more than once in your “subset.” This code will return the first row of `deck` twice:
```
deck[c(1, 1), c(1, 2, 3)]
## face suit value
## king spades 13
## king spades 13
```
Figure 6\.1: R uses the *ij* notation system of linear algebra. The commands in this figure will return the shaded values.
R’s notation system is not limited to data frames. You can use the same syntax to select values in any R object, as long as you supply one index for each dimension of the object. So, for example, you can subset a vector (which has one dimension) with a single index:
```
vec <- c(6, 1, 3, 6, 10, 5)
vec[1:3]
## 6 1 3
```
**Indexing begins at 1**
In some programming languages, indexing begins with 0\. This means that 0 returns the first element of a vector, 1 returns the second element, and so on.
This isn’t the case with R. Indexing in R behaves just like indexing in linear algebra. The first element is always indexed by 1\. Why is R different? Maybe because it was written for mathematicians. Those of us who learned indexing from a linear algebra course wonder why computer programmers start with 0\.
**drop \= FALSE**
If you select two or more columns from a data frame, R will return a new data frame:
```
deck[1:2, 1:2]
## face suit
## king spades
## queen spades
```
However, if you select a single column, R will return a vector:
```
deck[1:2, 1]
## "king" "queen"
```
If you would prefer a data frame instead, you can add the optional argument `drop = FALSE` between the brackets:
```
deck[1:2, 1, drop = FALSE]
## face
## king
## queen
```
This method also works for selecting a single column from a matrix or an array.
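Here is a quick sketch with a small matrix (`mat` is a made\-up object, just for illustration):
```
mat <- matrix(1:6, nrow = 2)
mat[ , 1]
## 1 2

mat[ , 1, drop = FALSE]  # still a 2 x 1 matrix
## 1
## 2
```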
### 6\.1\.2 Negative Integers
Negative integers do the exact opposite of positive integers when indexing. R will return every element *except* the elements in a negative index. For example, `deck[-1, 1:3]` will return everything *but* the first row of `deck`. `deck[-(2:52), 1:3]` will return the first row (and exclude everything else):
```
deck[-(2:52), 1:3]
## face suit value
## king spades 13
```
Negative integers are a more efficient way to subset than positive integers if you want to include the majority of a data frame’s rows or columns.
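For example, a single negative index is a tidy way to drop the `value` column:
```
head(deck[ , -3], 2)
## face suit
## king spades
## queen spades
```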
R will return an error if you try to pair a negative integer with a positive integer in the *same* index:
```
deck[c(-1, 1), 1]
## Error in xj[i] : only 0's may be mixed with negative subscripts
```
However, you can use both negative and positive integers to subset an object if you use them in *different* indexes (e.g., if you use one in the rows index and one in the columns index, like `deck[-1, 1]`).
### 6\.1\.3 Zero
What would happen if you used zero as an index? Zero is neither a positive integer nor a negative integer, but R will still use it to do a type of subsetting. R will return nothing from a dimension when you use zero as an index. This creates an empty object:
```
deck[0, 0]
## data frame with 0 columns and 0 rows
```
To be honest, indexing with zero is not very helpful.
### 6\.1\.4 Blank Spaces
You can use a blank space to tell R to extract *every* value in a dimension. This lets you subset an object on one dimension but not the others, which is useful for extracting entire rows or columns from a data frame:
```
deck[1, ]
## face suit value
## king spades 13
```
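Leaving the row index blank works the same way for columns:
```
deck[ , 3]
## 13 12 11 10 9 8 7 6 5 4 3 2 1 13 12 11 10 9 8
## 7 6 5 4 3 2 1 13 12 11 10 9 8 7 6 5 4 3 2
## 1 13 12 11 10 9 8 7 6 5 4 3 2 1
```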
### 6\.1\.5 Logical Values
If you supply a vector of `TRUE`s and `FALSE`s as your index, R will match each `TRUE` and `FALSE` to a row in your data frame (or a column depending on where you place the index). R will then return each row that corresponds to a `TRUE`, Figure [6\.2](r-notation.html#fig:logicals).
It may help to imagine R reading through the data frame and asking, “Should I return the *ith* row of the data structure?” and then consulting the *ith* value of the index for its answer. For this system to work, your vector must be as long as the dimension you are trying to subset:
```
deck[1, c(TRUE, TRUE, FALSE)]
## face suit
## king spades
rows <- c(TRUE, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F,
F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F,
F, F, F, F, F, F, F, F, F, F, F, F, F, F)
deck[rows, ]
## face suit value
## king spades 13
```
Figure 6\.2: You can use vectors of TRUEs and FALSEs to tell R exactly which values you want to extract and which you do not. The command would return just the numbers 1, 6, and 5\.
This system may seem odd—who wants to type so many `TRUE`s and `FALSE`s?—but it will become very powerful in [Modifying Values](modify.html#modify).
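As a preview, you will usually *compute* the logical vector rather than type it out. A brief sketch:
```
kings <- deck[ , 3] == 13  # TRUE wherever value equals 13
deck[kings, ]              # returns the four king rows, one per suit
```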
### 6\.1\.6 Names
Finally, you can ask for the elements you want by name—if your object has names (see [Names](r-objects.html#names)). This is a common way to extract the columns of a data frame, since columns almost always have names:
```
deck[1, c("face", "suit", "value")]
## face suit value
## king spades 13
# the entire value column
deck[ , "value"]
## 13 12 11 10 9 8 7 6 5 4 3 2 1 13 12 11 10 9 8
## 7 6 5 4 3 2 1 13 12 11 10 9 8 7 6 5 4 3 2
## 1 13 12 11 10 9 8 7 6 5 4 3 2 1
```
6\.2 Deal a Card
----------------
Now that you know the basics of R’s notation system, let’s put it to use.
**Exercise 6\.1 (Deal a Card)** Complete the following code to make a function that returns the first row of a data frame:
```
deal <- function(cards) {
# ?
}
```
*Solution.* You can use any of the systems that return the first row of your data frame to write a `deal` function. I’ll use positive integers and blanks because I think they are easy to understand:
```
deal <- function(cards) {
cards[1, ]
}
```
The function does exactly what you want: it deals the top card from your data set. However, the function becomes less impressive if you run `deal` over and over again:
```
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
```
`deal` always returns the king of spades because `deck` doesn’t know that we’ve dealt the card away. Hence, the king of spades stays where it is, at the top of the deck ready to be dealt again. This is a difficult problem to solve, and we will *deal* with it in [Environments](environments.html#environments-1). In the meantime, you can fix the problem by shuffling your deck after every deal. Then a new card will always be at the top.
Shuffling is a temporary compromise: the probabilities at play in your deck will not match the probabilities that occur when you play a game with a single deck of cards. For example, there will still be a nonzero probability that the king of spades appears twice in a row, which cannot happen when you deal from a single real deck. However, things are not as bad as they may seem. Most casinos use five or six decks at a time in card games to prevent card counting. The probabilities that you would encounter in those situations are very close to the ones we will create here.
6\.3 Shuffle the Deck
---------------------
When you shuffle a real deck of cards, you randomly rearrange the order of the cards. In your virtual deck, each card is a row in a data frame. To shuffle the deck, you need to randomly reorder the rows in the data frame. Can this be done? You bet! And you already know everything you need to do it.
This may sound silly, but start by extracting every row in your data frame:
```
deck2 <- deck[1:52, ]
head(deck2)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
What do you get? A new data frame whose order hasn’t changed at all. What if you asked R to extract the rows in a different order? For example, you could ask for row 2, *then* row 1, and then the rest of the cards:
```
deck3 <- deck[c(2, 1, 3:52), ]
head(deck3)
## face suit value
## queen spades 12
## king spades 13
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
R complies. You’ll get all the rows back, and they’ll come in the order you ask for them. If you want the rows to come in a random order, then you need to sort the integers from 1 to 52 into a random order and use the results as a row index. How could you generate such a random collection of integers? With our friendly neighborhood `sample` function:
```
random <- sample(1:52, size = 52)
random
## 35 28 39 9 18 29 26 45 47 48 23 22 21 16 32 38 1 15 20
## 11 2 4 14 49 34 25 8 6 10 41 46 17 33 5 7 44 3 27
## 50 12 51 40 52 24 19 13 42 37 43 36 31 30
deck4 <- deck[random, ]
head(deck4)
## face suit value
## five diamonds 5
## queen diamonds 12
## ace diamonds 1
## five spades 5
## nine clubs 9
## jack diamonds 11
```
Now the new set is truly shuffled. You’ll be finished once you wrap these steps into a function.
**Exercise 6\.2 (Shuffle a Deck)** Use the preceding ideas to write a `shuffle` function. `shuffle` should take a data frame and return a shuffled copy of the data frame.
*Solution.* Your `shuffle` function will look like the one that follows:
```
shuffle <- function(cards) {
random <- sample(1:52, size = 52)
cards[random, ]
}
```
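If you would like `shuffle` to work on data frames of any size, one variation (an optional refinement, not part of the solution above) replaces 52 with `nrow(cards)`:
```
shuffle <- function(cards) {
  random <- sample(1:nrow(cards), size = nrow(cards))
  cards[random, ]
}
```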
Nice work! Now you can shuffle your cards between each deal:
```
deal(deck)
## face suit value
## king spades 13
deck2 <- shuffle(deck)
deal(deck2)
## face suit value
## jack clubs 11
```
6\.4 Dollar Signs and Double Brackets
-------------------------------------
Two types of objects in R obey an optional second system of notation. You can extract values from data frames and lists with the `$` syntax. You will encounter the `$` syntax again and again as an R programmer, so let’s examine how it works.
To select a column from a data frame, write the data frame’s name and the column name separated by a `$`. Notice that no quotes should go around the column name:
```
deck$value
## 13 12 11 10 9 8 7 6 5 4 3 2 1 13 12 11 10 9 8 7
## 6 5 4 3 2 1 13 12 11 10 9 8 7 6 5 4 3 2 1 13
## 12 11 10 9 8 7 6 5 4 3 2 1
```
R will return all of the values in the column as a vector. This `$` notation is incredibly useful because you will often store the variables of your data sets as columns in a data frame. From time to time, you’ll want to run a function like `mean` or `median` on the values in a variable. In R, these functions expect a vector of values as input, and `deck$value` delivers your data in just the right format:
```
mean(deck$value)
## 7
median(deck$value)
## 7
```
You can use the same `$` notation with the elements of a list, if they have names. This notation has an advantage with lists, too. If you subset a list in the usual way, R will return a *new* list that has the elements you requested. This is true even if you only request a single element.
To see this, make a list:
```
lst <- list(numbers = c(1, 2), logical = TRUE, strings = c("a", "b", "c"))
lst
## $numbers
## [1] 1 2
## $logical
## [1] TRUE
## $strings
## [1] "a" "b" "c"
```
And then subset it:
```
lst[1]
## $numbers
## [1] 1 2
```
The result is a smaller *list* with one element. That element is the vector `c(1, 2)`. This can be annoying because many R functions do not work with lists. For example, `sum(lst[1])` will return an error. It would be horrible if once you stored a vector in a list, you could only ever get it back as a list:
```
sum(lst[1])
## Error in sum(lst[1]) : invalid 'type' (list) of argument
```
When you use the `$` notation, R will return the selected values as they are, with no list structure around them:
```
lst$numbers
## 1 2
```
You can then immediately feed the results to a function:
```
sum(lst$numbers)
## 3
```
If the elements in your list do not have names (or you do not wish to use the names), you can use two brackets, instead of one, to subset the list. This notation will do the same thing as the `$` notation:
```
lst[[1]]
## 1 2
```
In other words, if you subset a list with single\-bracket notation, R will return a smaller list. If you subset a list with double\-bracket notation, R will return just the values that were inside an element of the list. You can combine this feature with any of R’s indexing methods:
```
lst["numbers"]
## $numbers
## [1] 1 2
lst[["numbers"]]
## 1 2
```
This difference is subtle but important. In the R community, there is a popular, and helpful, way to think about it, Figure [6\.3](r-notation.html#fig:trains). Imagine that each list is a train and each element is a train car. When you use single brackets, R selects individual train cars and returns them as a new train. Each car keeps its contents, but those contents are still inside a train car (i.e., a list). When you use double brackets, R actually unloads the car and gives you back the contents.
Figure 6\.3: It can be helpful to think of your list as a train. Use single brackets to select train cars, double brackets to select the contents inside of a car.
**Never attach**
In R’s early days, it became popular to use `attach()` on a data set once you had it loaded. Don’t do this! `attach` recreates a computing environment similar to those used in other statistics applications like Stata and SPSS, which crossover users liked. However, R is not Stata or SPSS. R is optimized to use the R computing environment, and running `attach()` can cause confusion with some R functions.
What does `attach()` do? On the surface, `attach` saves you typing. If you attach the `deck` data set, you can refer to each of its variables by name; instead of typing `deck$face`, you can just type `face`. But typing isn’t bad. It gives you a chance to be explicit, and in computer programming, explicit is good. Attaching a data set creates the possibility that R will confuse two variable names. If this occurs within a function, you’re likely to get unusable results and an unhelpful error message to explain what happened.
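If you are curious, here is a sketch of the kind of confusion `attach` invites (run at your own risk, and remember to `detach` afterward):
```
attach(deck)
face                  # works: R finds deck's face column on the search path
deck$face <- "queen"  # changes deck itself...
face                  # ...but the attached copy still holds the old values
detach(deck)
```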
Now that you are an expert at retrieving values stored in R, let’s summarize what you’ve accomplished.
6\.5 Summary
------------
You have learned how to access values that have been stored in R. You can retrieve a copy of values that live inside a data frame and use the copies for new computations.
In fact, you can use R’s notation system to access values in any R object. To use it, write the name of an object followed by brackets and indexes. If your object is one\-dimensional, like a vector, you only need to supply one index. If it is two\-dimensional, like a data frame, you need to supply two indexes separated by a comma. And, if it is *n*\-dimensional, you need to supply *n* indexes, each separated by a comma.
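For example, with a three\-dimensional array (a made\-up object, just to illustrate):
```
arr <- array(1:24, dim = c(2, 3, 4))
arr[1, 2, 3]
## 15
```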
In [Modifying Values](modify.html#modify), you’ll take this system a step further and learn how to change the actual values that are stored inside your data frame. This is all adding up to something special: complete control of your data. You can now store your data in your computer, retrieve individual values at will, and use your computer to perform correct calculations with those values.
Does this sound basic? It may be, but it is also powerful and essential for efficient data science. You no longer need to memorize everything in your head, nor worry about doing mental arithmetic wrong. This low\-level control over your data is also a prerequisite for more efficient R programs, the subject of [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine).
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/r-notation.html |
6 R Notation
============
Now that you have a deck of cards, you need a way to do card\-like things with it. First, you’ll want to reshuffle the deck from time to time. And next, you’ll want to deal cards from the deck (one card at a time, whatever card is on top—we’re not cheaters).
To do these things, you’ll need to work with the individual values inside your data frame, a task essential to data science. For example, to deal a card from the top of your deck, you’ll need to write a function that selects the first row of values in your data frame, like this
```
deal(deck)
## face suit value
## king spades 13
```
You can select values within an R object with R’s notation system.
6\.1 Selecting Values
---------------------
R has a notation system that lets you extract values from R objects. To extract a value or set of values from a data frame, write the data frame’s name followed by a pair of hard brackets:
```
deck[ , ]
```
Between the brackets will go two indexes separated by a comma. The indexes tell R which values to return. R will use the first index to subset the rows of the data frame and the second index to subset the columns.
You have a choice when it comes to writing indexes. There are six different ways to write an index for R, and each does something slightly different. They are all very simple and quite handy, so let’s take a look at each of them. You can create indexes with:
* Positive integers
* Negative integers
* Zero
* Blank spaces
* Logical values
* Names
The simplest of these to use is positive integers.
### 6\.1\.1 Positive Integers
R treats positive integers just like *ij* notation in linear algebra: `deck[i,j]` will return the value of `deck` that is in the *ith* row and the *jth* column, Figure [6\.1](r-notation.html#fig:positive). Notice that *i* and *j* only need to be integers in the mathematical sense. They can be saved as numerics in R
```
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
deck[1, 1]
## "king"
```
To extract more than one value, use a vector of positive integers. For example, you can return the first row of `deck` with `deck[1, c(1, 2, 3)]` or `deck[1, 1:3]`:
```
deck[1, c(1, 2, 3)]
## face suit value
## king spades 13
```
R will return the values of `deck` that are in both the first row and the first, second, and third columns. Note that R won’t actually remove these values from `deck`. R will give you a new set of values which are copies of the original values. You can then save this new set to an R object with R’s assignment operator:
```
new <- deck[1, c(1, 2, 3)]
new
## face suit value
## king spades 13
```
**Repetition**
If you repeat a number in your index, R will return the corresponding value(s) more than once in your “subset.” This code will return the first row of `deck` twice:
```
deck[c(1, 1), c(1, 2, 3)]
## face suit value
## king spades 13
## king spades 13
```
Figure 6\.1: R uses the *ij* notation system of linear algebra. The commands in this figure will return the shaded values.
R’s notation system is not limited to data frames. You can use the same syntax to select values in any R object, as long as you supply one index for each dimension of the object. So, for example, you can subset a vector (which has one dimension) with a single index:
```
vec <- c(6, 1, 3, 6, 10, 5)
vec[1:3]
## 6 1 3
```
**Indexing begins at 1**
In some programming languages, indexing begins with 0\. This means that 0 returns the first element of a vector, 1 returns the second element, and so on.
This isn’t the case with R. Indexing in R behaves just like indexing in linear algebra. The first element is always indexed by 1\. Why is R different? Maybe because it was written for mathematicians. Those of us who learned indexing from a linear algebra course wonder why computers programmers start with 0\.
**drop \= FALSE**
If you select two or more columns from a data frame, R will return a new data frame:
```
deck[1:2, 1:2]
## face suit
## king spades
## queen spades
```
However, if you select a single column, R will return a vector:
```
deck[1:2, 1]
## "king" "queen"
```
If you would prefer a data frame instead, you can add the optional argument `drop = FALSE` between the brackets:
```
deck[1:2, 1, drop = FALSE]
## face
## king
## queen
```
This method also works for selecting a single column from a matrix or an array.
### 6\.1\.2 Negative Integers
Negative integers do the exact opposite of positive integers when indexing. R will return every element *except* the elements in a negative index. For example, `deck[-1, 1:3]` will return everything *but* the first row of `deck`. `deck[-(2:52), 1:3]` will return the first row (and exclude everything else):
```
deck[-(2:52), 1:3]
## face suit value
## king spades 13
```
Negative integers are a more efficient way to subset than positive integers if you want to include the majority of a data frame’s rows or columns.
R will return an error if you try to pair a negative integer with a positive integer in the *same* index:
```
deck[c(-1, 1), 1]
## Error in xj[i] : only 0's may be mixed with negative subscripts
```
However, you can use both negative and positive integers to subset an object if you use them in *different* indexes (e.g., if you use one in the rows index and one in the columns index, like `deck[-1, 1]`).
### 6\.1\.3 Zero
What would happen if you used zero as an index? Zero is neither a positive integer nor a negative integer, but R will still use it to do a type of subsetting. R will return nothing from a dimension when you use zero as an index. This creates an empty object:
```
deck[0, 0]
## data frame with 0 columns and 0 rows
```
To be honest, indexing with zero is not very helpful.
### 6\.1\.4 Blank Spaces
You can use a blank space to tell R to extract *every* value in a dimension. This lets you subset an object on one dimension but not the others, which is useful for extracting entire rows or columns from a data frame:
```
deck[1, ]
## face suit value
## king spades 13
```
### 6\.1\.5 Logical Values
If you supply a vector of `TRUE`s and `FALSE`s as your index, R will match each `TRUE` and `FALSE` to a row in your data frame (or a column depending on where you place the index). R will then return each row that corresponds to a `TRUE`, Figure [6\.2](r-notation.html#fig:logicals).
It may help to imagine R reading through the data frame and asking, "Should I return the \_i\_th row of the data structure?" and then consulting the \_i\_th value of the index for its answer. For this system to work, your vector must be as long as the dimension you are trying to subset:
```
deck[1, c(TRUE, TRUE, FALSE)]
## face suit
## king spades
rows <- c(TRUE, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F,
F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F,
F, F, F, F, F, F, F, F, F, F, F, F, F, F)
deck[rows, ]
## face suit value
## king spades 13
```
Figure 6\.2: You can use vectors of TRUEs and FALSEs to tell R exactly which values you want to extract and which you do not. The command would return just the numbers 1, 6, and 5\.
This system may seem odd—who wants to type so many `TRUE`s and `FALSE`s?—but it will become very powerful in [Modifying Values](modify.html#modify).
### 6\.1\.6 Names
Finally, you can ask for the elements you want by name—if your object has names (see [Names](r-objects.html#names)). This is a common way to extract the columns of a data frame, since columns almost always have names:
```
deck[1, c("face", "suit", "value")]
## face suit value
## king spades 13
# the entire value column
deck[ , "value"]
## 13 12 11 10 9 8 7 6 5 4 3 2 1 13 12 11 10 9 8
## 7 6 5 4 3 2 1 13 12 11 10 9 8 7 6 5 4 3 2
## 1 13 12 11 10 9 8 7 6 5 4 3 2 1
```
6\.2 Deal a Card
----------------
Now that you know the basics of R’s notation system, let’s put it to use.
**Exercise 6\.1 (Deal a Card)** Complete the following code to make a function that returns the first row of a data frame:
```
deal <- function(cards) {
# ?
}
```
*Solution.* You can use any of the systems that return the first row of your data frame to write a `deal` function. I’ll use positive integers and blanks because I think they are easy to understand:
```
deal <- function(cards) {
cards[1, ]
}
```
The function does exactly what you want: it deals the top card from your data set. However, the function becomes less impressive if you run `deal` over and over again:
```
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
```
`deal` always returns the king of spades because `deck` doesn’t know that we’ve dealt the card away. Hence, the king of spades stays where it is, at the top of the deck ready to be dealt again. This is a difficult problem to solve, and we will *deal* with it in [Environments](environments.html#environments-1). In the meantime, you can fix the problem by shuffling your deck after every deal. Then a new card will always be at the top.
Shuffling is a temporary compromise: the probabilities at play in your deck will not match the probabilities that occur when you play a game with a single deck of cards. For example, there will still be a probability that the king of spades appears twice in a row. However, things are not as bad as they may seem. Most casinos use five or six decks at a time in card games to prevent card counting. The probabilities that you would encounter in those situations are very close to the ones we will create here.
6\.3 Shuffle the Deck
---------------------
When you shuffle a real deck of cards, you randomly rearrange the order of the cards. In your virtual deck, each card is a row in a data frame. To shuffle the deck, you need to randomly reorder the rows in the data frame. Can this be done? You bet! And you already know everything you need to do it.
This may sound silly, but start by extracting every row in your data frame:
```
deck2 <- deck[1:52, ]
head(deck2)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
What do you get? A new data frame whose order hasn’t changed at all. What if you asked R to extract the rows in a different order? For example, you could ask for row 2, *then* row 1, and then the rest of the cards:
```
deck3 <- deck[c(2, 1, 3:52), ]
head(deck3)
## face suit value
## queen spades 12
## king spades 13
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
R complies. You’ll get all the rows back, and they’ll come in the order you ask for them. If you want the rows to come in a random order, then you need to sort the integers from 1 to 52 into a random order and use the results as a row index. How could you generate such a random collection of integers? With our friendly neighborhood `sample` function:
```
random <- sample(1:52, size = 52)
random
## 35 28 39 9 18 29 26 45 47 48 23 22 21 16 32 38 1 15 20
## 11 2 4 14 49 34 25 8 6 10 41 46 17 33 5 7 44 3 27
## 50 12 51 40 52 24 19 13 42 37 43 36 31 30
deck4 <- deck[random, ]
head(deck4)
## face suit value
## five diamonds 5
## queen diamonds 12
## ace diamonds 1
## five spades 5
## nine clubs 9
## jack diamonds 11
```
Now the new set is truly shuffled. You’ll be finished once you wrap these steps into a function.
**Exercise 6\.2 (Shuffle a Deck)** Use the preceding ideas to write a `shuffle` function. `shuffle` should take a data frame and return a shuffled copy of the data frame.
*Solution.* Your `shuffle` function will look like the one that follows:
```
shuffle <- function(cards) {
random <- sample(1:52, size = 52)
cards[random, ]
}
```
Nice work! Now you can shuffle your cards between each deal:
```
deal(deck)
## face suit value
## king spades 13
deck2 <- shuffle(deck)
deal(deck2)
## face suit value
## jack clubs 11
```
6\.4 Dollar Signs and Double Brackets
-------------------------------------
Two types of object in R obey an optional second system of notation. You can extract values from data frames and lists with the `$` syntax. You will encounter the `$` syntax again and again as an R programmer, so let’s examine how it works.
To select a column from a data frame, write the data frame’s name and the column name separated by a `$`. Notice that no quotes should go around the column name:
```
deck$value
## 13 12 11 10 9 8 7 6 5 4 3 2 1 13 12 11 10 9 8 7
## 6 5 4 3 2 1 13 12 11 10 9 8 7 6 5 4 3 2 1 13
## 12 11 10 9 8 7 6 5 4 3 2 1
```
R will return all of the values in the column as a vector. This `$` notation is incredibly useful because you will often store the variables of your data sets as columns in a data frame. From time to time, you’ll want to run a function like `mean` or `median` on the values in a variable. In R, these functions expect a vector of values as input, and `deck$value` delivers your data in just the right format:
```
mean(deck$value)
## 7
median(deck$value)
## 7
```
You can use the same `$` notation with the elements of a list, if they have names. This notation has an advantage with lists, too. If you subset a list in the usual way, R will return a *new* list that has the elements you requested. This is true even if you only request a single element.
To see this, make a list:
```
lst <- list(numbers = c(1, 2), logical = TRUE, strings = c("a", "b", "c"))
lst
## $numbers
## [1] 1 2
## $logical
## [1] TRUE
## $strings
## [1] "a" "b" "c"
```
And then subset it:
```
lst[1]
## $numbers
## [1] 1 2
```
The result is a smaller *list* with one element. That element is the vector `c(1, 2)`. This can be annoying because many R functions do not work with lists. For example, `sum(lst[1])` will return an error. It would be horrible if once you stored a vector in a list, you could only ever get it back as a list:
```
sum(lst[1])
## Error in sum(lst[1]) : invalid 'type' (list) of argument
```
When you use the `$` notation, R will return the selected values as they are, with no list structure around them:
```
lst$numbers
## 1 2
```
You can then immediately feed the results to a function:
```
sum(lst$numbers)
## 3
```
If the elements in your list do not have names (or you do not wish to use the names), you can use two brackets, instead of one, to subset the list. This notation will do the same thing as the `$` notation:
```
lst[[1]]
## 1 2
```
In other words, if you subset a list with single\-bracket notation, R will return a smaller list. If you subset a list with double\-bracket notation, R will return just the values that were inside an element of the list. You can combine this feature with any of R’s indexing methods:
```
lst["numbers"]
## $numbers
## [1] 1 2
lst[["numbers"]]
## 1 2
```
This difference is subtle but important. In the R community, there is a popular, and helpful, way to think about it, Figure [6\.3](r-notation.html#fig:trains). Imagine that each list is a train and each element is a train car. When you use single brackets, R selects individual train cars and returns them as a new train. Each car keeps its contents, but those contents are still inside a train car (i.e., a list). When you use double brackets, R actually unloads the car and gives you back the contents.
Figure 6\.3: It can be helpful to think of your list as a train. Use single brackets to select train cars, double brackets to select the contents inside of a car.
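The difference has a practical side effect, too: `$` only accepts a literal name, but double brackets will also accept a name stored in a character variable. In the sketch below, `which_element` is just an illustrative variable name:

```
which_element <- "numbers"
lst[[which_element]]
## 1 2
lst$which_element
## NULL
```

R looks for an element literally named `which_element` in the second call, finds none, and returns `NULL`.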
**Never attach**
In R’s early days, it became popular to use `attach()` on a data set once you had it loaded. Don’t do this! `attach` recreates a computing environment similar to those used in other statistics applications like Stata and SPSS, which crossover users liked. However, R is not Stata or SPSS. R is optimized to use the R computing environment, and running `attach()` can cause confusion with some R functions.
What does `attach()` do? On the surface, `attach` saves you typing. If you attach the `deck` data set, you can refer to each of its variables by name; instead of typing `deck$face`, you can just type `face`. But typing isn’t bad. It gives you a chance to be explicit, and in computer programming, explicit is good. Attaching a data set creates the possibility that R will confuse two variable names. If this occurs within a function, you’re likely to get unusable results and an unhelpful error message to explain what happened.
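If it is the typing that tempts you toward `attach`, a safer pattern is the `with` function, which makes a data frame's columns visible to a single expression and then cleans up after itself. A minimal sketch with `deck`:

```
# evaluate one expression with deck's columns in scope
with(deck, mean(value))
## 7
```

Unlike `attach`, `with` changes nothing about your environment once the expression finishes.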
Now that you are an expert at retrieving values stored in R, let’s summarize what you’ve accomplished.
6\.5 Summary
------------
You have learned how to access values that have been stored in R. You can retrieve a copy of values that live inside a data frame and use the copies for new computations.
In fact, you can use R’s notation system to access values in any R object. To use it, write the name of an object followed by brackets and indexes. If your object is one\-dimensional, like a vector, you only need to supply one index. If it is two\-dimensional, like a data frame, you need to supply two indexes separated by a comma. And, if it is *n*\-dimensional, you need to supply *n* indexes, each separated by a comma.
In [Modifying Values](modify.html#modify), you’ll take this system a step further and learn how to change the actual values that are stored inside your data frame. This is all adding up to something special: complete control of your data. You can now store your data in your computer, retrieve individual values at will, and use your computer to perform correct calculations with those values.
Does this sound basic? It may be, but it is also powerful and essential for efficient data science. You no longer need to memorize everything in your head, nor worry about doing mental arithmetic wrong. This low\-level control over your data is also a prerequisite for more efficient R programs, the subject of [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine).
7 Modifying Values
==================
Are you ready to play some games with your virtual deck? Not so fast! The point system in your deck of cards doesn’t align well with many card games. For example, in war and poker, aces are usually scored higher than kings. They’d have a point value of 14, not 1\.
In this task, you will change the point system of your deck three times to match three different games: war, hearts, and blackjack. Each of these games will teach you something different about modifying the values inside of a data set. Start by making a copy of `deck` that you can manipulate. This will ensure that you always have a pristine copy of `deck` to fall back on (should things go awry):
```
deck2 <- deck
```
### 7\.0\.1 Changing Values in Place
You can use R’s notation system to modify values within an R object. First, describe the value (or values) you wish to modify. Then use the assignment operator `<-` to overwrite those values. R will update the selected values *in the original object*. Let’s put this into action with a real example:
```
vec <- c(0, 0, 0, 0, 0, 0)
vec
## 0 0 0 0 0 0
```
Here’s how you can select the first value of `vec`:
```
vec[1]
## 0
```
And here is how you can modify it:
```
vec[1] <- 1000
vec
## 1000 0 0 0 0 0
```
You can replace multiple values at once as long as the number of new values equals the number of selected values:
```
vec[c(1, 3, 5)] <- c(1, 1, 1)
vec
## 1 0 1 0 1 0
vec[4:6] <- vec[4:6] + 1
vec
## 1 0 1 1 2 1
```
You can also create values that do not yet exist in your object. R will expand the object to accommodate the new values:
```
vec[7] <- 0
vec
## 1 0 1 1 2 1 0
```
This provides a great way to add new variables to your data set:
```
deck2$new <- 1:52
head(deck2)
## face suit value new
## king spades 13 1
## queen spades 12 2
## jack spades 11 3
## ten spades 10 4
## nine spades 9 5
## eight spades 8 6
```
You can also remove columns from a data frame (and elements from a list) by assigning them the symbol `NULL`:
```
deck2$new <- NULL
head(deck2)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
In the game of war, aces are king (figuratively speaking). They receive the highest value of all the cards, which would be something like 14\. Every other card gets the value that it already has in `deck`. To play war, you just need to change the values of your aces from 1 to 14\.
As long as you haven’t shuffled your deck, you know just where the aces are. They appear every 13 cards. Hence, you can describe them with R’s notation system:
```
deck2[c(13, 26, 39, 52), ]
## face suit value
## ace spades 1
## ace clubs 1
## ace diamonds 1
## ace hearts 1
```
You can single out just the *values* of the aces by subsetting the columns dimension of `deck2`. Or, even better, you can subset the column vector `deck2$value`:
```
deck2[c(13, 26, 39, 52), 3]
## 1 1 1 1
deck2$value[c(13, 26, 39, 52)]
## 1 1 1 1
```
Now all you have to do is assign a new set of values to these old values. The set of new values will have to be the same size as the set of values that you are replacing. So you could save `c(14, 14, 14, 14)` into the ace values, or you could just save *`14`* and rely on R’s recycling rules to expand `14` to `c(14, 14, 14, 14)`:
```
deck2$value[c(13, 26, 39, 52)] <- c(14, 14, 14, 14)
# or
deck2$value[c(13, 26, 39, 52)] <- 14
```
Notice that the values change *in place*. You don’t end up with a modified *copy* of `deck2`; the new values will appear inside `deck2`:
```
head(deck2, 13)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 14
```
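If you would like to see the recycling rules mentioned above in isolation, here is a tiny sketch with a throwaway vector (the values are arbitrary):

```
x <- c(0, 0, 0, 0)
# R recycles c(1, 2) to c(1, 2, 1, 2) to fill the four selected values
x[1:4] <- c(1, 2)
x
## 1 2 1 2
```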
The same technique will work whether you store your data in a vector, matrix, array, list, or data frame. Just describe the values that you want to change with R’s notation system, then assign over those values with R’s assignment operator.
Things worked very easily in this example because you knew exactly where each ace was. The cards were sorted in an orderly manner and an ace appeared every 13 rows.
But what if the deck had been shuffled? You could look through all the cards and note the locations of the aces, but that would be tedious. If your data frame were larger, it might be impossible:
```
deck3 <- shuffle(deck)
```
Where are the aces now?
```
head(deck3)
## face suit value
## queen clubs 12
## king clubs 13
## ace spades 1 # an ace
## nine clubs 9
## seven spades 7
## queen diamonds 12
```
Why not ask R to find the aces for you? You can do this with logical subsetting. Logical subsetting provides a way to do targeted extraction and modification with R objects, a sort of search\-and\-destroy mission inside your own data sets.
### 7\.0\.2 Logical Subsetting
Do you remember R’s logical index system, [logicals](r-objects.html#logicals)? To recap, you can select values with a vector of `TRUE`s and `FALSE`s. The vector must be the same length as the dimension that you wish to subset. R will return every element that matches a TRUE:
```
vec
## 1 0 1 1 2 1 0
vec[c(FALSE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE)]
## 2
```
At first glance, this system might seem impractical. Who wants to type out long vectors of TRUEs and FALSEs? No one. But you don’t have to. You can let a logical test create a vector of TRUEs and FALSEs for you.
#### 7\.0\.2\.1 Logical Tests
A logical test is a comparison like “is one less than two?”, `1 < 2`, or “is three greater than four?”, `3 > 4`. R provides seven logical operators that you can use to make comparisons, shown in Table [7\.1](modify.html#tab:logop).
Table 7\.1: R’s Logical Operators
| Operator | Syntax | Tests |
| --- | --- | --- |
| `>` | `a > b` | Is a greater than b? |
| `>=` | `a >= b` | Is a greater than or equal to b? |
| `<` | `a < b` | Is a less than b? |
| `<=` | `a <= b` | Is a less than or equal to b? |
| `==` | `a == b` | Is a equal to b? |
| `!=` | `a != b` | Is a not equal to b? |
| `%in%` | `a %in% c(a, b, c)` | Is a in the group c(a, b, c)? |
Each operator returns a `TRUE` or a `FALSE`. If you use an operator to compare vectors, R will do element\-wise comparisons—just like it does with the arithmetic operators:
```
1 > 2
## FALSE
1 > c(0, 1, 2)
## TRUE FALSE FALSE
c(1, 2, 3) == c(3, 2, 1)
## FALSE TRUE FALSE
```
`%in%` is the only operator that does not do normal element\-wise execution. `%in%` tests whether the value(s) on the left side are in the vector on the right side. If you provide a vector on the left side, `%in%` will *not* pair up the values on the left with the values on the right and then do element\-wise tests. Instead, `%in%` will independently test whether each value on the left is *somewhere* in the vector on the right:
```
1 %in% c(3, 4, 5)
## FALSE
c(1, 2) %in% c(3, 4, 5)
## FALSE FALSE
c(1, 2, 3) %in% c(3, 4, 5)
## FALSE FALSE TRUE
c(1, 2, 3, 4) %in% c(3, 4, 5)
## FALSE FALSE TRUE TRUE
```
Notice that you test for equality with a double equals sign, `==`, and not a single equals sign, `=`, which is another way to write `<-`. It is easy to forget and use `a = b` to test if `a` equals `b`. Unfortunately, you’ll be in for a nasty surprise. R won’t return a `TRUE` or `FALSE`, because it won’t have to: `a` *will* equal `b`, because you just ran the equivalent of `a <- b`.
**`=` is an assignment operator**
Be careful not to confuse `=` with `==`. `=` does the same thing as `<-`: it assigns a value to an object.
You can compare any two R objects with a logical operator; however, logical operators make the most sense if you compare two objects of the same data type. If you compare objects of different data types, R will use its coercion rules to coerce the objects to the same type before it makes the comparison.
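You can see those coercion rules at work with a pair of quick tests. In the first, R coerces the number 1 to the string `"1"`; in the second, R coerces `TRUE` to the number 1:

```
1 == "1"
## TRUE
TRUE == 1
## TRUE
```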
**Exercise 7\.1 (How many Aces?)** Extract the `face` column of `deck2` and test whether each value is equal to `ace`. As a challenge, use R to quickly count how many cards are equal to `ace`.
*Solution.* You can extract the `face` column with R’s `$` notation:
```
deck2$face
## "king" "queen" "jack" "ten" "nine"
## "eight" "seven" "six" "five" "four"
## "three" "two" "ace" "king" "queen"
## "jack" "ten" "nine" "eight" "seven"
## "six" "five" "four" "three" "two"
## "ace" "king" "queen" "jack" "ten"
## "nine" "eight" "seven" "six" "five"
## "four" "three" "two" "ace" "king"
## "queen" "jack" "ten" "nine" "eight"
## "seven" "six" "five" "four" "three"
## "two" "ace"
```
Next, you can use the `==` operator to test whether each value is equal to `ace`. In the following code, R will use its recycling rules to individually compare every value of `deck2$face` to `"ace"`. Notice that the quotation marks are important. If you leave them out, R will try to find an object named `ace` to compare against `deck2$face`:
```
deck2$face == "ace"
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE TRUE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE TRUE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE TRUE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE TRUE
```
You can use `sum` to quickly count the number of `TRUE`s in the previous vector. Remember that R will coerce logicals to numerics when you do math with them. R will turn `TRUE`s into ones and `FALSE`s into zeroes. As a result, sum will count the number of `TRUE`s:
```
sum(deck2$face == "ace")
## 4
```
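The same coercion gives you a bonus statistic: because `mean` also turns `TRUE`s into ones and `FALSE`s into zeroes, it returns the *proportion* of `TRUE`s:

```
mean(deck2$face == "ace")
## 0.07692308
```

Four aces out of 52 cards is about 7.7 percent of the deck.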
You can use this method to spot and then change the aces in your deck—even if you’ve shuffled your cards. First, build a logical test that identifies the aces in your shuffled deck:
```
deck3$face == "ace"
```
Then use the test to single out the ace point values. Since the test returns a logical vector, you can use it as an index:
```
deck3$value[deck3$face == "ace"]
## 1 1 1 1
```
Finally, use assignment to change the ace values in `deck3`:
```
deck3$value[deck3$face == "ace"] <- 14
head(deck3)
## face suit value
## queen clubs 12
## king clubs 13
## ace spades 14 # an ace
## nine clubs 9
## seven spades 7
## queen diamonds 12
```
To summarize, you can use a logical test to select values within an object.
Logical subsetting is a powerful technique because it lets you quickly identify, extract, and modify individual values in your data set. When you work with logical subsetting, you do not need to know *where* in your data set a value exists. You only need to know how to describe the value with a logical test.
Logical subsetting is one of the things R does best. In fact, logical subsetting is a key component of vectorized programming, a coding style that lets you write fast and efficient R code, which we will study in [Speed](speed.html#speed).
Let’s put logical subsetting to use with a new game: hearts. In hearts, every card has a value of zero:
```
deck4 <- deck
deck4$value <- 0
head(deck4, 13)
## face suit value
## king spades 0
## queen spades 0
## jack spades 0
## ten spades 0
## nine spades 0
## eight spades 0
## seven spades 0
## six spades 0
## five spades 0
## four spades 0
## three spades 0
## two spades 0
## ace spades 0
```
except cards in the suit of hearts and the queen of spades. Each card in the suit of hearts has a value of 1\. Can you find these cards and replace their values? Give it a try.
**Exercise 7\.2 (Score the Deck for Hearts)** Assign a value of `1` to every card in `deck4` that has a suit of hearts.
*Solution.* To do this, first write a test that identifies cards in the `hearts` suit:
```
deck4$suit == "hearts"
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE
```
Then use your test to select the values of these cards:
```
deck4$value[deck4$suit == "hearts"]
## 0 0 0 0 0 0 0 0 0 0 0 0 0
```
Finally, assign a new number to these values:
```
deck4$value[deck4$suit == "hearts"] <- 1
```
Now all of your `hearts` cards have been updated:
```
deck4$value[deck4$suit == "hearts"]
## 1 1 1 1 1 1 1 1 1 1 1 1 1
```
In hearts, the queen of spades has the most unusual value of all: she’s worth 13 points. It should be simple to change her value, but she’s surprisingly hard to find. You could find all of the *queens*:
```
deck4[deck4$face == "queen", ]
## face suit value
## queen spades 0
## queen clubs 0
## queen diamonds 0
## queen hearts 1
```
But that’s three cards too many. On the other hand, you could find all of the cards in *spades*:
```
deck4[deck4$suit == "spades", ]
## face suit value
## king spades 0
## queen spades 0
## jack spades 0
## ten spades 0
## nine spades 0
## eight spades 0
## seven spades 0
## six spades 0
## five spades 0
## four spades 0
## three spades 0
## two spades 0
## ace spades 0
```
But that’s 12 cards too many. What you really want to find is all of the cards that have both a face value equal to queen and a suit value equal to spades. You can do that with a *Boolean operator*. Boolean operators combine multiple logical tests together into a single test.
#### 7\.0\.2\.2 Boolean Operators
Boolean operators are things like *and* (`&`) and *or* (`|`). They collapse the results of multiple logical tests into a single `TRUE` or `FALSE`. R has six Boolean operators, shown in Table [7\.2](modify.html#tab:boole).
Table 7\.2: Boolean operators
| Operator | Syntax | Tests |
| --- | --- | --- |
| `&` | `cond1 & cond2` | Are both `cond1` and `cond2` true? |
| `|` | `cond1 | cond2` | Is one or more of `cond1` and `cond2` true? |
| `xor` | `xor(cond1, cond2)` | Is exactly one of `cond1` and `cond2` true? |
| `!` | `!cond1` | Is `cond1` false? (e.g., `!` flips the results of a logical test) |
| `any` | `any(cond1, cond2, cond3, ...)` | Are any of the conditions true? |
| `all` | `all(cond1, cond2, cond3, ...)` | Are all of the conditions true? |
To use a Boolean operator, place it between two *complete* logical tests. R will execute each logical test and then use the Boolean operator to combine the results into a single `TRUE` or `FALSE`, Figure [7\.1](modify.html#fig:boolean).
**The most common mistake with Boolean operators**
It is easy to forget to put a complete test on either side of a Boolean operator. In English, it is efficient to say “Is *x* greater than two and less than nine?” But in R, you need to write the equivalent of “Is *x* greater than two and *is x* less than nine?” This is shown in Figure [7\.1](modify.html#fig:boolean).
Figure 7\.1: R will evaluate the expressions on each side of a Boolean operator separately, and then combine the results into a single TRUE or FALSE. If you do not supply a complete test to each side of the operator, R will return an error.
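In code, the contrast looks like this (with a throwaway `x`). The first test gives `&` a complete logical test on each side; the commented line shows the incomplete shorthand that R will refuse to parse:

```
x <- 5
x > 2 & x < 9
## TRUE
# x > 2 & < 9    # syntax error: "<" is missing a complete test on its left
```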
When used with vectors, Boolean operators will follow the same element\-wise execution as arithmetic and logical operators:
```
a <- c(1, 2, 3)
b <- c(1, 2, 3)
c <- c(1, 2, 4)
a == b
## TRUE TRUE TRUE
b == c
## TRUE TRUE FALSE
a == b & b == c
## TRUE TRUE FALSE
```
Could you use a Boolean operator to locate the queen of spades in your deck? Of course you can. You want to test each card to see if it is both a queen *and* a spade. You can write this test in R with:
```
deck4$face == "queen" & deck4$suit == "spades"
## FALSE TRUE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE
```
I’ll save the results of this test to its own object. That will make the results easier to work with:
```
queenOfSpades <- deck4$face == "queen" & deck4$suit == "spades"
```
Next, you can use the test as an index to select the value of the queen of spades. Make sure the test actually selects the correct value:
```
deck4[queenOfSpades, ]
## face suit value
## queen spades 0
deck4$value[queenOfSpades]
## 0
```
Now that you’ve found the queen of spades, you can update her value:
```
deck4$value[queenOfSpades] <- 13
deck4[queenOfSpades, ]
## face suit value
## queen spades 13
```
Your deck is now ready to play hearts.
**Exercise 7\.3 (Practice with Tests)** If you think you have the hang of logical tests, try converting these sentences into tests written with R code. To help you out, I’ve defined some R objects after the sentences that you can use to test your answers:
* Is w positive?
* Is x greater than 10 and less than 20?
* Is object y the word February?
* Is *every* value in z a day of the week?
```
w <- c(-1, 0, 1)
x <- c(5, 15)
y <- "February"
z <- c("Monday", "Tuesday", "Friday")
```
*Solution.* Here are some model answers. If you got stuck, be sure to re\-read how R evaluates logical tests that use Boolean values:
```
w > 0
10 < x & x < 20
y == "February"
all(z %in% c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
"Saturday", "Sunday"))
```
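For reference, here is what each test returns when run against the objects defined above:

```
w > 0
## FALSE FALSE TRUE
10 < x & x < 20
## FALSE TRUE
y == "February"
## TRUE
all(z %in% c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
  "Saturday", "Sunday"))
## TRUE
```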
Let’s consider one last game, blackjack. In blackjack, each number card has a value equal to its face value. Each face card (king, queen, or jack) has a value of 10\. Finally, each ace has a value of 11 or 1, depending on the final results of the game.
Let’s begin with a fresh copy of `deck`—that way the number cards (`two` through `ten`) will start off with the correct value:
```
deck5 <- deck
head(deck5, 13)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
```
You can change the value of the face cards in one fell swoop with `%in%`:
```
facecard <- deck5$face %in% c("king", "queen", "jack")
deck5[facecard, ]
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## king clubs 13
## queen clubs 12
## jack clubs 11
## king diamonds 13
## queen diamonds 12
## jack diamonds 11
## king hearts 13
## queen hearts 12
## jack hearts 11
deck5$value[facecard] <- 10
head(deck5, 13)
## face suit value
## king spades 10
## queen spades 10
## jack spades 10
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
```
Now you just need to fix the ace values—or do you? It is hard to decide what value to give the aces because their exact value will change from hand to hand. At the end of each hand, an ace will equal 11 if the sum of the player’s cards does not exceed 21\. Otherwise, the ace will equal 1\. The actual value of the ace will depend on the other cards in the player’s hand. This is a case of missing information. At the moment, you do not have enough information to assign a correct point value to the ace cards.
### 7\.0\.3 Missing Information
Missing information problems happen frequently in data science. Usually, they are more straightforward: you don’t know a value because the measurement was lost, corrupted, or never taken to begin with. R has a way to help you manage these missing values.
The `NA` character is a special symbol in R. It stands for “not available” and can be used as a placeholder for missing information. R will treat NA exactly as you should want missing information treated. For example, what result would you expect if you add 1 to a piece of missing information?
```
1 + NA
## NA
```
R will return a second piece of missing information. It would not be correct to say that `1 + NA = 1` because there is a good chance that the missing quantity is not zero. You do not have enough information to determine the result.
What if you tested whether a piece of missing information is equal to 1?
```
NA == 1
## NA
```
Again, your answer would be something like “I do not know if this is equal to one,” that is, `NA`. Generally, `NA`s will propagate whenever you use them in an R operation or function. This can save you from making errors based on missing data.
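One nuance is worth noting: a Boolean operator can still return a definite answer when the value it *does* know settles the question on its own. `FALSE &` anything must be `FALSE`, and `TRUE |` anything must be `TRUE`, no matter what the missing value turns out to be:

```
NA & FALSE
## FALSE
NA | TRUE
## TRUE
NA & TRUE
## NA
```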
#### 7\.0\.3\.1 na.rm
Missing values can help you work around holes in your data sets, but they can also create some frustrating problems. Suppose, for example, that you’ve collected 1,000 observations and wish to take their average with R’s `mean` function. If even one of the values is `NA`, your result will be `NA`:
```
c(NA, 1:50)
## NA 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
## 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
## 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
mean(c(NA, 1:50))
## NA
```
Understandably, you may prefer a different behavior. Most R functions come with the optional argument, `na.rm`, which stands for `NA` remove. R will ignore `NA`s when it evaluates a function if you add the argument `na.rm = TRUE`:
```
mean(c(NA, 1:50), na.rm = TRUE)
## 25.5
```
#### 7\.0\.3\.2 is.na
On occasion, you may want to identify the `NA`s in your data set with a logical test, but that too creates a problem. How would you go about it? If something is a missing value, any logical test that uses it will return a missing value, even this test:
```
NA == NA
## NA
```
Which means that tests like this won’t help you find missing values:
```
c(1, 2, 3, NA) == NA
## NA NA NA NA
```
But don’t worry too hard; R supplies a special function that can test whether a value is an `NA`. The function is sensibly named `is.na`:
```
is.na(NA)
## TRUE
vec <- c(1, 2, 3, NA)
is.na(vec)
## FALSE FALSE FALSE TRUE
```
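`is.na` also pairs nicely with the counting trick you used earlier: since `sum` treats each `TRUE` as a one, it will count the missing values in an object for you:

```
sum(is.na(vec))
## 1
```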
Let’s set all of your ace values to `NA`. This will accomplish two things. First, it will remind you that you do not know the final value of each ace. Second, it will prevent you from accidentally scoring a hand that has an ace before you determine the ace’s final value.
You can set your ace values to `NA` in the same way that you would set them to a number:
```
deck5$value[deck5$face == "ace"] <- NA
head(deck5, 13)
## face suit value
## king spades 10
## queen spades 10
## jack spades 10
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades NA
```
Congratulations. Your deck is now ready for a game of blackjack.
### 7\.0\.4 Summary
You can modify values in place inside an R object when you combine R’s notation syntax with the assignment operator, `<-`. This lets you update your data and clean your data sets.
When you work with large data sets, modifying and retrieving values creates a logistical problem of its own. How can you search through the data to find the values that you want to modify or retrieve? As an R user, you can do this with logical subsetting. Create a logical test with logical and Boolean operators and then use the test as an index in R’s bracket notation. R will return the values that you are looking for, even if you do not know where they are.
Retrieving individual values will not be your only concern as an R programmer. You’ll also need to retrieve entire data sets themselves; for example, you may call one in a function. [Environments](environments.html#environments-1) will teach you how R looks up and saves data sets and other R objects in its environment system. You’ll then use this knowledge to fix the `deal` and `shuffle` functions.
### 7\.0\.1 Changing Values in Place
You can use R’s notation system to modify values within an R object. First, describe the value (or values) you wish to modify. Then use the assignment operator `<-` to overwrite those values. R will update the selected values *in the original object*. Let’s put this into action with a real example:
```
vec <- c(0, 0, 0, 0, 0, 0)
vec
## 0 0 0 0 0 0
```
Here’s how you can select the first value of `vec`:
```
vec[1]
## 0
```
And here is how you can modify it:
```
vec[1] <- 1000
vec
## 1000 0 0 0 0 0
```
You can replace multiple values at once as long as the number of new values equals the number of selected values:
```
vec[c(1, 3, 5)] <- c(1, 1, 1)
vec
## 1 0 1 0 1 0
vec[4:6] <- vec[4:6] + 1
vec
## 1 0 1 1 2 1
```
You can also create values that do not yet exist in your object. R will expand the object to accommodate the new values:
```
vec[7] <- 0
vec
## 1 0 1 1 2 1 0
```
This provides a great way to add new variables to your data set:
```
deck2$new <- 1:52
head(deck2)
## face suit value new
## king spades 13 1
## queen spades 12 2
## jack spades 11 3
## ten spades 10 4
## nine spades 9 5
## eight spades 8 6
```
You can also remove columns from a data frame (and elements from a list) by assigning them the symbol `NULL`:
```
deck2$new <- NULL
head(deck2)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
In the game of war, aces are king (figuratively speaking). They receive the highest value of all the cards, which would be something like 14\. Every other card gets the value that it already has in `deck`. To play war, you just need to change the values of your aces from 1 to 14\.
As long as you haven’t shuffled your deck, you know just where the aces are. They appear every 13 cards. Hence, you can describe them with R’s notation system:
```
deck2[c(13, 26, 39, 52), ]
## face suit value
## ace spades 1
## ace clubs 1
## ace diamonds 1
## ace hearts 1
```
You can single out just the *values* of the aces by subsetting the columns dimension of `deck2`. Or, even better, you can subset the column vector `deck2$value`:
```
deck2[c(13, 26, 39, 52), 3]
## 1 1 1 1
deck2$value[c(13, 26, 39, 52)]
## 1 1 1 1
```
Now all you have to do is assign a new set of values to these old values. The set of new values will have to be the same size as the set of values that you are replacing. So you could save `c(14, 14, 14, 14)` into the ace values, or you could just save *`14`* and rely on R’s recycling rules to expand `14` to `c(14, 14, 14, 14)`:
```
deck2$value[c(13, 26, 39, 52)] <- c(14, 14, 14, 14)
# or
deck2$value[c(13, 26, 39, 52)] <- 14
```
Notice that the values change *in place*. You don’t end up with a modified *copy* of `deck2`; the new values will appear inside `deck2`:
```
head(deck2, 13)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 14
```
The same technique will work whether you store your data in a vector, matrix, array, list, or data frame. Just describe the values that you want to change with R’s notation system, then assign over those values with R’s assignment operator.
Things worked very easily in this example because you knew exactly where each ace was. The cards were sorted in an orderly manner and an ace appeared every 13 rows.
But what if the deck had been shuffled? You could look through all the cards and note the locations of the aces, but that would be tedious. If your data frame were larger, it might be impossible:
```
deck3 <- shuffle(deck)
```
Where are the aces now?
```
head(deck3)
## face suit value
## queen clubs 12
## king clubs 13
## ace spades 1 # an ace
## nine clubs 9
## seven spades 7
## queen diamonds 12
```
Why not ask R to find the aces for you? You can do this with logical subsetting. Logical subsetting provides a way to do targeted extraction and modification with R objects, a sort of search\-and\-destroy mission inside your own data sets.
### 7\.0\.2 Logical Subsetting
Do you remember R’s logical index system, [logicals](r-objects.html#logicals)? To recap, you can select values with a vector of `TRUE`s and `FALSE`s. The vector must be the same length as the dimension that you wish to subset. R will return every element that matches a TRUE:
```
vec
## 1 0 1 1 2 1 0
vec[c(FALSE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE)]
## 2
```
At first glance, this system might seem impractical. Who wants to type out long vectors of TRUEs and FALSEs? No one. But you don’t have to. You can let a logical test create a vector of TRUEs and FALSEs for you.
#### 7\.0\.2\.1 Logical Tests
A logical test is a comparison like “is one less than two?”, `1 < 2`, or “is three greater than four?”, `3 > 4`. R provides seven logical operators that you can use to make comparisons, shown in Table [7\.1](modify.html#tab:logop).
Table 7\.1: R’s Logical Operators
| Operator | Syntax | Tests |
| --- | --- | --- |
| `>` | `a > b` | Is a greater than b? |
| `>=` | `a >= b` | Is a greater than or equal to b? |
| `<` | `a < b` | Is a less than b? |
| `<=` | `a <= b` | Is a less than or equal to b? |
| `==` | `a == b` | Is a equal to b? |
| `!=` | `a != b` | Is a not equal to b? |
| `%in%` | `a %in% c(a, b, c)` | Is a in the group c(a, b, c)? |
Each operator returns a `TRUE` or a `FALSE`. If you use an operator to compare vectors, R will do element\-wise comparisons—just like it does with the arithmetic operators:
```
1 > 2
## FALSE
1 > c(0, 1, 2)
## TRUE FALSE FALSE
c(1, 2, 3) == c(3, 2, 1)
## FALSE TRUE FALSE
```
`%in%` is the only operator that does not do normal element\-wise execution. `%in%` tests whether the value(s) on the left side are in the vector on the right side. If you provide a vector on the left side, `%in%` will *not* pair up the values on the left with the values on the right and then do element\-wise tests. Instead, `%in%` will independently test whether each value on the left is *somewhere* in the vector on the right:
```
1 %in% c(3, 4, 5)
## FALSE
c(1, 2) %in% c(3, 4, 5)
## FALSE FALSE
c(1, 2, 3) %in% c(3, 4, 5)
## FALSE FALSE TRUE
c(1, 2, 3, 4) %in% c(3, 4, 5)
## FALSE FALSE TRUE TRUE
```
Notice that you test for equality with a double equals sign, `==`, and not a single equals sign, `=`, which is another way to write `<-`. It is easy to forget and use `a = b` to test if `a` equals `b`. Unfortunately, you’ll be in for a nasty surprise. R won’t return a `TRUE` or `FALSE`, because it won’t have to: `a` *will* equal `b`, because you just ran the equivalent of `a <- b`.
**`=` is an assignment operator**
Be careful not to confuse `=` with `==`. `=` does the same thing as `<-`: it assigns a value to an object.
You can compare any two R objects with a logical operator; however, logical operators make the most sense if you compare two objects of the same data type. If you compare objects of different data types, R will use its coercion rules to coerce the objects to the same type before it makes the comparison.
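For example, numbers and strings can be compared once R coerces them to a common type; these throwaway comparisons show the effect:
```
1 == "1"      # R coerces the number 1 to the string "1"
## TRUE
TRUE == 1     # R coerces TRUE to the number 1
## TRUE
```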
**Exercise 7\.1 (How many Aces?)** Extract the `face` column of `deck2` and test whether each value is equal to `ace`. As a challenge, use R to quickly count how many cards are equal to `ace`.
*Solution.* You can extract the `face` column with R’s `$` notation:
```
deck2$face
## "king" "queen" "jack" "ten" "nine"
## "eight" "seven" "six" "five" "four"
## "three" "two" "ace" "king" "queen"
## "jack" "ten" "nine" "eight" "seven"
## "six" "five" "four" "three" "two"
## "ace" "king" "queen" "jack" "ten"
## "nine" "eight" "seven" "six" "five"
## "four" "three" "two" "ace" "king"
## "queen" "jack" "ten" "nine" "eight"
## "seven" "six" "five" "four" "three"
## "two" "ace"
```
Next, you can use the `==` operator to test whether each value is equal to `ace`. In the following code, R will use its recycling rules to individually compare every value of `deck2$face` to `"ace"`. Notice that the quotation marks are important. If you leave them out, R will try to find an object named `ace` to compare against `deck2$face`:
```
deck2$face == "ace"
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE TRUE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE TRUE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE TRUE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE TRUE
```
You can use `sum` to quickly count the number of `TRUE`s in the previous vector. Remember that R will coerce logicals to numerics when you do math with them. R will turn `TRUE`s into ones and `FALSE`s into zeroes. As a result, `sum` will count the number of `TRUE`s:
```
sum(deck2$face == "ace")
## 4
```
You can use this method to spot and then change the aces in your deck—even if you’ve shuffled your cards. First, build a logical test that identifies the aces in your shuffled deck:
```
deck3$face == "ace"
```
Then use the test to single out the ace point values. Since the test returns a logical vector, you can use it as an index:
```
deck3$value[deck3$face == "ace"]
## 1 1 1 1
```
Finally, use assignment to change the ace values in `deck3`:
```
deck3$value[deck3$face == "ace"] <- 14
head(deck3)
## face suit value
## queen clubs 12
## king clubs 13
## ace spades 14 # an ace
## nine clubs 9
## seven spades 7
## queen diamonds 12
```
To summarize, you can use a logical test to select values within an object.
Logical subsetting is a powerful technique because it lets you quickly identify, extract, and modify individual values in your data set. When you work with logical subsetting, you do not need to know *where* in your data set a value exists. You only need to know how to describe the value with a logical test.
Logical subsetting is one of the things R does best. In fact, logical subsetting is a key component of vectorized programming, a coding style that lets you write fast and efficient R code, which we will study in [Speed](speed.html#speed).
Let’s put logical subsetting to use with a new game: hearts. In hearts, every card has a value of zero:
```
deck4 <- deck
deck4$value <- 0
head(deck4, 13)
## face suit value
## king spades 0
## queen spades 0
## jack spades 0
## ten spades 0
## nine spades 0
## eight spades 0
## seven spades 0
## six spades 0
## five spades 0
## four spades 0
## three spades 0
## two spades 0
## ace spades 0
```
except cards in the suit of hearts and the queen of spades. Each card in the suit of hearts has a value of 1\. Can you find these cards and replace their values? Give it a try.
**Exercise 7\.2 (Score the Deck for Hearts)** Assign a value of `1` to every card in `deck4` that has a suit of hearts.
*Solution.* To do this, first write a test that identifies cards in the `hearts` suit:
```
deck4$suit == "hearts"
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE
```
Then use your test to select the values of these cards:
```
deck4$value[deck4$suit == "hearts"]
## 0 0 0 0 0 0 0 0 0 0 0 0 0
```
Finally, assign a new number to these values:
```
deck4$value[deck4$suit == "hearts"] <- 1
```
Now all of your `hearts` cards have been updated:
```
deck4$value[deck4$suit == "hearts"]
## 1 1 1 1 1 1 1 1 1 1 1 1 1
```
In hearts, the queen of spades has the most unusual value of all: she’s worth 13 points. It should be simple to change her value, but she’s surprisingly hard to find. You could find all of the *queens*:
```
deck4[deck4$face == "queen", ]
## face suit value
## queen spades 0
## queen clubs 0
## queen diamonds 0
## queen hearts 1
```
But that’s three cards too many. On the other hand, you could find all of the cards in *spades*:
```
deck4[deck4$suit == "spades", ]
## face suit value
## king spades 0
## queen spades 0
## jack spades 0
## ten spades 0
## nine spades 0
## eight spades 0
## seven spades 0
## six spades 0
## five spades 0
## four spades 0
## three spades 0
## two spades 0
## ace spades 0
```
But that’s 12 cards too many. What you really want to find is all of the cards that have both a face value equal to queen and a suit value equal to spades. You can do that with a *Boolean operator*. Boolean operators combine multiple logical tests together into a single test.
#### 7\.0\.2\.2 Boolean Operators
Boolean operators are things like *and* (`&`) and *or* (`|`). They collapse the results of multiple logical tests into a single `TRUE` or `FALSE`. R has six Boolean operators, shown in Table [7\.2](modify.html#tab:boole).
Table 7\.2: Boolean operators
| Operator | Syntax | Tests |
| --- | --- | --- |
| `&` | `cond1 & cond2` | Are both `cond1` and `cond2` true? |
| `|` | `cond1 | cond2` | Is one or more of `cond1` and `cond2` true? |
| `xor` | `xor(cond1, cond2)` | Is exactly one of `cond1` and `cond2` true? |
| `!` | `!cond1` | Is `cond1` false? (e.g., `!` flips the results of a logical test) |
| `any` | `any(cond1, cond2, cond3, ...)` | Are any of the conditions true? |
| `all` | `all(cond1, cond2, cond3, ...)` | Are all of the conditions true? |
To use a Boolean operator, place it between two *complete* logical tests. R will execute each logical test and then use the Boolean operator to combine the results into a single `TRUE` or `FALSE`, Figure [7\.1](modify.html#fig:boolean).
**The most common mistake with Boolean operators**
It is easy to forget to put a complete test on either side of a Boolean operator. In English, it is efficient to say “Is *x* greater than two and less than nine?” But in R, you need to write the equivalent of “Is *x* greater than two and *is x* less than nine?” This is shown in Figure [7\.1](modify.html#fig:boolean).
Figure 7\.1: R will evaluate the expressions on each side of a Boolean operator separately, and then combine the results into a single TRUE or FALSE. If you do not supply a complete test to each side of the operator, R will return an error.
When used with vectors, Boolean operators will follow the same element\-wise execution as arithmetic and logical operators:
```
a <- c(1, 2, 3)
b <- c(1, 2, 3)
c <- c(1, 2, 4)
a == b
## TRUE TRUE TRUE
b == c
## TRUE TRUE FALSE
a == b & b == c
## TRUE TRUE FALSE
```
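The remaining operators in Table 7\.2 follow the same spirit. Here is a quick sketch using the `a`, `b`, and `c` vectors defined above:
```
xor(TRUE, FALSE)       # exactly one of the two is true
## TRUE
any(a == b, b == c)    # is at least one of the combined results true?
## TRUE
all(a == b, b == c)    # are all of the combined results true?
## FALSE
```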
Could you use a Boolean operator to locate the queen of spades in your deck? Of course you can. You want to test each card to see if it is both a queen *and* a spade. You can write this test in R with:
```
deck4$face == "queen" & deck4$suit == "spades"
## FALSE TRUE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE
```
I’ll save the results of this test to its own object. That will make the results easier to work with:
```
queenOfSpades <- deck4$face == "queen" & deck4$suit == "spades"
```
Next, you can use the test as an index to select the value of the queen of spades. Make sure the test actually selects the correct value:
```
deck4[queenOfSpades, ]
## face suit value
## queen spades 0
deck4$value[queenOfSpades]
## 0
```
Now that you’ve found the queen of spades, you can update her value:
```
deck4$value[queenOfSpades] <- 13
deck4[queenOfSpades, ]
## face suit value
## queen spades 13
```
Your deck is now ready to play hearts.
**Exercise 7\.3 (Practice with Tests)** If you think you have the hang of logical tests, try converting these sentences into tests written with R code. To help you out, I’ve defined some R objects after the sentences that you can use to test your answers:
* Is w positive?
* Is x greater than 10 and less than 20?
* Is object y the word February?
* Is *every* value in z a day of the week?
```
w <- c(-1, 0, 1)
x <- c(5, 15)
y <- "February"
z <- c("Monday", "Tuesday", "Friday")
```
*Solution.* Here are some model answers. If you got stuck, be sure to re\-read how R evaluates logical tests that use Boolean values:
```
w > 0
10 < x & x < 20
y == "February"
all(z %in% c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
"Saturday", "Sunday"))
```
Let’s consider one last game, blackjack. In blackjack, each number card has a value equal to its face value. Each face card (king, queen, or jack) has a value of 10\. Finally, each ace has a value of 11 or 1, depending on the final results of the game.
Let’s begin with a fresh copy of `deck`—that way the number cards (`two` through `ten`) will start off with the correct value:
```
deck5 <- deck
head(deck5, 13)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
```
You can change the value of the face cards in one fell swoop with `%in%`:
```
facecard <- deck5$face %in% c("king", "queen", "jack")
deck5[facecard, ]
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## king clubs 13
## queen clubs 12
## jack clubs 11
## king diamonds 13
## queen diamonds 12
## jack diamonds 11
## king hearts 13
## queen hearts 12
## jack hearts 11
deck5$value[facecard] <- 10
head(deck5, 13)
## face suit value
## king spades 10
## queen spades 10
## jack spades 10
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
```
Now you just need to fix the ace values—or do you? It is hard to decide what value to give the aces because their exact value will change from hand to hand. At the end of each hand, an ace will equal 11 if the sum of the player’s cards does not exceed 21\. Otherwise, the ace will equal 1\. The actual value of the ace will depend on the other cards in the player’s hand. This is a case of missing information. At the moment, you do not have enough information to assign a correct point value to the ace cards.
### 7\.0\.3 Missing Information
Missing information problems happen frequently in data science. Usually, they are more straightforward: you don’t know a value because the measurement was lost, corrupted, or never taken to begin with. R has a way to help you manage these missing values.
`NA` is a special symbol in R. It stands for “not available” and can be used as a placeholder for missing information. R will treat `NA` exactly as you should want missing information treated. For example, what result would you expect if you add 1 to a piece of missing information?
```
1 + NA
## NA
```
R will return a second piece of missing information. It would not be correct to say that `1 + NA = 1` because there is a good chance that the missing quantity is not zero. You do not have enough information to determine the result.
What if you tested whether a piece of missing information is equal to 1?
```
NA == 1
## NA
```
Again, your answer would be something like “I do not know if this is equal to one,” that is, `NA`. Generally, `NA`s will propagate whenever you use them in an R operation or function. This can save you from making errors based on missing data.
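For example, `NA` propagates through other comparisons and functions in the same way:
```
NA > 1
## NA
sum(c(10, NA))
## NA
```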
#### 7\.0\.3\.1 na.rm
Missing values can help you work around holes in your data sets, but they can also create some frustrating problems. Suppose, for example, that you’ve collected 1,000 observations and wish to take their average with R’s `mean` function. If even one of the values is `NA`, your result will be `NA`:
```
c(NA, 1:50)
## NA 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
## 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
## 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
mean(c(NA, 1:50))
## NA
```
Understandably, you may prefer a different behavior. Most R functions come with the optional argument, `na.rm`, which stands for `NA` remove. R will ignore `NA`s when it evaluates a function if you add the argument `na.rm = TRUE`:
```
mean(c(NA, 1:50), na.rm = TRUE)
## 25.5
```
#### 7\.0\.3\.2 is.na
On occasion, you may want to identify the `NA`s in your data set with a logical test, but that too creates a problem. How would you go about it? If something is a missing value, any logical test that uses it will return a missing value, even this test:
```
NA == NA
## NA
```
Which means that tests like this won’t help you find missing values:
```
c(1, 2, 3, NA) == NA
## NA NA NA NA
```
But don’t worry too hard; R supplies a special function that can test whether a value is an `NA`. The function is sensibly named `is.na`:
```
is.na(NA)
## TRUE
vec <- c(1, 2, 3, NA)
is.na(vec)
## FALSE FALSE FALSE TRUE
```
Let’s set all of your ace values to `NA`. This will accomplish two things. First, it will remind you that you do not know the final value of each ace. Second, it will prevent you from accidentally scoring a hand that has an ace before you determine the ace’s final value.
You can set your ace values to `NA` in the same way that you would set them to a number:
```
deck5$value[deck5$face == "ace"] <- NA
head(deck5, 13)
## face suit value
## king spades 10
## queen spades 10
## jack spades 10
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades NA
```
Congratulations. Your deck is now ready for a game of blackjack.
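If you would like to see where this is heading, here is a minimal sketch of how you might score a hand later, once you know the other cards; `score_hand` is a hypothetical helper for illustration, not a function from this book:
```
score_hand <- function(hand) {
  aces <- sum(is.na(hand$value))            # the NAs mark the aces
  total <- sum(hand$value, na.rm = TRUE) + 11 * aces
  while (total > 21 && aces > 0) {          # demote an ace from 11 to 1
    total <- total - 10
    aces <- aces - 1
  }
  total
}
score_hand(deck5[c(1, 13), ])   # king of spades + ace of spades
## 21
```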
### 7\.0\.4 Summary
You can modify values in place inside an R object when you combine R’s notation syntax with the assignment operator, `<-`. This lets you update your data and clean your data sets.
When you work with large data sets, modifying and retrieving values creates a logistical problem of its own. How can you search through the data to find the values that you want to modify or retrieve? As an R user, you can do this with logical subsetting. Create a logical test with logical and Boolean operators and then use the test as an index in R’s bracket notation. R will return the values that you are looking for, even if you do not know where they are.
Retrieving individual values will not be your only concern as an R programmer. You’ll also need to retrieve entire data sets themselves; for example, you may call one in a function. [Environments](environments.html#environments-1) will teach you how R looks up and saves data sets and other R objects in its environment system. You’ll then use this knowledge to fix the `deal` and `shuffle` functions.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/modify.html |
7 Modifying Values
==================
Are you ready to play some games with your virtual deck? Not so fast! The point system in your deck of cards doesn’t align well with many card games. For example, in war and poker, aces are usually scored higher than kings. They’d have a point value of 14, not 1\.
In this task, you will change the point system of your deck three times to match three different games: war, hearts, and blackjack. Each of these games will teach you something different about modifying the values inside of a data set. Start by making a copy of `deck` that you can manipulate. This will ensure that you always have a pristine copy of `deck` to fall back on (should things go awry):
```
deck2 <- deck
```
### 7\.0\.1 Changing Values in Place
You can use R’s notation system to modify values within an R object. First, describe the value (or values) you wish to modify. Then use the assignment operator `<-` to overwrite those values. R will update the selected values *in the original object*. Let’s put this into action with a real example:
```
vec <- c(0, 0, 0, 0, 0, 0)
vec
## 0 0 0 0 0 0
```
Here’s how you can select the first value of `vec`:
```
vec[1]
## 0
```
And here is how you can modify it:
```
vec[1] <- 1000
vec
## 1000 0 0 0 0 0
```
You can replace multiple values at once as long as the number of new values equals the number of selected values:
```
vec[c(1, 3, 5)] <- c(1, 1, 1)
vec
## 1 0 1 0 1 0
vec[4:6] <- vec[4:6] + 1
vec
## 1 0 1 1 2 1
```
You can also create values that do not yet exist in your object. R will expand the object to accommodate the new values:
```
vec[7] <- 0
vec
## 1 0 1 1 2 1 0
```
This provides a great way to add new variables to your data set:
```
deck2$new <- 1:52
head(deck2)
## face suit value new
## king spades 13 1
## queen spades 12 2
## jack spades 11 3
## ten spades 10 4
## nine spades 9 5
## eight spades 8 6
```
You can also remove columns from a data frame (and elements from a list) by assigning them the symbol `NULL`:
```
deck2$new <- NULL
head(deck2)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
```
In the game of war, aces are king (figuratively speaking). They receive the highest value of all the cards, which would be something like 14\. Every other card gets the value that it already has in `deck`. To play war, you just need to change the values of your aces from 1 to 14\.
As long as you haven’t shuffled your deck, you know just where the aces are. They appear every 13 cards. Hence, you can describe them with R’s notation system:
```
deck2[c(13, 26, 39, 52), ]
## face suit value
## ace spades 1
## ace clubs 1
## ace diamonds 1
## ace hearts 1
```
You can single out just the *values* of the aces by subsetting the columns dimension of `deck2`. Or, even better, you can subset the column vector `deck2$value`:
```
deck2[c(13, 26, 39, 52), 3]
## 1 1 1 1
deck2$value[c(13, 26, 39, 52)]
## 1 1 1 1
```
Now all you have to do is assign a new set of values to these old values. The set of new values will have to be the same size as the set of values that you are replacing. So you could save `c(14, 14, 14, 14)` into the ace values, or you could just save *`14`* and rely on R’s recycling rules to expand `14` to `c(14, 14, 14, 14)`:
```
deck2$value[c(13, 26, 39, 52)] <- c(14, 14, 14, 14)
# or
deck2$value[c(13, 26, 39, 52)] <- 14
```
Notice that the values change *in place*. You don’t end up with a modified *copy* of `deck2`; the new values will appear inside `deck2`:
```
head(deck2, 13)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 14
```
The same technique will work whether you store your data in a vector, matrix, array, list, or data frame. Just describe the values that you want to change with R’s notation system, then assign over those values with R’s assignment operator.
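For example, the same describe\-then\-assign pattern works inside a list; the `lst` object here is made up for illustration:
```
lst <- list(scores = c(1, 2, 3), label = "demo")
lst$scores[2] <- 20          # modify one element of a list component
lst[["label"]] <- "updated"  # or replace a component outright
lst$scores
## 1 20 3
```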
Things worked very easily in this example because you knew exactly where each ace was. The cards were sorted in an orderly manner and an ace appeared every 13 rows.
But what if the deck had been shuffled? You could look through all the cards and note the locations of the aces, but that would be tedious. If your data frame were larger, it might be impossible:
```
deck3 <- shuffle(deck)
```
Where are the aces now?
```
head(deck3)
## face suit value
## queen clubs 12
## king clubs 13
## ace spades 1 # an ace
## nine clubs 9
## seven spades 7
## queen diamonds 12
```
Why not ask R to find the aces for you? You can do this with logical subsetting. Logical subsetting provides a way to do targeted extraction and modification with R objects, a sort of search\-and\-destroy mission inside your own data sets.
### 7\.0\.2 Logical Subsetting
Do you remember R’s logical index system, [logicals](r-objects.html#logicals)? To recap, you can select values with a vector of `TRUE`s and `FALSE`s. The vector must be the same length as the dimension that you wish to subset. R will return every element that matches a TRUE:
```
vec
## 1 0 1 1 2 1 0
vec[c(FALSE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE)]
## 2
```
At first glance, this system might seem impractical. Who wants to type out long vectors of TRUEs and FALSEs? No one. But you don’t have to. You can let a logical test create a vector of TRUEs and FALSEs for you.
#### 7\.0\.2\.1 Logical Tests
A logical test is a comparison like “is one less than two?”, `1 < 2`, or “is three greater than four?”, `3 > 4`. R provides seven logical operators that you can use to make comparisons, shown in Table [7\.1](modify.html#tab:logop).
Table 7\.1: R’s Logical Operators
| Operator | Syntax | Tests |
| --- | --- | --- |
| `>` | `a > b` | Is a greater than b? |
| `>=` | `a >= b` | Is a greater than or equal to b? |
| `<` | `a < b` | Is a less than b? |
| `<=` | `a <= b` | Is a less than or equal to b? |
| `==` | `a == b` | Is a equal to b? |
| `!=` | `a != b` | Is a not equal to b? |
| `%in%` | `a %in% c(a, b, c)` | Is a in the group c(a, b, c)? |
Each operator returns a `TRUE` or a `FALSE`. If you use an operator to compare vectors, R will do element\-wise comparisons—just like it does with the arithmetic operators:
```
1 > 2
## FALSE
1 > c(0, 1, 2)
## TRUE FALSE FALSE
c(1, 2, 3) == c(3, 2, 1)
## FALSE TRUE FALSE
```
`%in%` is the only operator that does not do normal element\-wise execution. `%in%` tests whether the value(s) on the left side are in the vector on the right side. If you provide a vector on the left side, `%in%` will *not* pair up the values on the left with the values on the right and then do element\-wise tests. Instead, `%in%` will independently test whether each value on the left is *somewhere* in the vector on the right:
```
1 %in% c(3, 4, 5)
## FALSE
c(1, 2) %in% c(3, 4, 5)
## FALSE FALSE
c(1, 2, 3) %in% c(3, 4, 5)
## FALSE FALSE TRUE
c(1, 2, 3, 4) %in% c(3, 4, 5)
## FALSE FALSE TRUE TRUE
```
Notice that you test for equality with a double equals sign, `==`, and not a single equals sign, `=`, which is another way to write `<-`. It is easy to forget and use `a = b` to test if `a` equals `b`. Unfortunately, you’ll be in for a nasty surprise. R won’t return a `TRUE` or `FALSE`, because it won’t have to: `a` *will* equal `b`, because you just ran the equivalent of `a <- b`.
**`=` is an assignment operator**
Be careful not to confuse `=` with `==`. `=` does the same thing as `<-`: it assigns a value to an object.
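To see the difference concretely, compare the two with a pair of throwaway objects:
```
a <- 1
b <- 2
a == b    # a comparison: returns FALSE, nothing changes
## FALSE
a = b     # an assignment: a silently becomes 2
a
## 2
```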
You can compare any two R objects with a logical operator; however, logical operators make the most sense if you compare two objects of the same data type. If you compare objects of different data types, R will use its coercion rules to coerce the objects to the same type before it makes the comparison.
**Exercise 7\.1 (How many Aces?)** Extract the `face` column of `deck2` and test whether each value is equal to `ace`. As a challenge, use R to quickly count how many cards are equal to `ace`.
*Solution.* You can extract the `face` column with R’s `$` notation:
```
deck2$face
## "king" "queen" "jack" "ten" "nine"
## "eight" "seven" "six" "five" "four"
## "three" "two" "ace" "king" "queen"
## "jack" "ten" "nine" "eight" "seven"
## "six" "five" "four" "three" "two"
## "ace" "king" "queen" "jack" "ten"
## "nine" "eight" "seven" "six" "five"
## "four" "three" "two" "ace" "king"
## "queen" "jack" "ten" "nine" "eight"
## "seven" "six" "five" "four" "three"
## "two" "ace"
```
Next, you can use the `==` operator to test whether each value is equal to `ace`. In the following code, R will use its recycling rules to individually compare every value of `deck2$face` to `"ace"`. Notice that the quotation marks are important. If you leave them out, R will try to find an object named `ace` to compare against `deck2$face`:
```
deck2$face == "ace"
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE TRUE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE TRUE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE TRUE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE TRUE
```
You can use `sum` to quickly count the number of `TRUE`s in the previous vector. Remember that R will coerce logicals to numerics when you do math with them. R will turn `TRUE`s into ones and `FALSE`s into zeroes. As a result, `sum` will count the number of `TRUE`s:
```
sum(deck2$face == "ace")
## 4
```
You can use this method to spot and then change the aces in your deck—even if you’ve shuffled your cards. First, build a logical test that identifies the aces in your shuffled deck:
```
deck3$face == "ace"
```
Then use the test to single out the ace point values. Since the test returns a logical vector, you can use it as an index:
```
deck3$value[deck3$face == "ace"]
## 1 1 1 1
```
Finally, use assignment to change the ace values in `deck3`:
```
deck3$value[deck3$face == "ace"] <- 14
head(deck3)
## face suit value
## queen clubs 12
## king clubs 13
## ace spades 14 # an ace
## nine clubs 9
## seven spades 7
## queen diamonds 12
```
To summarize, you can use a logical test to select values within an object.
Logical subsetting is a powerful technique because it lets you quickly identify, extract, and modify individual values in your data set. When you work with logical subsetting, you do not need to know *where* in your data set a value exists. You only need to know how to describe the value with a logical test.
Logical subsetting is one of the things R does best. In fact, logical subsetting is a key component of vectorized programming, a coding style that lets you write fast and efficient R code, which we will study in [Speed](speed.html#speed).
Let’s put logical subsetting to use with a new game: hearts. In hearts, every card has a value of zero:
```
deck4 <- deck
deck4$value <- 0
head(deck4, 13)
## face suit value
## king spades 0
## queen spades 0
## jack spades 0
## ten spades 0
## nine spades 0
## eight spades 0
## seven spades 0
## six spades 0
## five spades 0
## four spades 0
## three spades 0
## two spades 0
## ace spades 0
```
except cards in the suit of hearts and the queen of spades. Each card in the suit of hearts has a value of 1\. Can you find these cards and replace their values? Give it a try.
**Exercise 7\.2 (Score the Deck for Hearts)** Assign a value of `1` to every card in `deck4` that has a suit of hearts.
*Solution.* To do this, first write a test that identifies cards in the `hearts` suit:
```
deck4$suit == "hearts"
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE
```
Then use your test to select the values of these cards:
```
deck4$value[deck4$suit == "hearts"]
## 0 0 0 0 0 0 0 0 0 0 0 0 0
```
Finally, assign a new number to these values:
```
deck4$value[deck4$suit == "hearts"] <- 1
```
Now all of your `hearts` cards have been updated:
```
deck4$value[deck4$suit == "hearts"]
## 1 1 1 1 1 1 1 1 1 1 1 1 1
```
In hearts, the queen of spades has the most unusual value of all: she’s worth 13 points. It should be simple to change her value, but she’s surprisingly hard to find. You could find all of the *queens*:
```
deck4[deck4$face == "queen", ]
## face suit value
## queen spades 0
## queen clubs 0
## queen diamonds 0
## queen hearts 1
```
But that’s three cards too many. On the other hand, you could find all of the cards in *spades*:
```
deck4[deck4$suit == "spades", ]
## face suit value
## king spades 0
## queen spades 0
## jack spades 0
## ten spades 0
## nine spades 0
## eight spades 0
## seven spades 0
## six spades 0
## five spades 0
## four spades 0
## three spades 0
## two spades 0
## ace spades 0
```
But that’s 12 cards too many. What you really want to find is all of the cards that have both a face value equal to queen and a suit value equal to spades. You can do that with a *Boolean operator*. Boolean operators combine multiple logical tests together into a single test.
#### 7\.0\.2\.2 Boolean Operators
Boolean operators are things like *and* (`&`) and *or* (`|`). They collapse the results of multiple logical tests into a single `TRUE` or `FALSE`. R has six Boolean operators, shown in Table [7\.2](modify.html#tab:boole).
Table 7\.2: Boolean operators
| Operator | Syntax | Tests |
| --- | --- | --- |
| `&` | `cond1 & cond2` | Are both `cond1` and `cond2` true? |
| `|` | `cond1 | cond2` | Is one or more of `cond1` and `cond2` true? |
| `xor` | `xor(cond1, cond2)` | Is exactly one of `cond1` and `cond2` true? |
| `!` | `!cond1` | Is `cond1` false? (e.g., `!` flips the results of a logical test) |
| `any` | `any(cond1, cond2, cond3, ...)` | Are any of the conditions true? |
| `all` | `all(cond1, cond2, cond3, ...)` | Are all of the conditions true? |
To use a Boolean operator, place it between two *complete* logical tests. R will execute each logical test and then use the Boolean operator to combine the results into a single `TRUE` or `FALSE`, Figure [7\.1](modify.html#fig:boolean).
**The most common mistake with Boolean operators**
It is easy to forget to put a complete test on either side of a Boolean operator. In English, it is efficient to say “Is *x* greater than two and less than nine?” But in R, you need to write the equivalent of “Is *x* greater than two and *is x* less than nine?” This is shown in Figure [7\.1](modify.html#fig:boolean).
Figure 7\.1: R will evaluate the expressions on each side of a Boolean operator separately, and then combine the results into a single TRUE or FALSE. If you do not supply a complete test to each side of the operator, R will return an error.
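In code, the difference looks like this; `x` is a throwaway example value:
```
x <- 7
x > 2 & x < 9    # two complete tests on either side of &
## TRUE
# x > 2 & < 9    # an incomplete second test like this is a syntax error
```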
When used with vectors, Boolean operators will follow the same element\-wise execution as arithmetic and logical operators:
```
a <- c(1, 2, 3)
b <- c(1, 2, 3)
c <- c(1, 2, 4)
a == b
## TRUE TRUE TRUE
b == c
## TRUE TRUE FALSE
a == b & b == c
## TRUE TRUE FALSE
```
Could you use a Boolean operator to locate the queen of spades in your deck? Of course you can. You want to test each card to see if it is both a queen *and* a spade. You can write this test in R with:
```
deck4$face == "queen" & deck4$suit == "spades"
## FALSE TRUE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## FALSE FALSE FALSE
```
I’ll save the results of this test to its own object. That will make the results easier to work with:
```
queenOfSpades <- deck4$face == "queen" & deck4$suit == "spades"
```
Next, you can use the test as an index to select the value of the queen of spades. Make sure the test actually selects the correct value:
```
deck4[queenOfSpades, ]
## face suit value
## queen spades 0
deck4$value[queenOfSpades]
## 0
```
Now that you’ve found the queen of spades, you can update her value:
```
deck4$value[queenOfSpades] <- 13
deck4[queenOfSpades, ]
## face suit value
## queen spades 13
```
Your deck is now ready to play hearts.
**Exercise 7\.3 (Practice with Tests)** If you think you have the hang of logical tests, try converting these sentences into tests written with R code. To help you out, I’ve defined some R objects after the sentences that you can use to test your answers:
* Is w positive?
* Is x greater than 10 and less than 20?
* Is object y the word February?
* Is *every* value in z a day of the week?
```
w <- c(-1, 0, 1)
x <- c(5, 15)
y <- "February"
z <- c("Monday", "Tuesday", "Friday")
```
*Solution.* Here are some model answers. If you got stuck, be sure to re\-read how R evaluates logical tests that use Boolean values:
```
w > 0
10 < x & x < 20
y == "February"
all(z %in% c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
"Saturday", "Sunday"))
```
Let’s consider one last game, blackjack. In blackjack, each number card has a value equal to its face value. Each face card (king, queen, or jack) has a value of 10\. Finally, each ace has a value of 11 or 1, depending on the final results of the game.
Let’s begin with a fresh copy of `deck`—that way the number cards (`two` through `ten`) will start off with the correct value:
```
deck5 <- deck
head(deck5, 13)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
```
You can change the value of the face cards in one fell swoop with `%in%`:
```
facecard <- deck5$face %in% c("king", "queen", "jack")
deck5[facecard, ]
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## king clubs 13
## queen clubs 12
## jack clubs 11
## king diamonds 13
## queen diamonds 12
## jack diamonds 11
## king hearts 13
## queen hearts 12
## jack hearts 11
deck5$value[facecard] <- 10
head(deck5, 13)
## face suit value
## king spades 10
## queen spades 10
## jack spades 10
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades 1
```
Now you just need to fix the ace values—or do you? It is hard to decide what value to give the aces because their exact value will change from hand to hand. At the end of each hand, an ace will equal 11 if the sum of the player’s cards does not exceed 21\. Otherwise, the ace will equal 1\. The actual value of the ace will depend on the other cards in the player’s hand. This is a case of missing information. At the moment, you do not have enough information to assign a correct point value to the ace cards.
### 7\.0\.3 Missing Information
Missing information problems happen frequently in data science. Usually, they are more straightforward: you don’t know a value because the measurement was lost, corrupted, or never taken to begin with. R has a way to help you manage these missing values.
`NA` is a special symbol in R. It stands for “not available” and can be used as a placeholder for missing information. R will treat `NA` exactly as you should want missing information treated. For example, what result would you expect if you add 1 to a piece of missing information?
```
1 + NA
## NA
```
R will return a second piece of missing information. It would not be correct to say that `1 + NA = 1` because there is a good chance that the missing quantity is not zero. You do not have enough information to determine the result.
What if you tested whether a piece of missing information is equal to 1?
```
NA == 1
## NA
```
Again, your answer would be something like “I do not know if this is equal to one,” that is, `NA`. Generally, `NA`s will propagate whenever you use them in an R operation or function. This can save you from making errors based on missing data.
#### 7\.0\.3\.1 na.rm
Missing values can help you work around holes in your data sets, but they can also create some frustrating problems. Suppose, for example, that you’ve collected 1,000 observations and wish to take their average with R’s `mean` function. If even one of the values is `NA`, your result will be `NA`:
```
c(NA, 1:50)
## NA 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
## 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
## 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
mean(c(NA, 1:50))
## NA
```
Understandably, you may prefer a different behavior. Most R functions come with the optional argument, `na.rm`, which stands for `NA` remove. R will ignore `NA`s when it evaluates a function if you add the argument `na.rm = TRUE`:
```
mean(c(NA, 1:50), na.rm = TRUE)
## 25.5
```
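The same argument works in many of R’s other summary functions, for example:
```
sum(c(NA, 1:50), na.rm = TRUE)
## 1275
max(c(NA, 1:50), na.rm = TRUE)
## 50
```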
#### 7\.0\.3\.2 is.na
On occasion, you may want to identify the `NA`s in your data set with a logical test, but that too creates a problem. How would you go about it? If something is a missing value, any logical test that uses it will return a missing value, even this test:
```
NA == NA
## NA
```
Which means that tests like this won’t help you find missing values:
```
c(1, 2, 3, NA) == NA
## NA NA NA NA
```
But don’t worry too hard; R supplies a special function that can test whether a value is an `NA`. The function is sensibly named `is.na`:
```
is.na(NA)
## TRUE
vec <- c(1, 2, 3, NA)
is.na(vec)
## FALSE FALSE FALSE TRUE
```
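Since `is.na` returns a logical vector, you can combine it with logical subsetting; a quick sketch using the `vec` from above:
```
sum(is.na(vec))          # count the missing values
## 1
vec[is.na(vec)] <- 0     # or replace them in place
vec
## 1 2 3 0
```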
Let’s set all of your ace values to `NA`. This will accomplish two things. First, it will remind you that you do not know the final value of each ace. Second, it will prevent you from accidentally scoring a hand that has an ace before you determine the ace’s final value.
You can set your ace values to `NA` in the same way that you would set them to a number:
```
deck5$value[deck5$face == "ace"] <- NA
head(deck5, 13)
## face suit value
## king spades 10
## queen spades 10
## jack spades 10
## ten spades 10
## nine spades 9
## eight spades 8
## seven spades 7
## six spades 6
## five spades 5
## four spades 4
## three spades 3
## two spades 2
## ace spades NA
```
Congratulations. Your deck is now ready for a game of blackjack.
### 7\.0\.4 Summary
You can modify values in place inside an R object when you combine R’s notation syntax with the assignment operator, `<-`. This lets you update your data and clean your data sets.
When you work with large data sets, modifying and retrieving values creates a logistical problem of its own. How can you search through the data to find the values that you want to modify or retrieve? As an R user, you can do this with logical subsetting. Create a logical test with logical and Boolean operators and then use the test as an index in R’s bracket notation. R will return the values that you are looking for, even if you do not know where they are.
Retrieving individual values will not be your only concern as an R programmer. You’ll also need to retrieve entire data sets themselves; for example, you may call one in a function. [Environments](environments.html#environments-1) will teach you how R looks up and saves data sets and other R objects in its environment system. You’ll then use this knowledge to fix the `deal` and `shuffle` functions.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/environments.html |
8 Environments
==============
Your deck is now ready for a game of blackjack (or hearts or war), but are your `shuffle` and `deal` functions up to snuff? Definitely not. For example, `deal` deals the same card over and over again:
```
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
```
And the `shuffle` function doesn’t actually shuffle `deck` (it returns a copy of `deck` that has been shuffled). In short, both of these functions use `deck`, but neither manipulates `deck`—and we would like them to.
To fix these functions, you will need to learn how R stores, looks up, and manipulates objects like `deck`. R does all of these things with the help of an environment system.
8\.1 Environments
-----------------
Consider for a moment how your computer stores files. Every file is saved in a folder, and each folder is saved in another folder, which forms a hierarchical file system. If your computer wants to open up a file, it must first look up the file in this file system.
You can see your file system by opening a finder window. For example, Figure [8\.1](environments.html#fig:folders) shows part of the file system on my computer. I have tons of folders. Inside one of them is a subfolder named *Documents*, inside of that subfolder is a sub\-subfolder named *ggsubplot*, inside of that folder is a folder named *inst*, inside of that is a folder named *doc*, and inside of that is a file named *manual.pdf*.
Figure 8\.1: Your computer arranges files into a hierarchy of folders and subfolders. To look at a file, you need to find where it is saved in the file system.
R uses a similar system to save R objects. Each object is saved inside of an environment, a list\-like object that resembles a folder on your computer. Each environment is connected to a *parent environment*, a higher\-level environment, which creates a hierarchy of environments.
You can see R’s environment system with the `parenvs` function in the pryr package (note that `parenvs` shipped in the pryr package when this book was first published). `parenvs(all = TRUE)` will return a list of the environments that your R session is using. The actual output will vary from session to session depending on which packages you have loaded. Here’s the output from my current session:
```
library(pryr)
parenvs(all = TRUE)
## label name
## 1 <environment: R_GlobalEnv> ""
## 2 <environment: package:pryr> "package:pryr"
## 3 <environment: 0x7fff3321c388> "tools:rstudio"
## 4 <environment: package:stats> "package:stats"
## 5 <environment: package:graphics> "package:graphics"
## 6 <environment: package:grDevices> "package:grDevices"
## 7 <environment: package:utils> "package:utils"
## 8 <environment: package:datasets> "package:datasets"
## 9 <environment: package:methods> "package:methods"
## 10 <environment: 0x7fff3193dab0> "Autoloads"
## 11 <environment: base> ""
## 12 <environment: R_EmptyEnv> ""
```
It takes some imagination to interpret this output, so let’s visualize the environments as a system of folders, Figure [8\.2](environments.html#fig:environments). You can think of the environment tree like this. The lowest\-level environment is named `R_GlobalEnv` and is saved inside an environment named `package:pryr`, which is saved inside the environment named `0x7fff3321c388`, and so on, until you get to the final, highest\-level environment, `R_EmptyEnv`. `R_EmptyEnv` is the only R environment that does not have a parent environment.
Figure 8\.2: R stores R objects in an environment tree that resembles your computer’s folder system.
Remember that this example is just a metaphor. R’s environments exist in RAM, not in your file system. Also, R environments aren’t technically saved inside one another. Each environment is connected to a parent environment, which makes it easy to search up R’s environment tree. But this connection is one\-way: there’s no way to look at one environment and tell what its “children” are. So you cannot search down R’s environment tree. In other ways, though, R’s environment system works similarly to a file system.
8\.2 Working with Environments
------------------------------
R comes with some helper functions that you can use to explore your environment tree. First, you can refer to any of the environments in your tree with `as.environment`. `as.environment` takes an environment name (as a character string) and returns the corresponding environment:
```
as.environment("package:stats")
## <environment: package:stats>
## attr(,"name")
## [1] "package:stats"
## attr(,"path")
## [1] "/Library/Frameworks/R.framework/Versions/3.0/Resources/library/stats"
```
Three environments in your tree also come with their own accessor functions. These are the global environment (`R_GlobalEnv`), the base environment (`base`), and the empty environment (`R_EmptyEnv`). You can refer to them with:
```
globalenv()
## <environment: R_GlobalEnv>
baseenv()
## <environment: base>
emptyenv()
## <environment: R_EmptyEnv>
```
Next, you can look up an environment’s parent with `parent.env`:
```
parent.env(globalenv())
## <environment: package:pryr>
## attr(,"name")
## [1] "package:pryr"
## attr(,"path")
## [1] "/Library/Frameworks/R.framework/Versions/3.0/Resources/library/pryr"
```
Notice that the empty environment is the only R environment without a parent:
```
parent.env(emptyenv())
## Error in parent.env(emptyenv()) : the empty environment has no parent
```
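Taken together, `parent.env` and `emptyenv` let you climb the whole tree yourself. Here is a minimal sketch using base R’s `environmentName`; the exact environments printed will vary from session to session:
```
e <- globalenv()
# follow parent links until we hit the top of the tree
while (!identical(e, emptyenv())) {
  print(environmentName(e))
  e <- parent.env(e)
}
## "R_GlobalEnv"
## "package:pryr"
## ...
```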
You can view the objects saved in an environment with `ls` or `ls.str`. `ls` will return just the object names, but `ls.str` will display a little about each object’s structure:
```
ls(emptyenv())
## character(0)
ls(globalenv())
## "deal" "deck" "deck2" "deck3" "deck4" "deck5"
## "die" "gender" "hand" "lst" "mat" "mil"
## "new" "now" "shuffle" "vec"
```
The empty environment is—not surprisingly—empty; the base environment has too many objects to list here; and the global environment has some familiar faces. It is where R has saved all of the objects that you’ve created so far.
RStudio’s environment pane displays all of the objects in your global environment.
You can use R’s `$` syntax to access an object in a specific environment. For example, you can access `deck` from the global environment:
```
head(globalenv()$deck, 3)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
```
And you can use the `assign` function to save an object into a particular environment. First give `assign` the name of the new object (as a character string). Then give `assign` the value of the new object, and finally the environment to save the object in:
```
assign("new", "Hello Global", envir = globalenv())
globalenv()$new
## "Hello Global"
```
Notice that `assign` works similarly to `<-`. If an object already exists with the given name in the given environment, `assign` will overwrite it without asking for permission. This makes `assign` useful for updating objects but creates the potential for heartache.
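If the silent overwrite worries you, base R’s `exists` function lets you check for a name collision before you assign. A short sketch; the message text is just for illustration:
```
if (exists("new", envir = globalenv(), inherits = FALSE)) {
  message("'new' already exists and is about to be overwritten")
}
assign("new", "Hello Again", envir = globalenv())
```
Setting `inherits = FALSE` stops `exists` from searching parent environments, so the test applies to the global environment alone.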
Now that you can explore R’s environment tree, let’s examine how R uses it. R works closely with the environment tree to look up objects, store objects, and evaluate functions. How R does each of these tasks will depend on the current active environment.
### 8\.2\.1 The Active Environment
At any given moment, R is working closely with a single environment. R will store new objects in this environment (if you create any), and R will use this environment as a starting point to look up existing objects (if you call any). I’ll call this special environment the *active environment*. The active environment is usually the global environment, but this may change when you run a function.
You can use `environment` to see the current active environment:
```
environment()
## <environment: R_GlobalEnv>
```
The global environment plays a special role in R. It is the active environment for every command that you run at the command line. As a result, any object that you create at the command line will be saved in the global environment. You can think of the global environment as your user workspace.
When you call an object at the command line, R will look for it first in the global environment. But what if the object is not there? In that case, R will follow a series of rules to look up the object.
8\.3 Scoping Rules
------------------
R follows a special set of rules to look up objects. These rules are known as R’s scoping rules, and you’ve already met a couple of them:
1. R looks for objects in the current active environment.
2. When you work at the command line, the active environment is the global environment. Hence, R looks up objects that you call at the command line in the global environment.
Here is a third rule that explains how R finds objects that are not in the active environment:
3. When R does not find an object in an environment, R looks in the environment’s parent environment, then the parent of the parent, and so on, until R finds the object or reaches the empty environment.
So, if you call an object at the command line, R will look for it in the global environment. If R can’t find it there, R will look in the parent of the global environment, and then the parent of the parent, and so on, working its way up the environment tree until it finds the object, as in Figure [8\.3](environments.html#fig:path). If R cannot find the object in any environment, it will return an error that says the object is not found.
Figure 8\.3: R will search for an object by name in the active environment, here the global environment. If R does not find the object there, it will search in the active environment’s parent, and then the parent’s parent, and so on until R finds the object or runs out of environments.
Remember that functions are a type of object in R. R will store and look up functions the same way it stores and looks up other objects, by searching for them by name in the environment tree.
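You can watch the scoping rules at work with a short sketch. The function below never defines `x`, so R finds it by searching upward into the global environment; remove `x` and the search fails (this assumes no other `x` lives higher up your tree):
```
x <- 10
f <- function() x + 1 # x is not defined inside f
f()
## 11
rm(x)
f()
## Error in f() : object 'x' not found
```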
8\.4 Assignment
---------------
When you assign a value to an object, R saves the value in the active environment under the object’s name. If an object with the same name already exists in the active environment, R will overwrite it.
For example, an object named `new` exists in the global environment:
```
new
## "Hello Global"
```
You can save a new object named `new` to the global environment with this command. R will overwrite the old object as a result:
```
new <- "Hello Active"
new
## "Hello Active"
```
This arrangement creates a quandary for R whenever R runs a function. Many functions save temporary objects that help them do their jobs. For example, the `roll` function from [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice) saved an object named `die` and an object named `dice`:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
R must save these temporary objects in the active environment; but if R does that, it may overwrite existing objects. Function authors cannot guess ahead of time which names may already exist in your active environment. How does R avoid this risk? Every time R runs a function, it creates a new active environment to evaluate the function in.
8\.5 Evaluation
---------------
R creates a new environment *each* time it evaluates a function. R will use the new environment as the active environment while it runs the function, and then R will return to the environment that you called the function from, bringing the function’s result with it. Let’s call these new environments *runtime environments* because R creates them at runtime to evaluate functions.
We’ll use the following function to explore R’s runtime environments. We want to know what the environments look like: what are their parent environments, and what objects do they contain? `show_env` is designed to tell us:
```
show_env <- function(){
list(ran.in = environment(),
parent = parent.env(environment()),
objects = ls.str(environment()))
}
```
`show_env` is itself a function, so when we call `show_env()`, R will create a runtime environment to evaluate the function in. The results of `show_env` will tell us the name of the runtime environment, its parent, and which objects the runtime environment contains:
```
show_env()
## $ran.in
## <environment: 0x7ff711d12e28>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
```
The results reveal that R created a new environment named `0x7ff711d12e28` to run `show_env()` in. The environment had no objects in it, and its parent was the `global environment`. So for purposes of running `show_env`, R’s environment tree looked like Figure [8\.4](environments.html#fig:tree).
Let’s run `show_env` again:
```
show_env()
## $ran.in
## <environment: 0x7ff715f49808>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
```
This time `show_env` ran in a new environment, `0x7ff715f49808`. R creates a new environment *each* time you run a function. The `0x7ff715f49808` environment looks exactly the same as `0x7ff711d12e28`. It is empty and, like the first, has the global environment as its parent.
Figure 8\.4: R creates a new environment to run show\_env in. The environment is a child of the global environment.
Now let’s consider which environment R will use as the parent of the runtime environment.
R will connect a function’s runtime environment to the environment that the function *was first created in*. This environment plays an important role in the function’s life—because all of the function’s runtime environments will use it as a parent. Let’s call this environment the *origin environment*. You can look up a function’s origin environment by running `environment` on the function:
```
environment(show_env)
## <environment: R_GlobalEnv>
```
The origin environment of `show_env` is the global environment because we created `show_env` at the command line, but the origin environment does not need to be the global environment. For example, the environment of `parenvs` is the `pryr` package:
```
environment(parenvs)
## <environment: namespace:pryr>
```
In other words, the parent of a runtime environment will not always be the global environment; it will be whichever environment the function was first created in.
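You can check this with a quick sketch: a function created inside another function keeps that function’s runtime environment as its origin. The hexadecimal name will differ in your session:
```
make_fn <- function() {
  function() "hello" # created inside make_fn's runtime environment
}
g <- make_fn()
environment(g) # not the global environment
## <environment: 0x...>
```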
Finally, let’s look at the objects contained in a runtime environment. At the moment, `show_env`’s runtime environments do not contain any objects, but that is easy to fix. Just have `show_env` create some objects in its body of code. R will store any objects created by `show_env` in its runtime environment. Why? Because the runtime environment will be the active environment when those objects are created:
```
show_env <- function(){
a <- 1
b <- 2
c <- 3
list(ran.in = environment(),
parent = parent.env(environment()),
objects = ls.str(environment()))
}
```
This time when we run `show_env`, R stores `a`, `b`, and `c` in the runtime environment:
```
show_env()
## $ran.in
## <environment: 0x7ff712312cd0>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
## a : num 1
## b : num 2
## c : num 3
```
This is how R ensures that a function does not overwrite anything that it shouldn’t. Any objects created by the function are stored in a safe, out\-of\-the\-way runtime environment.
R will also put a second type of object in a runtime environment. If a function has arguments, R will copy over each argument to the runtime environment. The argument will appear as an object that has the name of the argument but the value of whatever input the user provided for the argument. This ensures that a function will be able to find and use each of its arguments:
```
foo <- "take me to your runtime"
show_env <- function(x = foo){
list(ran.in = environment(),
parent = parent.env(environment()),
objects = ls.str(environment()))
}
show_env()
## $ran.in
## <environment: 0x7ff712398958>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
## x : chr "take me to your runtime"
```
Let’s put this all together to see how R evaluates a function. Before you call a function, R is working in an active environment; let’s call this the *calling environment*. It is the environment R calls the function from.
Then you call the function. R responds by setting up a new runtime environment. This environment will be a child of the function’s origin environment. R will copy each of the function’s arguments into the runtime environment and then make the runtime environment the new active environment.
Next, R runs the code in the body of the function. If the code creates any objects, R stores them in the active, that is, runtime environment. If the code calls any objects, R uses its scoping rules to look them up. R will search the runtime environment, then the parent of the runtime environment (which will be the origin environment), then the parent of the origin environment, and so on. Notice that the calling environment might not be on the search path. Usually, a function will only call its arguments, which R can find in the active runtime environment.
Finally, R finishes running the function. It switches the active environment back to the calling environment. Now R executes any other commands in the line of code that called the function. So if you save the result of the function to an object with `<-`, the new object will be stored in the calling environment.
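The whole cycle fits in a few lines. In this sketch, `temp` exists only in the runtime environment, while the returned value travels back to the calling environment (this assumes you have not created a global object named `temp`):
```
f <- function() {
  temp <- "only exists at runtime"
  toupper(temp) # the return value goes back to the caller
}
result <- f() # result is saved in the calling environment
result
## "ONLY EXISTS AT RUNTIME"
exists("temp") # temp vanished with the runtime environment
## FALSE
```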
To recap, R stores its objects in an environment system. At any given moment, R is working closely with a single active environment. It stores new objects in this environment, and it uses the environment as a starting point when it searches for existing objects. R’s active environment is usually the global environment, but R will adjust the active environment to do things like run functions in a safe manner.
How can you use this knowledge to fix the `deal` and `shuffle` functions?
First, let’s start with a warm\-up question. Suppose I redefine `deal` at the command line like this:
```
deal <- function() {
deck[1, ]
}
```
Notice that `deal` no longer takes an argument, and it calls the `deck` object, which lives in the global environment.
**Exercise 8\.1 (Will deal work?)** Will R be able to find `deck` and return an answer when I call the new version of `deal`, such as `deal()`?
*Solution.* Yes. `deal` will still work the same as before. R will run `deal` in a runtime environment that is a child of the global environment. Why will it be a child of the global environment? Because the global environment is the origin environment of `deal` (we defined `deal` in the global environment):
```
environment(deal)
## <environment: R_GlobalEnv>
```
When `deal` calls `deck`, R will need to look up the `deck` object. R’s scoping rules will lead it to the version of `deck` in the global environment, as in Figure [8\.5](environments.html#fig:deal). `deal` works as expected as a result:
```
deal()
## face suit value
## king spades 13
```
Figure 8\.5: R finds deck by looking in the parent of deal’s runtime environment. The parent is the global environment, deal’s origin environment. Here, R finds the copy of deck.
Now let’s fix the `deal` function to remove the cards it has dealt from `deck`. Recall that `deal` returns the top card of `deck` but does not remove the card from the deck. As a result, `deal` always returns the same card:
```
deal()
## face suit value
## king spades 13
deal()
## face suit value
## king spades 13
```
You know enough R syntax to remove the top card of `deck`. The following code will save a pristine copy of `deck` and then remove the top card:
```
DECK <- deck
deck <- deck[-1, ]
head(deck, 3)
## face suit value
## queen spades 12
## jack spades 11
## ten spades 10
```
Now let’s add the code to `deal`. Here `deal` saves (and then returns) the top card of `deck`. In between, it removes the card from `deck`…or does it?
```
deal <- function() {
card <- deck[1, ]
deck <- deck[-1, ]
card
}
```
This code won’t work because R will be in a runtime environment when it executes `deck <- deck[-1, ]`. Instead of overwriting the global copy of `deck` with `deck[-1, ]`, `deal` will just create a slightly altered copy of `deck` in its runtime environment, as in Figure [8\.6](environments.html#fig:second-deck).
Figure 8\.6: The deal function looks up deck in the global environment but saves deck\[\-1, ] in the runtime environment as a new object named deck.
**Exercise 8\.2 (Overwrite deck)** Rewrite the `deck <- deck[-1, ]` line of `deal` to *assign* `deck[-1, ]` to an object named `deck` in the global environment. Hint: consider the `assign` function.
*Solution.* You can assign an object to a specific environment with the `assign` function:
```
deal <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = globalenv())
card
}
```
Now `deal` will finally clean up the global copy of `deck`, and we can `deal` cards just as we would in real life:
```
deal()
## face suit value
## queen spades 12
deal()
## face suit value
## jack spades 11
deal()
## face suit value
## ten spades 10
```
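As an aside the text does not take: base R also offers the superassignment operator `<<-`, which searches a function’s enclosing environments for an existing object to overwrite. Because `deal`’s origin environment is the global environment, this sketch behaves the same as the `assign` version above:
```
deal <- function() {
  card <- deck[1, ]
  deck <<- deck[-1, ] # <<- walks up from the origin environment to find deck
  card
}
```
The `assign` call has the advantage of naming its target environment explicitly, which makes the side effect easier to spot.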
Let’s turn our attention to the `shuffle` function:
```
shuffle <- function(cards) {
random <- sample(1:52, size = 52)
cards[random, ]
}
```
`shuffle(deck)` doesn’t shuffle the `deck` object; it returns a shuffled copy of the `deck` object:
```
head(deck, 3)
## face suit value
## nine spades 9
## eight spades 8
## seven spades 7
a <- shuffle(deck)
head(deck, 3)
## face suit value
## nine spades 9
## eight spades 8
## seven spades 7
head(a, 3)
## face suit value
## ace diamonds 1
## seven clubs 7
## two clubs 2
```
This behavior is now undesirable in two ways. First, `shuffle` fails to shuffle `deck`. Second, `shuffle` returns a copy of `deck`, which may be missing the cards that have been dealt away. It would be better if `shuffle` returned the dealt cards to the deck and then shuffled. This is what happens when you shuffle a deck of cards in real life.
**Exercise 8\.3 (Rewrite shuffle)** Rewrite `shuffle` so that it replaces the copy of `deck` that lives in the global environment with a shuffled version of `DECK`, the intact copy of `deck` that also lives in the global environment. The new version of `shuffle` should have no arguments and return no output.
*Solution.* You can update `shuffle` in the same way that you updated `deck`. The following version will do the job:
```
shuffle <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = globalenv())
}
```
Since `DECK` lives in the global environment, `shuffle`’s environment of origin, `shuffle` will be able to find `DECK` at runtime. R will search for `DECK` first in `shuffle`’s runtime environment, and then in `shuffle`’s origin environment—the global environment—which is where `DECK` is stored.
The second line of `shuffle` will create a reordered copy of `DECK` and save it as `deck` in the global environment. This will overwrite the previous, nonshuffled version of `deck`.
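As a quick sanity check (assuming `DECK` is the standard 52\-card deck built in the earlier chapters), shuffling restores the global `deck` to its full size no matter how many cards have been dealt:
```
shuffle()
nrow(deck) # DECK[random, ] always contains all 52 rows
## 52
```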
8\.6 Closures
-------------
Our system finally works. For example, you can shuffle the cards and then deal a hand of blackjack:
```
shuffle()
deal()
## face suit value
## queen hearts 12
deal()
## face suit value
## eight hearts 8
```
But the system requires `deck` and `DECK` to exist in the global environment. Lots of things happen in this environment, and it is possible that `deck` may get modified or erased by accident.
It would be better if we could store `deck` in a safe, out\-of\-the\-way place, like one of those safe, out\-of\-the\-way environments that R creates to run functions in. In fact, storing `deck` in a runtime environment is not such a bad idea.
You could create a function that takes `deck` as an argument and saves a copy of `deck` as `DECK`. The function could also save its own copies of `deal` and `shuffle`:
```
setup <- function(deck) {
DECK <- deck
DEAL <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = globalenv())
card
}
SHUFFLE <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = globalenv())
}
}
```
When you run `setup`, R will create a runtime environment to store these objects in. The environment will look like Figure [8\.7](environments.html#fig:closure1).
Now all of these things are safely out of the way in a child of the global environment. That makes them safe but hard to use. Let’s ask `setup` to return `DEAL` and `SHUFFLE` so we can use them. The best way to do this is to return the functions as a list:
```
setup <- function(deck) {
DECK <- deck
DEAL <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = globalenv())
card
}
SHUFFLE <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = globalenv())
}
list(deal = DEAL, shuffle = SHUFFLE)
}
cards <- setup(deck)
```
Figure 8\.7: Running setup will store deck and DECK in an out\-of\-the\-way place, and create a DEAL and SHUFFLE function. Each of these objects will be stored in an environment whose parent is the global environment.
Then you can save each of the elements of the list to a dedicated object in the global environment:
```
deal <- cards$deal
shuffle <- cards$shuffle
```
Now you can run `deal` and `shuffle` just as before. Each object contains the same code as the original `deal` and `shuffle`:
```
deal
## function() {
## card <- deck[1, ]
## assign("deck", deck[-1, ], envir = globalenv())
## card
## }
## <environment: 0x7ff7169c3390>
shuffle
## function(){
## random <- sample(1:52, size = 52)
## assign("deck", DECK[random, ], envir = globalenv())
## }
## <environment: 0x7ff7169c3390>
```
However, the functions now have one important difference. Their origin environment is no longer the global environment (although `deal` and `shuffle` *are* currently saved there). Their origin environment is the runtime environment that R made when you ran `setup`. That’s where R created `DEAL` and `SHUFFLE`, the functions copied into the new `deal` and `shuffle`, as the following output shows:
```
environment(deal)
## <environment: 0x7ff7169c3390>
environment(shuffle)
## <environment: 0x7ff7169c3390>
```
Why does this matter? Because now when you run `deal` or `shuffle`, R will evaluate the functions in a runtime environment that uses `0x7ff7169c3390` as its parent. `DECK` and `deck` will be in this parent environment, which means that `deal` and `shuffle` will be able to find them at runtime. `DECK` and `deck` will be in the functions’ search path but still out of the way in every other respect, as shown in Figure [8\.8](environments.html#fig:closure2).
Figure 8\.8: Now deal and shuffle will be run in an environment that has the protected deck and DECK in its search path.
This arrangement is called a *closure*. `setup`’s runtime environment “encloses” the `deal` and `shuffle` functions. Both `deal` and `shuffle` can work closely with the objects contained in the enclosing environment, but almost nothing else can. The enclosing environment is not on the search path for any other R function or environment.
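Closures are a general R idiom, not just a card\-game trick. Here is a classic sketch (the names are hypothetical, not from the text): each counter made by `make_counter` keeps its own `total` in its enclosing environment and updates it with the superassignment operator `<<-`:
```
make_counter <- function() {
  total <- 0 # stored in make_counter's runtime environment
  function() {
    total <<- total + 1 # updates total in the enclosing environment
    total
  }
}
count <- make_counter()
count()
## 1
count()
## 2
```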
You may have noticed that `deal` and `shuffle` still update the `deck` object in the global environment. Don’t worry, we’re about to change that. We want `deal` and `shuffle` to work exclusively with the objects in the parent (enclosing) environment of their runtime environments. Instead of having each function reference the global environment to update `deck`, you can have them reference their parent environment at runtime, as shown in Figure [8\.9](environments.html#fig:closure3):
```
setup <- function(deck) {
DECK <- deck
DEAL <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = parent.env(environment()))
card
}
SHUFFLE <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = parent.env(environment()))
}
list(deal = DEAL, shuffle = SHUFFLE)
}
cards <- setup(deck)
deal <- cards$deal
shuffle <- cards$shuffle
```
Figure 8\.9: When you change your code, deal and shuffle will go from updating the global environment (left) to updating their parent environment (right).
We finally have a self\-contained card game. You can delete (or modify) the global copy of `deck` as much as you want and still play cards. `deal` and `shuffle` will use the pristine, protected copy of `deck`:
```
rm(deck)
shuffle()
deal()
## face suit value
## ace hearts 1
deal()
## face suit value
## jack clubs 11
```
Blackjack!
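If you ever need to debug the game, you can still peek inside the enclosing environment; `environment(deal)` returns the runtime environment that `setup` created. A small sketch (the `env` name is hypothetical):
```
env <- environment(deal)
ls(env) # DEAL, DECK, SHUFFLE, plus the deck argument
head(env$deck, 1) # the protected working copy of the deck
```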
8\.7 Summary
------------
R saves its objects in an environment system that resembles your computer’s file system. If you understand this system, you can predict how R will look up objects. If you call an object at the command line, R will look for the object in the global environment and then the parents of the global environment, working its way up the environment tree one environment at a time.
R will use a slightly different search path when you call an object from inside of a function. When you run a function, R creates a new environment to execute commands in. This environment will be a child of the environment where the function was originally defined. This may be the global environment, but it also may not be. You can use this behavior to create closures, which are functions linked to objects in protected environments.
As you become familiar with R’s environment system, you can use it to produce elegant results, like we did here. However, the real value of understanding the environment system comes from knowing how R functions do their job. You can use this knowledge to figure out what is going wrong when a function does not perform as expected.
8\.8 Project 2 Wrap\-up
-----------------------
You now have full control over the data sets and values that you load into R. You can store data as R objects, you can retrieve and manipulate data values at will, and you can even predict how R will store and look up your objects in your computer’s memory.
You may not realize it yet, but your expertise makes you a powerful, computer\-augmented data user. You can use R to save and work with larger data sets than you could otherwise handle. So far we’ve only worked with `deck`, a small data set; but you can use the same techniques to work with any data set that fits in your computer’s memory.
However, storing data is not the only logistical task that you will face as a data scientist. You will often want to do tasks with your data that are so complex or repetitive that they are difficult to do without a computer. Some of these tasks can be done with functions that already exist in R and its packages, but others cannot. You will be the most versatile as a data scientist if you can write your own programs for computers to follow. R can help you do this. When you are ready, [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine) will teach you the most useful skills for writing programs in R.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/environments.html |
8 Environments
==============
Your deck is now ready for a game of blackjack (or hearts or war), but are your `shuffle` and `deal` functions up to snuff? Definitely not. For example, `deal` deals the same card over and over again:
```
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
deal(deck)
## face suit value
## king spades 13
```
And the `shuffle` function doesn’t actually shuffle `deck` (it returns a copy of `deck` that has been shuffled). In short, both of these functions use `deck`, but neither manipulates `deck`—and we would like them to.
To fix these functions, you will need to learn how R stores, looks up, and manipulates objects like `deck`. R does all of these things with the help of an environment system.
8\.1 Environments
-----------------
Consider for a moment how your computer stores files. Every file is saved in a folder, and each folder is saved in another folder, which forms a hierarchical file system. If your computer wants to open up a file, it must first look up the file in this file system.
You can see your file system by opening a finder window. For example, Figure [8\.1](environments.html#fig:folders) shows part of the file system on my computer. I have tons of folders. Inside one of them is a subfolder named *Documents*, inside of that subfolder is a sub\-subfolder named *ggsubplot*, inside of that folder is a folder named *inst*, inside of that is a folder named *doc*, and inside of that is a file named *manual.pdf*.
Figure 8\.1: Your computer arranges files into a hierarchy of folders and subfolders. To look at a file, you need to find where it is saved in the file system.
R uses a similar system to save R objects. Each object is saved inside of an environment, a list\-like object that resembles a folder on your computer. Each environment is connected to a *parent environment*, a higher\-level environment, which creates a hierarchy of environments.
You can see R’s environment system with the `parenvs` function in the pryr package (note that `parenvs` was part of the pryr package when this book was first published). `parenvs(all = TRUE)` will return a list of the environments that your R session is using. The actual output will vary from session to session depending on which packages you have loaded. Here’s the output from my current session:
```
library(pryr)
parenvs(all = TRUE)
## label name
## 1 <environment: R_GlobalEnv> ""
## 2 <environment: package:pryr> "package:pryr"
## 3 <environment: 0x7fff3321c388> "tools:rstudio"
## 4 <environment: package:stats> "package:stats"
## 5 <environment: package:graphics> "package:graphics"
## 6 <environment: package:grDevices> "package:grDevices"
## 7 <environment: package:utils> "package:utils"
## 8 <environment: package:datasets> "package:datasets"
## 9 <environment: package:methods> "package:methods"
## 10 <environment: 0x7fff3193dab0> "Autoloads"
## 11 <environment: base> ""
## 12 <environment: R_EmptyEnv> ""
```
It takes some imagination to interpret this output, so let’s visualize the environments as a system of folders, Figure [8\.2](environments.html#fig:environments). You can think of the environment tree like this. The lowest\-level environment is named `R_GlobalEnv` and is saved inside an environment named `package:pryr`, which is saved inside the environment named `0x7fff3321c388`, and so on, until you get to the final, highest\-level environment, `R_EmptyEnv`. `R_EmptyEnv` is the only R environment that does not have a parent environment.
Figure 8\.2: R stores R objects in an environment tree that resembles your computer’s folder system.
Remember that this example is just a metaphor. R’s environments exist in your computer’s RAM, not in your file system. Also, R environments aren’t technically saved inside one another. Each environment is connected to a parent environment, which makes it easy to search up R’s environment tree. But this connection is one\-way: there’s no way to look at one environment and tell what its “children” are. So you cannot search down R’s environment tree. In other ways, though, R’s environment system works similarly to a file system.
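You can watch this one\-way connection yourself with `new.env`, which the text does not cover. The sketch below (the `child` name is hypothetical) creates an environment, asks for its parent, and notes that the reverse lookup does not exist:
```
child <- new.env(parent = globalenv()) # a new environment, child of global
parent.env(child) # looking "up" is easy
## <environment: R_GlobalEnv>
# There is no base function that lists an environment's children;
# the connection only points upward.
```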
8\.2 Working with Environments
------------------------------
R comes with some helper functions that you can use to explore your environment tree. First, you can refer to any of the environments in your tree with `as.environment`. `as.environment` takes an environment name (as a character string) and returns the corresponding environment:
```
as.environment("package:stats")
## <environment: package:stats>
## attr(,"name")
## [1] "package:stats"
## attr(,"path")
## [1] "/Library/Frameworks/R.framework/Versions/3.0/Resources/library/stats"
```
Three environments in your tree also come with their own accessor functions. These are the global environment (`R_GlobalEnv`), the base environment (`base`), and the empty environment (`R_EmptyEnv`). You can refer to them with:
```
globalenv()
## <environment: R_GlobalEnv>
baseenv()
## <environment: base>
emptyenv()
##<environment: R_EmptyEnv>
```
Next, you can look up an environment’s parent with `parent.env`:
```
parent.env(globalenv())
## <environment: package:pryr>
## attr(,"name")
## [1] "package:pryr"
## attr(,"path")
## [1] "/Library/Frameworks/R.framework/Versions/3.0/Resources/library/pryr"
```
Notice that the empty environment is the only R environment without a parent:
```
parent.env(emptyenv())
## Error in parent.env(emptyenv()) : the empty environment has no parent
```
You can view the objects saved in an environment with `ls` or `ls.str`. `ls` will return just the object names, but `ls.str` will display a little about each object’s structure:
```
ls(emptyenv())
## character(0)
ls(globalenv())
## "deal" "deck" "deck2" "deck3" "deck4" "deck5"
## "die" "gender" "hand" "lst" "mat" "mil"
## "new" "now" "shuffle" "vec"
```
The empty environment is—not surprisingly—empty; the base environment has too many objects to list here; and the global environment has some familiar faces. It is where R has saved all of the objects that you’ve created so far.
RStudio’s environment pane displays all of the objects in your global environment.
You can use R’s `$` syntax to access an object in a specific environment. For example, you can access `deck` from the global environment:
```
head(globalenv()$deck, 3)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
```
And you can use the `assign` function to save an object into a particular environment. First give `assign` the name of the new object (as a character string). Then give `assign` the value of the new object, and finally the environment to save the object in:
```
assign("new", "Hello Global", envir = globalenv())
globalenv()$new
## "Hello Global"
```
Notice that `assign` works similarly to `<-`. If an object already exists with the given name in the given environment, `assign` will overwrite it without asking for permission. This makes `assign` useful for updating objects but creates the potential for heartache.
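Two complementary helpers, which the text does not show, round out the picture: `exists` reports whether a name can be found starting from an environment, and `get` retrieves the object by name:
```
exists("new", envir = globalenv())
## TRUE
get("new", envir = globalenv())
## "Hello Global"
```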
Now that you can explore R’s environment tree, let’s examine how R uses it. R works closely with the environment tree to look up objects, store objects, and evaluate functions. How R does each of these tasks will depend on the current active environment.
### 8\.2\.1 The Active Environment
At any given moment, R is working closely with a single environment. R will store new objects in this environment (if you create any), and R will use this environment as a starting point to look up existing objects (if you call any). I’ll call this special environment the *active environment*. The active environment is usually the global environment, but this may change when you run a function.
You can use `environment` to see the current active environment:
```
environment()
<environment: R_GlobalEnv>
```
The global environment plays a special role in R. It is the active environment for every command that you run at the command line. As a result, any object that you create at the command line will be saved in the global environment. You can think of the global environment as your user workspace.
When you call an object at the command line, R will look for it first in the global environment. But what if the object is not there? In that case, R will follow a series of rules to look up the object.
8\.3 Scoping Rules
------------------
R follows a special set of rules to look up objects. These rules are known as R’s scoping rules, and you’ve already met a couple of them:
1. R looks for objects in the current active environment.
2. When you work at the command line, the active environment is the global environment. Hence, R looks up objects that you call at the command line in the global environment.
Here is a third rule that explains how R finds objects that are not in the active environment:
3. When R does not find an object in an environment, R looks in the environment’s parent environment, then the parent of the parent, and so on, until R finds the object or reaches the empty environment.
So, if you call an object at the command line, R will look for it in the global environment. If R can’t find it there, R will look in the parent of the global environment, and then the parent of the parent, and so on, working its way up the environment tree until it finds the object, as in Figure [8\.3](environments.html#fig:path). If R cannot find the object in any environment, it will return an error that says the object is not found.
Figure 8\.3: R will search for an object by name in the active environment, here the global environment. If R does not find the object there, it will search in the active environment’s parent, and then the parent’s parent, and so on until R finds the object or runs out of environments.
Remember that functions are a type of object in R. R will store and look up functions the same way it stores and looks up other objects, by searching for them by name in the environment tree.
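A quick sketch confirms this: you can retrieve a function by name with `get`, and `environment` reveals where a package function was created:
```
is.function(get("mean")) # get() finds functions like any other object
## TRUE
environment(mean) # mean was created in the base package's namespace
## <environment: namespace:base>
```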
8\.4 Assignment
---------------
When you assign a value to an object, R saves the value in the active environment under the object’s name. If an object with the same name already exists in the active environment, R will overwrite it.
For example, an object named `new` exists in the global environment:
```
new
## "Hello Global"
```
You can save a new object named `new` to the global environment with this command. R will overwrite the old object as a result:
```
new <- "Hello Active"
new
## "Hello Active"
```
This arrangement creates a quandary for R whenever R runs a function. Many functions save temporary objects that help them do their jobs. For example, the `roll` function from [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice) saved an object named `die` and an object named `dice`:
```
roll <- function() {
die <- 1:6
dice <- sample(die, size = 2, replace = TRUE)
sum(dice)
}
```
R must save these temporary objects in the active environment; but if R does that, it may overwrite existing objects. Function authors cannot guess ahead of time which names may already exist in your active environment. How does R avoid this risk? Every time R runs a function, it creates a new active environment to evaluate the function in.
8\.5 Evaluation
---------------
R creates a new environment *each* time it evaluates a function. R will use the new environment as the active environment while it runs the function, and then R will return to the environment that you called the function from, bringing the function’s result with it. Let’s call these new environments *runtime environments* because R creates them at runtime to evaluate functions.
We’ll use the following function to explore R’s runtime environments. We want to know what the environments look like: what are their parent environments, and what objects do they contain? `show_env` is designed to tell us:
```
show_env <- function(){
list(ran.in = environment(),
parent = parent.env(environment()),
objects = ls.str(environment()))
}
```
`show_env` is itself a function, so when we call `show_env()`, R will create a runtime environment to evaluate the function in. The results of `show_env` will tell us the name of the runtime environment, its parent, and which objects the runtime environment contains:
```
show_env()
## $ran.in
## <environment: 0x7ff711d12e28>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
```
The results reveal that R created a new environment named `0x7ff711d12e28` to run `show_env()` in. The environment had no objects in it, and its parent was the global environment. So for purposes of running `show_env`, R’s environment tree looked like Figure [8\.4](environments.html#fig:tree).
Let’s run `show_env` again:
```
show_env()
## $ran.in
## <environment: 0x7ff715f49808>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
```
This time `show_env` ran in a new environment, `0x7ff715f49808`. R creates a new environment *each* time you run a function. The `0x7ff715f49808` environment looks exactly the same as `0x7ff711d12e28`. It is empty and has the same parent, the global environment.
Figure 8\.4: R creates a new environment to run show\_env in. The environment is a child of the global environment.
Now let’s consider which environment R will use as the parent of the runtime environment.
R will connect a function’s runtime environment to the environment that the function *was first created in*. This environment plays an important role in the function’s life—because all of the function’s runtime environments will use it as a parent. Let’s call this environment the *origin environment*. You can look up a function’s origin environment by running `environment` on the function:
```
environment(show_env)
## <environment: R_GlobalEnv>
```
The origin environment of `show_env` is the global environment because we created `show_env` at the command line, but the origin environment does not need to be the global environment. For example, the environment of `parenvs` is the `pryr` package:
```
environment(parenvs)
## <environment: namespace:pryr>
```
In other words, the parent of a runtime environment will not always be the global environment; it will be whichever environment the function was first created in.
Finally, let’s look at the objects contained in a runtime environment. At the moment, `show_env`’s runtime environments do not contain any objects, but that is easy to fix. Just have `show_env` create some objects in its body of code. R will store any objects created by `show_env` in its runtime environment. Why? Because the runtime environment will be the active environment when those objects are created:
```
show_env <- function(){
a <- 1
b <- 2
c <- 3
list(ran.in = environment(),
parent = parent.env(environment()),
objects = ls.str(environment()))
}
```
This time when we run `show_env`, R stores `a`, `b`, and `c` in the runtime environment:
```
show_env()
## $ran.in
## <environment: 0x7ff712312cd0>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
## a : num 1
## b : num 2
## c : num 3
```
This is how R ensures that a function does not overwrite anything that it shouldn’t. Any objects created by the function are stored in a safe, out\-of\-the\-way runtime environment.
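You can check this for yourself. Here is a minimal sketch (it assumes you have not created your own object named `a` in the global environment):
```
env_info <- show_env()  # runs show_env, which creates a, b, and c in a runtime environment
exists("a", envir = globalenv(), inherits = FALSE)
## FALSE
```
Nothing named `a` appears in your workspace; the objects live only in the out\-of\-the\-way runtime environment.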
R will also put a second type of object in a runtime environment. If a function has arguments, R will copy over each argument to the runtime environment. The argument will appear as an object that has the name of the argument but the value of whatever input the user provided for the argument. This ensures that a function will be able to find and use each of its arguments:
```
foo <- "take me to your runtime"
show_env <- function(x = foo){
list(ran.in = environment(),
parent = parent.env(environment()),
objects = ls.str(environment()))
}
show_env()
## $ran.in
## <environment: 0x7ff712398958>
##
## $parent
## <environment: R_GlobalEnv>
##
## $objects
## x : chr "take me to your runtime"
```
Let’s put this all together to see how R evaluates a function. Before you call a function, R is working in an active environment; let’s call this the *calling environment*. It is the environment R calls the function from.
Then you call the function. R responds by setting up a new runtime environment. This environment will be a child of the function’s origin environment. R will copy each of the function’s arguments into the runtime environment and then make the runtime environment the new active environment.
Next, R runs the code in the body of the function. If the code creates any objects, R stores them in the active environment (that is, the runtime environment). If the code calls any objects, R uses its scoping rules to look them up. R will search the runtime environment, then the parent of the runtime environment (which will be the origin environment), then the parent of the origin environment, and so on. Notice that the calling environment might not be on the search path. Usually, a function will only call its arguments, which R can find in the active runtime environment.
Finally, R finishes running the function. It switches the active environment back to the calling environment. Now R executes any other commands in the line of code that called the function. So if you save the result of the function to an object with `<-`, the new object will be stored in the calling environment.
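One consequence of this search path deserves a demonstration. Here is a minimal sketch (with hypothetical functions `f` and `g`) showing that a function looks up objects in its origin environment, not in the environment it was called from:
```
x <- "global"
f <- function() x     # f's origin environment is the global environment
g <- function() {
  x <- "calling"      # this x lives only in g's runtime environment
  f()                 # f's search path skips g's runtime environment
}
g()
## "global"
```
`f` finds the `x` in the global environment because the global environment is `f`’s origin environment; the `x` inside `g` is never consulted.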
To recap, R stores its objects in an environment system. At any moment of time, R is working closely with a single active environment. It stores new objects in this environment, and it uses the environment as a starting point when it searches for existing objects. R’s active environment is usually the global environment, but R will adjust the active environment to do things like run functions in a safe manner.
How can you use this knowledge to fix the `deal` and `shuffle` functions?
First, let’s start with a warm\-up question. Suppose I redefine `deal` at the command line like this:
```
deal <- function() {
deck[1, ]
}
```
Notice that `deal` no longer takes an argument, and it calls the `deck` object, which lives in the global environment.
**Exercise 8\.1 (Will deal work?)** Will R be able to find `deck` and return an answer when I call the new version of `deal`, such as `deal()`?
*Solution.* Yes. `deal` will still work the same as before. R will run `deal` in a runtime environment that is a child of the global environment. Why will it be a child of the global environment? Because the global environment is the origin environment of `deal` (we defined `deal` in the global environment):
```
environment(deal)
## <environment: R_GlobalEnv>
```
When `deal` calls `deck`, R will need to look up the `deck` object. R’s scoping rules will lead it to the version of `deck` in the global environment, as in Figure [8\.5](environments.html#fig:deal). `deal` works as expected as a result:
```
deal()
## face suit value
## king spades 13
```
Figure 8\.5: R finds deck by looking in the parent of deal’s runtime environment. The parent is the global environment, deal’s origin environment. Here, R finds the copy of deck.
Now let’s fix the `deal` function to remove the cards it has dealt from `deck`. Recall that `deal` returns the top card of `deck` but does not remove the card from the deck. As a result, `deal` always returns the same card:
```
deal()
## face suit value
## king spades 13
deal()
## face suit value
## king spades 13
```
You know enough R syntax to remove the top card of `deck`. The following code will save a pristine copy of `deck` and then remove the top card:
```
DECK <- deck
deck <- deck[-1, ]
head(deck, 3)
## face suit value
## queen spades 12
## jack spades 11
## ten spades 10
```
Now let’s add the code to `deal`. Here `deal` saves (and then returns) the top card of `deck`. In between, it removes the card from `deck`…or does it?
```
deal <- function() {
card <- deck[1, ]
deck <- deck[-1, ]
card
}
```
This code won’t work because R will be in a runtime environment when it executes `deck <- deck[-1, ]`. Instead of overwriting the global copy of `deck` with `deck[-1, ]`, `deal` will just create a slightly altered copy of `deck` in its runtime environment, as in Figure [8\.6](environments.html#fig:second-deck).
Figure 8\.6: The deal function looks up deck in the global environment but saves deck\[\-1, ] in the runtime environment as a new object named deck.
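You can verify this for yourself. A quick sketch (assuming the `deck` and `deal` from above, with the top card already removed):
```
nrow(deck)      # the global deck has 51 cards at this point
## 51
card <- deal()  # deal trims deck only inside its runtime environment
nrow(deck)      # the global copy is untouched
## 51
```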
**Exercise 8\.2 (Overwrite deck)** Rewrite the `deck <- deck[-1, ]` line of `deal` to *assign* `deck[-1, ]` to an object named `deck` in the global environment. Hint: consider the `assign` function.
*Solution.* You can assign an object to a specific environment with the `assign` function:
```
deal <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = globalenv())
card
}
```
Now `deal` will finally clean up the global copy of `deck`, and we can `deal` cards just as we would in real life:
```
deal()
## face suit value
## queen spades 12
deal()
## face suit value
## jack spades 11
deal()
## face suit value
## ten spades 10
```
Let’s turn our attention to the `shuffle` function:
```
shuffle <- function(cards) {
random <- sample(1:52, size = 52)
cards[random, ]
}
```
`shuffle(deck)` doesn’t shuffle the `deck` object; it returns a shuffled copy of the `deck` object:
```
head(deck, 3)
## face suit value
## nine spades 9
## eight spades 8
## seven spades 7
a <- shuffle(deck)
head(deck, 3)
## face suit value
## nine spades 9
## eight spades 8
## seven spades 7
head(a, 3)
## face suit value
## ace diamonds 1
## seven clubs 7
## two clubs 2
```
This behavior is now undesirable in two ways. First, `shuffle` fails to shuffle `deck`. Second, `shuffle` returns a copy of `deck`, which may be missing the cards that have been dealt away. It would be better if `shuffle` returned the dealt cards to the deck and then shuffled. This is what happens when you shuffle a deck of cards in real life.
**Exercise 8\.3 (Rewrite shuffle)** Rewrite `shuffle` so that it replaces the copy of `deck` that lives in the global environment with a shuffled version of `DECK`, the intact copy of `deck` that also lives in the global environment. The new version of `shuffle` should have no arguments and return no output.
*Solution.* You can update `shuffle` in the same way that you updated `deck`. The following version will do the job:
```
shuffle <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = globalenv())
}
```
Since `DECK` lives in the global environment, `shuffle`’s environment of origin, `shuffle` will be able to find `DECK` at runtime. R will search for `DECK` first in `shuffle`’s runtime environment, and then in `shuffle`’s origin environment—the global environment—which is where `DECK` is stored.
The second line of `shuffle` will create a reordered copy of `DECK` and save it as `deck` in the global environment. This will overwrite the previous, nonshuffled version of `deck`.
8\.6 Closures
-------------
Our system finally works. For example, you can shuffle the cards and then deal a hand of blackjack:
```
shuffle()
deal()
## face suit value
## queen hearts 12
deal()
## face suit value
## eight hearts 8
```
But the system requires `deck` and `DECK` to exist in the global environment. Lots of things happen in this environment, and it is possible that `deck` may get modified or erased by accident.
It would be better if we could store `deck` in a safe, out\-of\-the\-way place, like one of those safe, out\-of\-the\-way environments that R creates to run functions in. In fact, storing `deck` in a runtime environment is not such a bad idea.
You could create a function that takes `deck` as an argument and saves a copy of `deck` as `DECK`. The function could also save its own copies of `deal` and `shuffle`:
```
setup <- function(deck) {
DECK <- deck
DEAL <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = globalenv())
card
}
SHUFFLE <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = globalenv())
}
}
```
When you run `setup`, R will create a runtime environment to store these objects in. The environment will look like Figure [8\.7](environments.html#fig:closure1).
Now all of these things are safely out of the way in a child of the global environment. That makes them safe but hard to use. Let’s ask `setup` to return `DEAL` and `SHUFFLE` so we can use them. The best way to do this is to return the functions as a list:
```
setup <- function(deck) {
DECK <- deck
DEAL <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = globalenv())
card
}
SHUFFLE <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = globalenv())
}
list(deal = DEAL, shuffle = SHUFFLE)
}
cards <- setup(deck)
```
Figure 8\.7: Running setup will store deck and DECK in an out\-of\-the\-way place, and create a DEAL and SHUFFLE function. Each of these objects will be stored in an environment whose parent is the global environment.
Then you can save each of the elements of the list to a dedicated object in the global environment:
```
deal <- cards$deal
shuffle <- cards$shuffle
```
Now you can run `deal` and `shuffle` just as before. Each object contains the same code as the original `deal` and `shuffle`:
```
deal
## function() {
## card <- deck[1, ]
## assign("deck", deck[-1, ], envir = globalenv())
## card
## }
## <environment: 0x7ff7169c3390>
shuffle
## function(){
## random <- sample(1:52, size = 52)
## assign("deck", DECK[random, ], envir = globalenv())
## }
## <environment: 0x7ff7169c3390>
```
However, the functions now have one important difference. Their origin environment is no longer the global environment (although `deal` and `shuffle` *are* currently saved there). Their origin environment is the runtime environment that R made when you ran `setup`. That’s where R created `DEAL` and `SHUFFLE`, the functions copied into the new `deal` and `shuffle`, as you can see:
```
environment(deal)
## <environment: 0x7ff7169c3390>
environment(shuffle)
## <environment: 0x7ff7169c3390>
```
Why does this matter? Because now when you run `deal` or `shuffle`, R will evaluate the functions in a runtime environment that uses `0x7ff7169c3390` as its parent. `DECK` and `deck` will be in this parent environment, which means that `deal` and `shuffle` will be able to find them at runtime. `DECK` and `deck` will be in the functions’ search path but still out of the way in every other respect, as shown in Figure [8\.8](environments.html#fig:closure2).
Figure 8\.8: Now deal and shuffle will be run in an environment that has the protected deck and DECK in its search path.
This arrangement is called a *closure*. `setup`’s runtime environment “encloses” the `deal` and `shuffle` functions. Both `deal` and `shuffle` can work closely with the objects contained in the enclosing environment, but almost nothing else can. The enclosing environment is not on the search path for any other R function or environment.
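You can peek inside the enclosing environment if you ask for it explicitly. A quick sketch (the order in which `ls` lists the objects may vary):
```
ls(environment(deal))
## "DEAL" "deck" "DECK" "SHUFFLE"
exists("DEAL", envir = globalenv(), inherits = FALSE)  # not in your workspace
## FALSE
```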
You may have noticed that `deal` and `shuffle` still update the `deck` object in the global environment. Don’t worry, we’re about to change that. We want `deal` and `shuffle` to work exclusively with the objects in the parent (enclosing) environment of their runtime environments. Instead of having each function reference the global environment to update `deck`, you can have them reference their parent environment at runtime, as shown in Figure [8\.9](environments.html#fig:closure3):
```
setup <- function(deck) {
DECK <- deck
DEAL <- function() {
card <- deck[1, ]
assign("deck", deck[-1, ], envir = parent.env(environment()))
card
}
SHUFFLE <- function(){
random <- sample(1:52, size = 52)
assign("deck", DECK[random, ], envir = parent.env(environment()))
}
list(deal = DEAL, shuffle = SHUFFLE)
}
cards <- setup(deck)
deal <- cards$deal
shuffle <- cards$shuffle
```
Figure 8\.9: When you change your code, deal and shuffle will go from updating the global environment (left) to updating their parent environment (right).
We finally have a self\-contained card game. You can delete (or modify) the global copy of `deck` as much as you want and still play cards. `deal` and `shuffle` will use the pristine, protected copy of `deck`:
```
rm(deck)
shuffle()
deal()
## face suit value
## ace hearts 1
deal()
## face suit value
## jack clubs 11
```
Blackjack!
8\.7 Summary
------------
R saves its objects in an environment system that resembles your computer’s file system. If you understand this system, you can predict how R will look up objects. If you call an object at the command line, R will look for the object in the global environment and then the parents of the global environment, working its way up the environment tree one environment at a time.
R will use a slightly different search path when you call an object from inside of a function. When you run a function, R creates a new environment to execute commands in. This environment will be a child of the environment where the function was originally defined. This may be the global environment, but it also may not be. You can use this behavior to create closures, which are functions linked to objects in protected environments.
As you become familiar with R’s environment system, you can use it to produce elegant results, like we did here. However, the real value of understanding the environment system comes from knowing how R functions do their job. You can use this knowledge to figure out what is going wrong when a function does not perform as expected.
8\.8 Project 2 Wrap\-up
-----------------------
You now have full control over the data sets and values that you load into R. You can store data as R objects, you can retrieve and manipulate data values at will, and you can even predict how R will store and look up your objects in your computer’s memory.
You may not realize it yet, but your expertise makes you a powerful, computer\-augmented data user. You can use R to save and work with larger data sets than you could otherwise handle. So far we’ve only worked with `deck`, a small data set; but you can use the same techniques to work with any data set that fits in your computer’s memory.
However, storing data is not the only logistical task that you will face as a data scientist. You will often want to do tasks with your data that are so complex or repetitive that they are difficult to do without a computer. Some of these tasks can be done with functions that already exist in R and its packages, but others cannot. You will be the most versatile as a data scientist if you can write your own programs for computers to follow. R can help you do this. When you are ready, [Project 3: Slot Machine](project-3-slot-machine.html#project-3-slot-machine) will teach you the most useful skills for writing programs in R.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/programs.html |
9 Programs
==========
In this chapter, you will build a real, working slot machine that you can play by running an R function. When you’re finished, you’ll be able to play it like this:
```
play()
## 0 0 DD
## $0
play()
## 7 7 7
## $80
```
The `play` function will need to do two things. First, it will need to randomly generate three symbols; and, second, it will need to calculate a prize based on those symbols.
The first step is easy to simulate. You can randomly generate three symbols with the `sample` function—just like you randomly “rolled” two dice in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice). The following function generates three symbols from a group of common slot machine symbols: diamonds (`DD`), sevens (`7`), triple bars (`BBB`), double bars (`BB`), single bars (`B`), cherries (`C`), and zeroes (`0`). The symbols are selected randomly, and each symbol appears with a different probability:
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
}
```
You can use `get_symbols` to generate the symbols used in your slot machine:
```
get_symbols()
## "BBB" "0" "C"
get_symbols()
## "0" "0" "0"
get_symbols()
## "7" "0" "B"
```
`get_symbols` uses the probabilities observed in a group of video lottery terminals from Manitoba, Canada. These slot machines became briefly controversial in the 1990s, when a reporter decided to test their payout rate. The machines appeared to pay out only 40 cents on the dollar, even though the manufacturer claimed they would pay out 92 cents on the dollar. The original data collected on the machines and a description of the controversy are available online in [a journal article by W. John Braun](http://bit.ly/jse_Braun). The controversy died down when additional testing showed that the manufacturer was correct.
The Manitoba slot machines use the complicated payout scheme shown in Table [9\.1](programs.html#tab:prizes). A player will win a prize if he gets:
* Three of the same type of symbol (except for three zeroes)
* Three bars (of mixed variety)
* One or more cherries
Otherwise, the player receives no prize.
The monetary value of the prize is determined by the exact combination of symbols and is further modified by the presence of diamonds. Diamonds are treated like “wild cards,” which means they can be considered any other symbol if it would increase a player’s prize. For example, a player who rolls `7` `7` `DD` would earn a prize for getting three sevens. There is one exception to this rule, however: a diamond cannot be considered a cherry unless the player also gets one real cherry. This prevents a dud roll like `0` `DD` `0` from being scored as `0` `C` `0`.
Diamonds are also special in another way. Every diamond that appears in a combination doubles the amount of the final prize. So `7` `7` `DD` would actually be scored *higher* than `7` `7` `7`. Three sevens would earn you $80, but two sevens and a diamond would earn you $160\. One seven and two diamonds would be even better, resulting in a prize that has been doubled twice, or $320\. A jackpot occurs when a player rolls `DD` `DD` `DD`. Then a player earns $100 doubled three times, which is $800\.
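In other words, each diamond multiplies the final prize by two, which is easy to express as arithmetic. A sketch with hypothetical variables (not the `score` function we are about to write):
```
prize <- 80         # base prize for three sevens
diamonds <- 1       # 7 7 DD counts as three sevens plus one diamond
prize * 2^diamonds
## 160
100 * 2^3           # DD DD DD: the $100 prize doubled once per diamond
## 800
```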
Table 9\.1: Each play of the slot machine costs $1\. A player’s symbols determine how much they win. Diamonds (`DD`) are wild, and each diamond doubles the final prize. \* \= any symbol.
| Combination | Prize($) |
| --- | --- |
| `DD DD DD` | 100 |
| `7 7 7` | 80 |
| `BBB BBB BBB` | 40 |
| `BB BB BB` | 25 |
| `B B B` | 10 |
| `C C C` | 10 |
| Any combination of bars | 5 |
| `C C *` | 5 |
| `C * C` | 5 |
| `* C C` | 5 |
| `C * *` | 2 |
| `* C *` | 2 |
| `* * C` | 2 |
To create your `play` function, you will need to write a program that can take the output of `get_symbols` and calculate the correct prize based on Table [9\.1](programs.html#tab:prizes).
In R, programs are saved either as R scripts or as functions. We’ll save your program as a function named `score`. When you are finished, you will be able to use `score` to calculate a prize like this:
```
score(c("DD", "DD", "DD"))
## 800
```
After that it will be easy to create the full slot machine, like this:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
You may notice that `play` calls a new function, `print`. The `print` command prints its output to the console window even when R calls it from within a function, which makes `print` a useful way to display messages from within the body of a function. Here it helps `play` display the three slot machine symbols, since they do not get returned by the last line of the function.
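A quick illustration with a hypothetical function:
```
verbose_add <- function(a, b) {
  print("adding...")  # displayed immediately, even from inside the function
  a + b               # returned by the function (and auto-printed at the command line)
}
verbose_add(1, 2)
## "adding..."
## 3
```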
In [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice), I encouraged you to write all of your R code in an R script, a text file where you can compose and save code. That advice will become very important as you work through this chapter. Remember that you can open an R script in RStudio by going to the menu bar and clicking on File \> New File \> R Script.
9\.1 Strategy
-------------
Scoring slot\-machine results is a complex task that will require a complex algorithm. You can make this, and other coding tasks, easier by using a simple strategy:
* Break complex tasks into simple subtasks.
* Use concrete examples.
* Describe your solutions in English, then convert them to R.
Let’s start by looking at how you can divide a program into subtasks that are simple to work with.
A program is a set of step\-by\-step instructions for your computer to follow. Taken together, these instructions may accomplish something very sophisticated. Taken apart, each individual step will likely be simple and straightforward.
You can make coding easier by identifying the individual steps or subtasks within your program. You can then work on each subtask separately. If a subtask seems complicated, try to divide it again into subtasks that are even simpler. You can often reduce an R program into subtasks so simple that each can be performed with a preexisting function.
R programs contain two types of subtasks: sequential steps and parallel cases.
### 9\.1\.1 Sequential Steps
One way to subdivide a program is into a series of sequential steps. The `play` function takes this approach, as shown in Figure [9\.1](programs.html#fig:subdivide1). First, it generates three symbols (step 1\), then it displays them in the console window (step 2\), and then it scores them (step 3\):
```
play <- function() {
# step 1: generate symbols
symbols <- get_symbols()
# step 2: display the symbols
print(symbols)
# step 3: score the symbols
score(symbols)
}
```
To have R execute steps in sequence, place the steps one after another in an R script or function body.
Figure 9\.1: The play function uses a series of steps.
### 9\.1\.2 Parallel Cases
Another way to divide a task is to spot groups of similar cases within the task. Some tasks require different algorithms for different groups of input. If you can identify those groups, you can work out their algorithms one at a time.
For example, `score` will need to calculate the prize one way if `symbols` contains three of a kind (In that case, `score` will need to match the common symbol to a prize). `score` will need to calculate the prize a second way if the symbols are all bars (In that case, `score` can just assign a prize of $5\). And, finally, `score` will need to calculate the prize in a third way if the symbols do not contain three of a kind or all bars (In that case, `score` must count the number of cherries present). `score` will never use all three of these algorithms at once; it will always choose just one algorithm to run based on the combination of symbols.
Diamonds complicate all of this because diamonds can be treated as wild cards. Let’s ignore that for now and focus on the simpler case where diamonds double the prize but are not wilds. `score` can double the prize as necessary after it runs one of the following algorithms, as shown in Figure [9\.2](programs.html#fig:subdivide2).
Adding the `score` cases to the `play` steps reveals a strategy for the complete slot machine program, as shown in Figure [9\.3](programs.html#fig:subdivide3).
We’ve already solved the first few steps in this strategy. Our program can get three slot machine symbols with the `get_symbols` function. Then it can display the symbols with the `print` function. Now let’s examine how the program can handle the parallel score cases.
Figure 9\.2: The score function must distinguish between parallel cases.
Figure 9\.3: The complete slot machine simulation will involve subtasks that are arranged both in series and in parallel.
9\.2 if Statements
------------------
Linking cases together in parallel requires a bit of structure; your program faces a fork in the road whenever it must choose between cases. You can help the program navigate this fork with an `if` statement.
An `if` statement tells R to do a certain task for a certain case. In English you would say something like, “If this is true, do that.” In R, you would say:
```
if (this) {
that
}
```
The `this` object should be a logical test or an R expression that evaluates to a single `TRUE` or `FALSE`. If `this` evaluates to `TRUE`, R will run all of the code that appears between the braces that follow the `if` statement (i.e., between the `{` and `}` symbols). If `this` evaluates to `FALSE`, R will skip the code between the braces without running it.
For example, you could write an `if` statement that ensures some object, `num`, is positive:
```
if (num < 0) {
num <- num * -1
}
```
If `num < 0` is `TRUE`, R will multiply `num` by negative one, which will make `num` positive:
```
num <- -2
if (num < 0) {
num <- num * -1
}
num
## 2
```
If `num < 0` is `FALSE`, R will do nothing and `num` will remain as it is—positive (or zero):
```
num <- 4
if (num < 0) {
num <- num * -1
}
num
## 4
```
The condition of an `if` statement must evaluate to a *single* `TRUE` or `FALSE`. If the condition creates a vector of `TRUE`s and `FALSE`s (which is easier to make than you may think), your `if` statement will print a warning message and use only the first element of the vector. Remember that you can condense vectors of logical values to a single `TRUE` or `FALSE` with the functions `any` and `all`.
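For example, here is a quick sketch of how `any` and `all` condense a logical vector into a single value (the vector `vec` is just an illustration):

```
vec <- c(1, -2, 3)
vec < 0
## FALSE  TRUE FALSE
any(vec < 0) # TRUE if at least one element passes the test
## TRUE
all(vec < 0) # TRUE only if every element passes the test
## FALSE
```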
You don’t have to limit your `if` statements to a single line of code; you can include as many lines as you like between the braces. For example, the following code uses many lines to ensure that `num` is positive. The additional lines print some informative statements if `num` begins as a negative number. R will skip the entire code block—`print` statements and all—if `num` begins as a positive number:
```
num <- -1
if (num < 0) {
print("num is negative.")
print("Don't worry, I'll fix it.")
num <- num * -1
print("Now num is positive.")
}
## "num is negative."
## "Don't worry, I'll fix it."
## "Now num is positive."
num
## 1
```
Try the following quizzes to develop your understanding of `if` statements.
**Exercise 9\.1 (Quiz A)** What will this return?
```
x <- 1
if (3 == 3) {
x <- 2
}
x
```
*Solution.* The code will return the number 2\. `x` begins as 1, and then R encounters the `if` statement. Since the condition evaluates to `TRUE`, R will run `x <- 2`, changing the value of `x`.
**Exercise 9\.2 (Quiz B)** What will this return?
```
x <- 1
if (TRUE) {
x <- 2
}
x
```
*Solution.* This code will also return the number 2\. It works the same as the code in Quiz A, except the condition in this statement is already `TRUE`. R doesn’t even need to evaluate it. As a result, the code inside the `if` statement will be run, and `x` will be set to 2\.
**Exercise 9\.3 (Quiz C)** What will this return?
```
x <- 1
if (x == 1) {
x <- 2
if (x == 1) {
x <- 3
}
}
x
```
*Solution.* Once again, the code will return the number 2\. `x` starts out as 1, and the condition of the first `if` statement will evaluate to `TRUE`, which causes R to run the code in the body of the `if` statement. First, R sets `x` equal to 2, then R evaluates the second `if` statement, which is in the body of the first. This time `x == 1` will evaluate to `FALSE` because `x` now equals 2\. As a result, R ignores `x <- 3` and exits both `if` statements.
9\.3 else Statements
--------------------
`if` statements tell R what to do when your condition is *true*, but you can also tell R what to do when the condition is *false*. `else` is a counterpart to `if` that extends an `if` statement to include a second case. In English, you would say, “If this is true, do plan A; else do plan B.” In R, you would say:
```
if (this) {
Plan A
} else {
Plan B
}
```
When `this` evaluates to `TRUE`, R will run the code in the first set of braces, but not the code in the second. When `this` evaluates to `FALSE`, R will run the code in the second set of braces, but not the first. You can use this arrangement to cover all of the possible cases. For example, you could write some code that rounds a decimal to the nearest integer.
Start with a decimal:
```
a <- 3.14
```
Then isolate the decimal component with `trunc`:
```
dec <- a - trunc(a)
dec
## 0.14
```
`trunc` takes a number and returns only the portion of the number that appears to the left of the decimal place (i.e., the integer part of the number).
`a - trunc(a)` is a convenient way to return the decimal part of `a`.
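Note that `trunc` rounds toward zero, which makes it different from `floor` for negative numbers; a quick sketch:

```
trunc(3.14)
## 3
trunc(-3.14) # floor(-3.14) would return -4
## -3
```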
Then use an `if else` tree to round the number (either up or down):
```
if (dec >= 0.5) {
a <- trunc(a) + 1
} else {
a <- trunc(a)
}
a
## 3
```
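If you expect to reuse this logic, you can wrap it in a function. Here is a minimal sketch for nonnegative inputs; the name `round_half_up` is my own, not a base R function:

```
round_half_up <- function(a) {
  dec <- a - trunc(a)
  if (dec >= 0.5) {
    trunc(a) + 1
  } else {
    trunc(a)
  }
}
round_half_up(3.14)
## 3
round_half_up(3.5)
## 4
```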
If your situation has more than two mutually exclusive cases, you can string multiple `if` and `else` statements together by adding a new `if` statement immediately after `else`. For example:
```
a <- 1
b <- 1
if (a > b) {
print("A wins!")
} else if (a < b) {
print("B wins!")
} else {
print("Tie.")
}
## "Tie."
```
R will work through the `if` conditions until one evaluates to `TRUE`, then R will ignore any remaining `if` and `else` clauses in the tree. If no conditions evaluate to `TRUE`, R will run the final `else` statement.
If two `if` statements describe mutually exclusive events, it is better to join the `if` statements with an `else if` than to list them separately. This lets R ignore the second `if` statement whenever the first returns a `TRUE`, which saves work.
You can use `if` and `else` to link the subtasks in your slot\-machine function. Open a fresh R script, and copy this code into it. The code will be the skeleton of our final `score` function. Compare it to the flow chart for `score` in Figure [9\.2](programs.html#fig:subdivide2):
```
if ( # Case 1: all the same <1>) {
prize <- # look up the prize <3>
} else if ( # Case 2: all bars <2> ) {
prize <- # assign $5 <4>
} else {
# count cherries <5>
prize <- # calculate a prize <7>
}
# count diamonds <6>
# double the prize if necessary <8>
```
Our skeleton is rather incomplete; there are many sections that are just code comments instead of real code. However, we’ve reduced the program to eight simple subtasks:
**\<1\>** \- Test whether the symbols are three of a kind.
**\<2\>** \- Test whether the symbols are all bars.
**\<3\>** \- Look up the prize for three of a kind based on the common symbol.
**\<4\>** \- Assign a prize of $5\.
**\<5\>** \- Count the number of cherries.
**\<6\>** \- Count the number of diamonds.
**\<7\>** \- Calculate a prize based on the number of cherries.
**\<8\>** \- Adjust the prize for diamonds.
If you like, you can reorganize your flow chart around these tasks, as in Figure [9\.4](programs.html#fig:subdivide4). The chart will describe the same strategy, but in a more precise way. I’ll use a diamond shape to symbolize an `if else` decision.
Figure 9\.4: score can navigate three cases with two if else decisions. We can also break some of our tasks into two steps.
Now we can work through the subtasks one at a time, adding R code to the `if` tree as we go. Each subtask will be easy to solve if you set up a concrete example to work with and try to describe a solution in English before coding in R.
The first subtask asks you to test whether the symbols are three of a kind. How should you begin writing the code for this subtask?
You know that the final `score` function will look something like this:
```
score <- function(symbols) {
# calculate a prize
prize
}
```
Its argument, `symbols`, will be the output of `get_symbols`, a vector that contains three character strings. You could start writing `score` as I have written it, by defining an object named `score` and then slowly filling in the body of the function. However, this would be a bad idea. The eventual function will have eight separate parts, and it will not work correctly until *all* of those parts are written (and themselves work correctly). This means you would have to write the entire `score` function before you could test any of the subtasks. If `score` doesn’t work (which is very likely), you will not know which subtask needs to be fixed.
You can save yourself time and headaches if you focus on one subtask at a time. For each subtask, create a concrete example that you can test your code on. For example, you know that `score` will need to work on a vector named `symbols` that contains three character strings. If you make a real vector named `symbols`, you can run the code for many of your subtasks on the vector as you go:
```
symbols <- c("7", "7", "7")
```
If a piece of code does not work on `symbols`, you will know that you need to fix it before you move on. You can change the value of `symbols` from subtask to subtask to ensure that your code works in every situation:
```
symbols <- c("B", "BB", "BBB")
symbols <- c("C", "DD", "0")
```
Only combine your subtasks into a `score` function once each subtask works on a concrete example. If you follow this plan, you will spend more time using your functions and less time trying to figure out why they do not work.
After you set up a concrete example, try to describe how you will do the subtask in English. The more precisely you can describe your solution, the easier it will be to write your R code.
Our first subtask asks us to “test whether the symbols are three of a kind.” This phrase does not suggest any useful R code to me. However, I could describe a more precise test for three of a kind: three symbols will be the same if the first symbol is equal to the second and the second symbol is equal to the third. Or, even more precisely:
*A vector named `symbols` will contain three of the same symbol if the first element of `symbols` is equal to the second element of `symbols` and the second element of `symbols` is equal to the third element of `symbols`*.
**Exercise 9\.4 (Write a Test)** Turn the preceding statement into a logical test written in R. Use your knowledge of logical tests, Boolean operators, and subsetting from [R Notation](r-notation.html#r-notation). The test should work with the vector `symbols` and return a `TRUE` *if and only if* each element in `symbols` is the same. Be sure to test your code on `symbols`.
*Solution.* Here are a couple of ways to test that `symbols` contains three of the same symbol. The first method parallels the English suggestion above, but there are other ways to do the same test. There is no right or wrong answer, so long as your solution works, which is easy to check because you’ve created a vector named `symbols`:
```
symbols
## "7" "7" "7"
symbols[1] == symbols[2] & symbols[2] == symbols[3]
## TRUE
symbols[1] == symbols[2] & symbols[1] == symbols[3]
## TRUE
all(symbols == symbols[1])
## TRUE
```
As your vocabulary of R functions broadens, you’ll think of more ways to do basic tasks. One method that I like for checking three of a kind is:
```
length(unique(symbols)) == 1
## TRUE
```
The `unique` function returns every unique term that appears in a vector. If your `symbols` vector contains three of a kind (i.e., one unique term that appears three times), then `unique(symbols)` will return a vector of length `1`.
Now that you have a working test, you can add it to your slot\-machine script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
if (same) {
prize <- # look up the prize
} else if ( # Case 2: all bars ) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
`&&` and `||` behave like `&` and `|` but can sometimes be more efficient. The double operators will not evaluate the second test in a pair of tests if the first test makes the result clear. For example, if `symbols[1]` does not equal `symbols[2]` in the next expression, `&&` will not evaluate `symbols[2] == symbols[3]`; it can immediately return a `FALSE` for the whole expression (because `FALSE & TRUE` and `FALSE & FALSE` both evaluate to `FALSE`). This efficiency can speed up your programs; however, double operators are not appropriate everywhere. `&&` and `||` are not vectorized, which means they can only handle a single logical test on each side of the operator.
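You can see the short\-circuiting directly by pairing a test with an expression that would raise an error if R evaluated it; a quick sketch:

```
FALSE && stop("never evaluated") # && stops at the first FALSE
## FALSE
TRUE || stop("never evaluated") # || stops at the first TRUE
## TRUE
```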
The second prize case occurs when all the symbols are a type of bar, for example, `B`, `BB`, and `BBB`. Let’s begin by creating a concrete example to work with:
```
symbols <- c("B", "BBB", "BB")
```
**Exercise 9\.5 (Test for All Bars)** Use R’s logical and Boolean operators to write a test that will determine whether a vector named `symbols` contains only symbols that are a type of bar. Check whether your test works with our example `symbols` vector. Remember to describe how the test should work in English, and then convert the solution to R.
*Solution.* As with many things in R, there are multiple ways to test whether `symbols` contains all bars. For example, you could write a very long test that uses multiple Boolean operators, like this:
```
symbols[1] == "B" | symbols[1] == "BB" | symbols[1] == "BBB" &
symbols[2] == "B" | symbols[2] == "BB" | symbols[2] == "BBB" &
symbols[3] == "B" | symbols[3] == "BB" | symbols[3] == "BBB"
## TRUE
```
However, this is not a very efficient solution, because R has to run nine logical tests (and you have to type them). You can often replace multiple `|` operators with a single `%in%`. Also, you can check that a test is true for each element in a vector with `all`. These two changes shorten the preceding code to:
```
all(symbols %in% c("B", "BB", "BBB"))
## TRUE
```
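Note that `%in%` by itself is vectorized: it returns one `TRUE` or `FALSE` per element, which is why `all` is needed to collapse the result into a single value:

```
symbols %in% c("B", "BB", "BBB")
## TRUE TRUE TRUE
```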
Let’s add this code to our script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
prize <- # look up the prize
} else if (all(bars)) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
You may have noticed that I split this test up into two steps, `bars` and `all(bars)`. That’s just a matter of personal preference. Wherever possible, I like to write code that can be read like prose, with function and object names conveying what they do.
You also may have noticed that our test for Case 2 will capture some symbols that should be in Case 1 because they contain three of a kind:
```
symbols <- c("B", "B", "B")
all(symbols %in% c("B", "BB", "BBB"))
## TRUE
```
That won’t be a problem, however, because we’ve connected our cases with `else if` in the `if` tree. As soon as R comes to a case that evaluates to `TRUE`, it will skip over the rest of the tree. Think of it this way: each `else` tells R to only run the code that follows it *if none of the previous conditions have been met*. So when we have three of the same type of bar, R will evaluate the code for Case 1 and then skip the code for Case 2 (and Case 3\).
Our next subtask is to assign a prize for `symbols`. When the `symbols` vector contains three of the same symbol, the prize will depend on which symbol there are three of. If there are three `DD`s, the prize will be $100; if there are three `7`s, the prize will be $80; and so on.
This suggests another `if` tree. You could assign a prize with some code like this:
```
if (same) {
symbol <- symbols[1]
if (symbol == "DD") {
    prize <- 100
} else if (symbol == "7") {
prize <- 80
} else if (symbol == "BBB") {
prize <- 40
} else if (symbol == "BB") {
    prize <- 25
} else if (symbol == "B") {
prize <- 10
} else if (symbol == "C") {
prize <- 10
} else if (symbol == "0") {
prize <- 0
}
}
```
While this code will work, it is a bit long to write and read, and it may require R to perform multiple logical tests before delivering the correct prize. We can do better with a different method.
9\.4 Lookup Tables
------------------
Very often in R, the simplest way to do something will involve subsetting. How could you use subsetting here? Since you know the exact relationship between the symbols and their prizes, you can create a vector that captures this information. This vector can store symbols as names and prize values as elements:
```
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
payouts
## DD 7 BBB BB B C 0
## 100 80 40 25 10 10 0
```
Now you can extract the correct prize for any symbol by subsetting the vector with the symbol’s name:
```
payouts["DD"]
## DD
## 100
payouts["B"]
## B
## 10
```
If you want to leave behind the symbol’s name when subsetting, you can run the `unname` function on the output:
```
unname(payouts["DD"])
## 100
```
`unname` returns a copy of an object with the names attribute removed.
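Double\-bracket subsetting offers an equivalent shortcut here, because `[[` extracts a single element without its name:

```
payouts[["DD"]]
## 100
```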
`payouts` is a type of *lookup table*, an R object that you can use to look up values. Subsetting `payouts` provides a simple way to find the prize for a symbol. It doesn’t take many lines of code, and it does the same amount of work whether your symbol is `DD` or `0`. You can create lookup tables in R by creating named objects that can be subsetted in clever ways.
Sadly, our method is not quite automatic; we need to tell R which symbol to look up in `payouts`. Or do we? What would happen if you subsetted `payouts` by `symbols[1]`? Give it a try:
```
symbols <- c("7", "7", "7")
symbols[1]
## "7"
payouts[symbols[1]]
## 7
## 80
symbols <- c("C", "C", "C")
payouts[symbols[1]]
## C
## 10
```
You don’t need to know the exact symbol to look up because you can tell R to look up whichever symbol happens to be in `symbols`. You can find this symbol with `symbols[1]`, `symbols[2]`, or `symbols[3]`, because each contains the same symbol in this case. You now have a simple automated way to calculate the prize when `symbols` contains three of a kind. Let’s add it to our code and then look at Case 2:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
Case 2 occurs whenever the symbols are all bars. In that case, the prize will be $5, which is easy to assign:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
Now we can work on the last case. Here, you’ll need to know how many cherries are in `symbols` before you can calculate a prize.
**Exercise 9\.6 (Find C’s)** How can you tell which elements of a vector named `symbols` are a `C`? Devise a test and try it out.
**Challenge**
How might you count the number of `C`s in a vector named `symbols`? Remember R’s coercion rules.
*Solution.* As always, let’s work with a real example:
```
symbols <- c("C", "DD", "C")
```
One way to test for cherries would be to check which, if any, of the symbols are a `C`:
```
symbols == "C"
## TRUE FALSE TRUE
```
It’d be even more useful to count how many of the symbols are cherries. You can do this with `sum`. Since `sum` expects numeric input, not logical, R will coerce the `TRUE`s and `FALSE`s to `1`s and `0`s before doing the summation. As a result, `sum` will return the number of `TRUE`s, which is also the number of cherries:
```
sum(symbols == "C")
## 2
```
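The coercion step is worth seeing on its own; a quick sketch:

```
as.numeric(symbols == "C")
## 1 0 1
sum(c(TRUE, FALSE, TRUE))
## 2
```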
You can use the same method to count the number of diamonds in `symbols`:
```
sum(symbols == "DD")
## 1
```
Let’s add both of these subtasks to the program skeleton:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- # calculate a prize
}
diamonds <- sum(symbols == "DD")
# double the prize if necessary
```
Since Case 3 appears further down the `if` tree than Cases 1 and 2, the code in Case 3 will only be applied to players that do not have three of a kind or all bars. According to the slot machine’s payout scheme, these players will win $5 if they have two cherries and $2 if they have one cherry. If the player has no cherries, she gets a prize of $0\. We don’t need to worry about three cherries because that outcome is already covered in Case 1\.
As in Case 1, you could write an `if` tree that handles each combination of cherries, but just like in Case 1, this would be an inefficient solution:
```
if (cherries == 2) {
prize <- 5
} else if (cherries == 1) {
prize <- 2
} else {
  prize <- 0
}
```
Again, I think the best solution will involve subsetting. If you are feeling ambitious, you can try to work this solution out on your own, but you will learn just as quickly by mentally working through the following proposed solution.
We know that our prize should be $0 if we have no cherries, $2 if we have one cherry, and $5 if we have two cherries. You can create a vector that contains this information. This will be a very simple lookup table:
```
c(0, 2, 5)
```
Now, like in Case 1, you can subset the vector to retrieve the correct prize. In this case, the prizes aren’t identified by a symbol name, but by the number of cherries present. Do we have that information? Yes, it is stored in `cherries`. We can use basic integer subsetting to get the correct prize from the prior lookup table, for example, `c(0, 2, 5)[1]`.
`cherries` isn’t exactly suited for integer subsetting because it could contain a zero, but that’s easy to fix. We can subset with `cherries + 1`. Now when `cherries` equals zero, we have:
```
cherries + 1
## 1
c(0, 2, 5)[cherries + 1]
## 0
```
When `cherries` equals one, we have:
```
cherries + 1
## 2
c(0, 2, 5)[cherries + 1]
## 2
```
And when `cherries` equals two, we have:
```
cherries + 1
## 3
c(0, 2, 5)[cherries + 1]
## 5
```
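You can also check all three cases at once, because single\-bracket subsetting is vectorized over its index; a quick sketch:

```
c(0, 2, 5)[c(0, 1, 2) + 1]
## 0 2 5
```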
Examine these solutions until you are satisfied that they return the correct prize for each number of cherries. Then add the code to your script, as follows:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
diamonds <- sum(symbols == "DD")
# double the prize if necessary
```
**Lookup Tables Versus if Trees**
This is the second time we’ve created a lookup table to avoid writing an `if` tree. Why is this technique helpful and why does it keep appearing? Many `if` trees in R are essential. They provide a useful way to tell R to use different algorithms in different cases. However, `if` trees are not appropriate everywhere.
`if` trees have a couple of drawbacks. First, they require R to run multiple tests as it works its way down the `if` tree, which can create unnecessary work. Second, as you’ll see in [Speed](speed.html#speed), it can be difficult to use `if` trees in vectorized code, a style of code that takes advantage of R’s programming strengths to create fast programs. Lookup tables do not suffer from either of these drawbacks.
You won’t be able to replace every `if` tree with a lookup table, nor should you. However, you can usually use lookup tables to avoid assigning variables with `if` trees. As a general rule, use an `if` tree if each branch of the tree runs different *code*. Use a lookup table if each branch of the tree only assigns a different *value*.
To convert an `if` tree to a lookup table, identify the values to be assigned and store them in a vector. Next, identify the selection criteria used in the conditions of the `if` tree. If the conditions use character strings, give your vector names and use name\-based subsetting. If the conditions use integers, use integer\-based subsetting.
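Here is a sketch of both conversions; the object names are just illustrations:

```
# character criteria: store values in a named vector, subset by name
prize_by_symbol <- c("DD" = 100, "7" = 80, "BBB" = 40)
prize_by_symbol["7"]
## 7
## 80

# integer criteria: store values by position, subset by index
# (add 1 when the criterion can be zero)
prize_by_count <- c(0, 2, 5)
prize_by_count[2 + 1]
## 5
```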
The final subtask is to double the prize once for every diamond present. This means that the final prize will be some multiple of the current prize. For example, if no diamonds are present, the prize will be:
```
prize * 1 # 1 = 2 ^ 0
```
If one diamond is present, it will be:
```
prize * 2 # 2 = 2 ^ 1
```
If two diamonds are present, it will be:
```
prize * 4 # 4 = 2 ^ 2
```
And if three diamonds are present, it will be:
```
prize * 8 # 8 = 2 ^ 3
```
Can you think of an easy way to handle this? How about something similar to these examples?
**Exercise 9\.7 (Adjust for Diamonds)** Write a method for adjusting `prize` based on `diamonds`. Describe a solution in English first, and then write your code.
*Solution.* Here is a concise solution inspired by the previous pattern. The adjusted prize will equal:
```
prize * 2 ^ diamonds
```
which gives us our final `score` script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
9\.5 Code Comments
------------------
You now have a working score script that you can turn into a function. Before you save your script, though, consider adding comments to your code with a `#`. Comments can make your code easier to understand by explaining *why* the code does what it does. You can also use comments to break long programs into scannable chunks. For example, I would include three comments in the `score` code:
```
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
Now that each part of your code works, you can wrap it into a function with the methods you learned in [Writing Your Own Functions](basics.html#write-functions). Either use RStudio’s Extract Function option in the menu bar under Code, or use the `function` function. Ensure that the last line of the function returns a result (it does), and identify any arguments used by your function. Often the concrete examples that you used to test your code, like `symbols`, will become the arguments of your function. Run the following code to start using the `score` function:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
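Before moving on, it is worth sanity checking `score` against the concrete examples we used while writing it:

```
score(c("7", "7", "7")) # three of a kind
## 80
score(c("B", "BB", "BBB")) # all bars
## 5
score(c("C", "DD", "C")) # two cherries, doubled by one diamond
## 10
```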
Once you have defined the `score` function, the `play` function will work as well:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
Now it is easy to play the slot machine:
```
play()
## "0" "BB" "B"
## 0
play()
## "DD" "0" "B"
## 0
play()
## "BB" "BB" "B"
## 5
```
9\.6 Summary
------------
An R program is a set of instructions for your computer to follow that has been organized into a sequence of steps and cases. This may make programs seem simple, but don’t be fooled: you can create complicated results with the right combination of simple steps (and cases).
As a programmer, you are more likely to be fooled in the opposite way. A program may seem impossible to write when you know that it must do something impressive. Do not panic in these situations. Divide the job before you into simple tasks, and then divide the tasks again. You can visualize the relationship between tasks with a flow chart if it helps. Then work on the subtasks one at a time. Describe solutions in English, then convert them to R code. Test each solution against concrete examples as you go. Once each of your subtasks works, combine your code into a function that you can share and reuse.
R provides tools that can help you do this. You can manage cases with `if` and `else` statements. You can create a lookup table with objects and subsetting. You can add code comments with `#`. And you can save your programs as a function with `function`.
Things often go wrong when people write programs. It will be up to you to find the source of any errors that occur and to fix them. It should be easy to find the source of your errors if you use a stepwise approach to writing functions, writing, and then testing, one bit at a time. However, if the source of an error eludes you, or you find yourself working with large chunks of untested code, consider using R’s built\-in debugging tools, described in [Debugging R Code](debug.html#debug).
The next two chapters will teach you more tools that you can use in your programs. As you master these tools, you will find it easier to write R programs that let you do whatever you wish to your data. In [S3](s3.html#s3), you will learn how to use R’s S3 system, an invisible hand that shapes many parts of R. You will use the system to build a custom class for your slot machine output, and you will tell R how to display objects that have your class.
```
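Before moving on, it is worth convincing yourself that the doubling step behaves sensibly for every possible diamond count. Because `*` and `^` are vectorized, you can sketch all four cases at once; the base prize of 10 below is just an illustrative value:
```
prize <- 10
prize * 2 ^ (0:3) # zero through three diamonds
## 10 20 40 80
```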
9\.5 Code Comments
------------------
You now have a working score script that you can save to a function. Before you save your script, though, consider adding comments to your code with a `#`. Comments can make your code easier to understand by explaining *why* the code does what it does. You can also use comments to break long programs into scannable chunks. For example, I would include three comments in the `score` code:
```
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
Now that each part of your code works, you can wrap it into a function with the methods you learned in [Writing Your Own Functions](basics.html#write-functions). Either use RStudio’s Extract Function option in the menu bar under Code, or use the `function` function. Ensure that the last line of the function returns a result (it does), and identify any arguments used by your function. Often the concrete examples that you used to test your code, like `symbols`, will become the arguments of your function. Run the following code to start using the `score` function:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
Once you have defined the `score` function, the `play` function will work as well:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
Now it is easy to play the slot machine:
```
play()
## "0" "BB" "B"
## 0
play()
## "DD" "0" "B"
## 0
play()
## "BB" "BB" "B"
## 25
```
9\.6 Summary
------------
An R program is a set of instructions for your computer to follow that has been organized into a sequence of steps and cases. This may make programs seem simple, but don’t be fooled: you can create complicated results with the right combination of simple steps (and cases).
As a programmer, you are more likely to be fooled in the opposite way. A program may seem impossible to write when you know that it must do something impressive. Do not panic in these situations. Divide the job before you into simple tasks, and then divide the tasks again. You can visualize the relationship between tasks with a flow chart if it helps. Then work on the subtasks one at a time. Describe solutions in English, then convert them to R code. Test each solution against concrete examples as you go. Once each of your subtasks works, combine your code into a function that you can share and reuse.
R provides tools that can help you do this. You can manage cases with `if` and `else` statements. You can create a lookup table with objects and subsetting. You can add code comments with `#`. And you can save your programs as a function with `function`.
Things often go wrong when people write programs. It will be up to you to find the source of any errors that occur and to fix them. It should be easy to find the source of your errors if you use a stepwise approach to writing functions, writing—and then testing—one bit at a time. However, if the source of an error eludes you, or you find yourself working with large chunks of untested code, consider using R’s built\-in debugging tools, described in [Debugging R Code](debug.html#debug).
The next two chapters will teach you more tools that you can use in your programs. As you master these tools, you will find it easier to write R programs that let you do whatever you wish to your data. In [S3](s3.html#s3), you will learn how to use R’s S3 system, an invisible hand that shapes many parts of R. You will use the system to build a custom class for your slot machine output, and you will tell R how to display objects that have your class.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/programs.html |
9 Programs
==========
In this chapter, you will build a real, working slot machine that you can play by running an R function. When you’re finished, you’ll be able to play it like this:
```
play()
## 0 0 DD
## $0
play()
## 7 7 7
## $80
```
The `play` function will need to do two things. First, it will need to randomly generate three symbols; and, second, it will need to calculate a prize based on those symbols.
The first step is easy to simulate. You can randomly generate three symbols with the `sample` function—just like you randomly “rolled” two dice in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice). The following function generates three symbols from a group of common slot machine symbols: diamonds (`DD`), sevens (`7`), triple bars (`BBB`), double bars (`BB`), single bars (`B`), cherries (`C`), and zeroes (`0`). The symbols are selected randomly, and each symbol appears with a different probability:
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
}
```
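`sample` will rescale whatever weights you give it, but it is still reassuring to confirm that these probabilities describe the complete wheel, that is, that they sum to one. A quick sanity check:
```
sum(c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
## 1
```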
You can use `get_symbols` to generate the symbols used in your slot machine:
```
get_symbols()
## "BBB" "0" "C"
get_symbols()
## "0" "0" "0"
get_symbols()
## "7" "0" "B"
```
`get_symbols` uses the probabilities observed in a group of video lottery terminals from Manitoba, Canada. These slot machines became briefly controversial in the 1990s, when a reporter decided to test their payout rate. The machines appeared to pay out only 40 cents on the dollar, even though the manufacturer claimed they would pay out 92 cents on the dollar. The original data collected on the machines and a description of the controversy are available online in [a journal article by W. John Braun](http://bit.ly/jse_Braun). The controversy died down when additional testing showed that the manufacturer was correct.
The Manitoba slot machines use the complicated payout scheme shown in Table [9\.1](programs.html#tab:prizes). A player will win a prize if he gets:
* Three of the same type of symbol (except for three zeroes)
* Three bars (of mixed variety)
* One or more cherries
Otherwise, the player receives no prize.
The monetary value of the prize is determined by the exact combination of symbols and is further modified by the presence of diamonds. Diamonds are treated like “wild cards,” which means they can be considered any other symbol if it would increase a player’s prize. For example, a player who rolls `7` `7` `DD` would earn a prize for getting three sevens. There is one exception to this rule, however: a diamond cannot be considered a cherry unless the player also gets one real cherry. This prevents a dud roll like `0` `DD` `0` from being scored as `0` `C` `0`.
Diamonds are also special in another way. Every diamond that appears in a combination doubles the amount of the final prize. So `7` `7` `DD` would actually be scored *higher* than `7` `7` `7`. Three sevens would earn you $80, but two sevens and a diamond would earn you $160\. One seven and two diamonds would be even better, resulting in a prize that has been doubled twice, or $320\. A jackpot occurs when a player rolls `DD` `DD` `DD`. Then a player earns $100 doubled three times, which is $800\.
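You can verify the jackpot arithmetic directly in R:
```
100 * 2 ^ 3
## 800
```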
Table 9\.1: Each play of the slot machine costs $1\. A player’s symbols determine how much they win. Diamonds (`DD`) are wild, and each diamond doubles the final prize. \* \= any symbol.
| Combination | Prize($) |
| --- | --- |
| `DD DD DD` | 100 |
| `7 7 7` | 80 |
| `BBB BBB BBB` | 40 |
| `BB BB BB` | 25 |
| `B B B` | 10 |
| `C C C` | 10 |
| Any combination of bars | 5 |
| `C C *` | 5 |
| `C * C` | 5 |
| `* C C` | 5 |
| `C * *` | 2 |
| `* C *` | 2 |
| `* * C` | 2 |
To create your `play` function, you will need to write a program that can take the output of `get_symbols` and calculate the correct prize based on Table [9\.1](programs.html#tab:prizes).
In R, programs are saved either as R scripts or as functions. We’ll save your program as a function named `score`. When you are finished, you will be able to use `score` to calculate a prize like this:
```
score(c("DD", "DD", "DD"))
## 800
```
After that it will be easy to create the full slot machine, like this:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
You may notice that `play` calls a new function, `print`. This will help `play` display the three slot machine symbols, since they do not get returned by the last line of the function. The `print` command prints its output to the console window even when R calls it from within a function, which makes it a useful way to display messages from within the body of a function.
In [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice), I encouraged you to write all of your R code in an R script, a text file where you can compose and save code. That advice will become very important as you work through this chapter. Remember that you can open an R script in RStudio by going to the menu bar and clicking on File \> New File \> R Script.
9\.1 Strategy
-------------
Scoring slot\-machine results is a complex task that will require a complex algorithm. You can make this, and other coding tasks, easier by using a simple strategy:
* Break complex tasks into simple subtasks.
* Use concrete examples.
* Describe your solutions in English, then convert them to R.
Let’s start by looking at how you can divide a program into subtasks that are simple to work with.
A program is a set of step\-by\-step instructions for your computer to follow. Taken together, these instructions may accomplish something very sophisticated. Taken apart, each individual step will likely be simple and straightforward.
You can make coding easier by identifying the individual steps or subtasks within your program. You can then work on each subtask separately. If a subtask seems complicated, try to divide it again into subtasks that are even simpler. You can often reduce an R program to subtasks so simple that each can be performed with a preexisting function.
R programs contain two types of subtasks: sequential steps and parallel cases.
### 9\.1\.1 Sequential Steps
One way to subdivide a program is into a series of sequential steps. The `play` function takes this approach, as shown in Figure [9\.1](programs.html#fig:subdivide1). First, it generates three symbols (step 1\), then it displays them in the console window (step 2\), and then it scores them (step 3\):
```
play <- function() {
# step 1: generate symbols
symbols <- get_symbols()
# step 2: display the symbols
print(symbols)
# step 3: score the symbols
score(symbols)
}
```
To have R execute steps in sequence, place the steps one after another in an R script or function body.
Figure 9\.1: The play function uses a series of steps.
### 9\.1\.2 Parallel Cases
Another way to divide a task is to spot groups of similar cases within the task. Some tasks require different algorithms for different groups of input. If you can identify those groups, you can work out their algorithms one at a time.
For example, `score` will need to calculate the prize one way if `symbols` contains three of a kind (In that case, `score` will need to match the common symbol to a prize). `score` will need to calculate the prize a second way if the symbols are all bars (In that case, `score` can just assign a prize of $5\). And, finally, `score` will need to calculate the prize in a third way if the symbols do not contain three of a kind or all bars (In that case, `score` must count the number of cherries present). `score` will never use all three of these algorithms at once; it will always choose just one algorithm to run based on the combination of symbols.
Diamonds complicate all of this because diamonds can be treated as wild cards. Let’s ignore that for now and focus on the simpler case where diamonds double the prize but are not wilds. `score` can double the prize as necessary after it runs one of the following algorithms, as shown in Figure [9\.2](programs.html#fig:subdivide2).
Adding the `score` cases to the `play` steps reveals a strategy for the complete slot machine program, as shown in Figure [9\.3](programs.html#fig:subdivide3).
We’ve already solved the first few steps in this strategy. Our program can get three slot machine symbols with the `get_symbols` function. Then it can display the symbols with the `print` function. Now let’s examine how the program can handle the parallel score cases.
Figure 9\.2: The score function must distinguish between parallel cases.
Figure 9\.3: The complete slot machine simulation will involve subtasks that are arranged both in series and in parallel.
9\.2 if Statements
------------------
Linking cases together in parallel requires a bit of structure; your program faces a fork in the road whenever it must choose between cases. You can help the program navigate this fork with an `if` statement.
An `if` statement tells R to do a certain task for a certain case. In English you would say something like, “If this is true, do that.” In R, you would say:
```
if (this) {
that
}
```
The `this` object should be a logical test or an R expression that evaluates to a single `TRUE` or `FALSE`. If `this` evaluates to `TRUE`, R will run all of the code that appears between the braces that follow the `if` statement (i.e., between the `{` and `}` symbols). If `this` evaluates to `FALSE`, R will skip the code between the braces without running it.
For example, you could write an `if` statement that ensures some object, `num`, is positive:
```
if (num < 0) {
num <- num * -1
}
```
If `num < 0` is `TRUE`, R will multiply `num` by negative one, which will make `num` positive:
```
num <- -2
if (num < 0) {
num <- num * -1
}
num
## 2
```
If `num < 0` is `FALSE`, R will do nothing and `num` will remain as it is—positive (or zero):
```
num <- 4
if (num < 0) {
num <- num * -1
}
num
## 4
```
The condition of an `if` statement must evaluate to a *single* `TRUE` or `FALSE`. If the condition creates a vector of `TRUE`s and `FALSE`s (which is easier to make than you may think), your `if` statement will print a warning message and use only the first element of the vector. Remember that you can condense vectors of logical values to a single `TRUE` or `FALSE` with the functions `any` and `all`.
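Here is a minimal sketch of that pitfall, along with the `any` and `all` remedies. (In R 4.2 and later, a condition of length greater than one is an error rather than a warning.)
```
vec <- c(1, -2, 3)
vec < 0
## FALSE TRUE FALSE
# if (vec < 0) {...} would consider only the first element, FALSE
any(vec < 0)
## TRUE
all(vec < 0)
## FALSE
```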
You don’t have to limit your `if` statements to a single line of code; you can include as many lines as you like between the braces. For example, the following code uses many lines to ensure that `num` is positive. The additional lines print some informative statements if `num` begins as a negative number. R will skip the entire code block—`print` statements and all—if `num` begins as a positive number:
```
num <- -1
if (num < 0) {
print("num is negative.")
print("Don't worry, I'll fix it.")
num <- num * -1
print("Now num is positive.")
}
## "num is negative."
## "Don't worry, I'll fix it."
## "Now num is positive."
num
## 1
```
Try the following quizzes to develop your understanding of `if` statements.
**Exercise 9\.1 (Quiz A)** What will this return?
```
x <- 1
if (3 == 3) {
x <- 2
}
x
```
*Solution.* The code will return the number 2\. `x` begins as 1, and then R encounters the `if` statement. Since the condition evaluates to `TRUE`, R will run `x <- 2`, changing the value of `x`.
**Exercise 9\.2 (Quiz B)** What will this return?
```
x <- 1
if (TRUE) {
x <- 2
}
x
```
*Solution.* This code will also return the number 2\. It works the same as the code in Quiz A, except the condition in this statement is already `TRUE`. R doesn’t even need to evaluate it. As a result, the code inside the `if` statement will be run, and `x` will be set to 2\.
**Exercise 9\.3 (Quiz C)** What will this return?
```
x <- 1
if (x == 1) {
x <- 2
if (x == 1) {
x <- 3
}
}
x
```
*Solution.* Once again, the code will return the number 2\. `x` starts out as 1, and the condition of the first `if` statement will evaluate to `TRUE`, which causes R to run the code in the body of the `if` statement. First, R sets `x` equal to 2, then R evaluates the second `if` statement, which is in the body of the first. This time `x == 1` will evaluate to `FALSE` because `x` now equals 2\. As a result, R ignores `x <- 3` and exits both `if` statements.
9\.3 else Statements
--------------------
`if` statements tell R what to do when your condition is *true*, but you can also tell R what to do when the condition is *false*. `else` is a counterpart to `if` that extends an `if` statement to include a second case. In English, you would say, “If this is true, do plan A; else do plan B.” In R, you would say:
```
if (this) {
Plan A
} else {
Plan B
}
```
When `this` evaluates to `TRUE`, R will run the code in the first set of braces, but not the code in the second. When `this` evaluates to `FALSE`, R will run the code in the second set of braces, but not the first. You can use this arrangement to cover all of the possible cases. For example, you could write some code that rounds a decimal to the nearest integer.
Start with a decimal:
```
a <- 3.14
```
Then isolate the decimal component with `trunc`:
```
dec <- a - trunc(a)
dec
## 0.14
```
`trunc` takes a number and returns only the portion of the number that appears to the left of the decimal place (i.e., the integer part of the number).
`a - trunc(a)` is a convenient way to return the decimal part of `a`.
Then use an `if else` tree to round the number (either up or down):
```
if (dec >= 0.5) {
a <- trunc(a) + 1
} else {
a <- trunc(a)
}
a
## 3
```
If your situation has more than two mutually exclusive cases, you can string multiple `if` and `else` statements together by adding a new `if` statement immediately after `else`. For example:
```
a <- 1
b <- 1
if (a > b) {
print("A wins!")
} else if (a < b) {
print("B wins!")
} else {
print("Tie.")
}
## "Tie."
```
R will work through the `if` conditions until one evaluates to `TRUE`, then R will ignore any remaining `if` and `else` clauses in the tree. If no conditions evaluate to `TRUE`, R will run the final `else` statement.
If two `if` statements describe mutually exclusive events, it is better to join the `if` statements with an `else if` than to list them separately. This lets R ignore the second `if` statement whenever the first returns a `TRUE`, which saves work.
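As a small sketch of this behavior, both conditions below are true when `x` is 5, but only the first branch runs:
```
x <- 5
if (x > 3) {
  print("greater than 3")
} else if (x > 0) {
  print("positive") # never reached when x > 3
}
## "greater than 3"
```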
You can use `if` and `else` to link the subtasks in your slot\-machine function. Open a fresh R script, and copy this code into it. The code will be the skeleton of our final `score` function. Compare it to the flow chart for `score` in Figure [9\.2](programs.html#fig:subdivide2):
```
if ( # Case 1: all the same <1>) {
prize <- # look up the prize <3>
} else if ( # Case 2: all bars <2> ) {
prize <- # assign $5 <4>
} else {
# count cherries <5>
prize <- # calculate a prize <7>
}
# count diamonds <6>
# double the prize if necessary <8>
```
Our skeleton is rather incomplete; there are many sections that are just code comments instead of real code. However, we’ve reduced the program to eight simple subtasks:
**\<1\>** \- Test whether the symbols are three of a kind.
**\<2\>** \- Test whether the symbols are all bars.
**\<3\>** \- Look up the prize for three of a kind based on the common symbol.
**\<4\>** \- Assign a prize of $5\.
**\<5\>** \- Count the number of cherries.
**\<6\>** \- Count the number of diamonds.
**\<7\>** \- Calculate a prize based on the number of cherries.
**\<8\>** \- Adjust the prize for diamonds.
If you like, you can reorganize your flow chart around these tasks, as in Figure [9\.4](programs.html#fig:subdivide4). The chart will describe the same strategy, but in a more precise way. I’ll use a diamond shape to symbolize an `if else` decision.
Figure 9\.4: score can navigate three cases with two if else decisions. We can also break some of our tasks into two steps.
Now we can work through the subtasks one at a time, adding R code to the `if` tree as we go. Each subtask will be easy to solve if you set up a concrete example to work with and try to describe a solution in English before coding in R.
The first subtask asks you to test whether the symbols are three of a kind. How should you begin writing the code for this subtask?
You know that the final `score` function will look something like this:
```
score <- function(symbols) {
# calculate a prize
prize
}
```
Its argument, `symbols`, will be the output of `get_symbols`, a vector that contains three character strings. You could start writing `score` as I have written it, by defining an object named `score` and then slowly filling in the body of the function. However, this would be a bad idea. The eventual function will have eight separate parts, and it will not work correctly until *all* of those parts are written (and themselves work correctly). This means you would have to write the entire `score` function before you could test any of the subtasks. If `score` doesn’t work—which is very likely—you will not know which subtask needs to be fixed.
You can save yourself time and headaches if you focus on one subtask at a time. For each subtask, create a concrete example that you can test your code on. For example, you know that `score` will need to work on a vector named `symbols` that contains three character strings. If you make a real vector named `symbols`, you can run the code for many of your subtasks on the vector as you go:
```
symbols <- c("7", "7", "7")
```
If a piece of code does not work on `symbols`, you will know that you need to fix it before you move on. You can change the value of `symbols` from subtask to subtask to ensure that your code works in every situation:
```
symbols <- c("B", "BB", "BBB")
symbols <- c("C", "DD", "0")
```
Only combine your subtasks into a `score` function once each subtask works on a concrete example. If you follow this plan, you will spend more time using your functions and less time trying to figure out why they do not work.
After you set up a concrete example, try to describe how you will do the subtask in English. The more precisely you can describe your solution, the easier it will be to write your R code.
Our first subtask asks us to “test whether the symbols are three of a kind.” This phrase does not suggest any useful R code to me. However, I could describe a more precise test for three of a kind: three symbols will be the same if the first symbol is equal to the second and the second symbol is equal to the third. Or, even more precisely:
*A vector named `symbols` will contain three of the same symbol if the first element of `symbols` is equal to the second element of `symbols` and the second element of `symbols` is equal to the third element of `symbols`*.
**Exercise 9\.4 (Write a Test)** Turn the preceding statement into a logical test written in R. Use your knowledge of logical tests, Boolean operators, and subsetting from [R Notation](r-notation.html#r-notation). The test should work with the vector `symbols` and return a `TRUE` *if and only if* each element in `symbols` is the same. Be sure to test your code on `symbols`.
*Solution.* Here are a couple of ways to test that `symbols` contains three of the same symbol. The first method parallels the English suggestion above, but there are other ways to do the same test. There is no right or wrong answer, so long as your solution works, which is easy to check because you’ve created a vector named `symbols`:
```
symbols
## "7" "7" "7"
symbols[1] == symbols[2] & symbols[2] == symbols[3]
## TRUE
symbols[1] == symbols[2] & symbols[1] == symbols[3]
## TRUE
all(symbols == symbols[1])
## TRUE
```
As your vocabulary of R functions broadens, you’ll think of more ways to do basic tasks. One method that I like for checking three of a kind is:
```
length(unique(symbols)) == 1
```
The `unique` function returns every unique term that appears in a vector. If your `symbols` vector contains three of a kind (i.e., one unique term that appears three times), then `unique(symbols)` will return a vector of length `1`.
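A quick check on our concrete example confirms the idea:
```
unique(symbols)
## "7"
length(unique(symbols)) == 1
## TRUE
```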
Now that you have a working test, you can add it to your slot\-machine script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
if (same) {
prize <- # look up the prize
} else if ( # Case 2: all bars ) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
`&&` and `||` behave like `&` and `|` but can sometimes be more efficient. The double operators will not evaluate the second test in a pair of tests if the first test makes the result clear. For example, if `symbols[1]` does not equal `symbols[2]` in the next expression, `&&` will not evaluate `symbols[2] == symbols[3]`; it can immediately return a `FALSE` for the whole expression (because `FALSE & TRUE` and `FALSE & FALSE` both evaluate to `FALSE`). This efficiency can speed up your programs; however, double operators are not appropriate everywhere. `&&` and `||` are not vectorized, which means they can only handle a single logical test on each side of the operator.
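Here is a minimal sketch of short\-circuiting. The helper function `loud_true` is hypothetical, written only for this illustration; it announces whenever it is evaluated:
```
loud_true <- function() {
  print("I was evaluated!")
  TRUE
}
FALSE && loud_true() # && stops early; loud_true() never runs
## FALSE
FALSE & loud_true() # a single & evaluates both sides
## "I was evaluated!"
## FALSE
```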
The second prize case occurs when all the symbols are a type of bar, for example, `B`, `BB`, and `BBB`. Let’s begin by creating a concrete example to work with:
```
symbols <- c("B", "BBB", "BB")
```
**Exercise 9\.5 (Test for All Bars)** Use R’s logical and Boolean operators to write a test that will determine whether a vector named `symbols` contains only symbols that are a type of bar. Check whether your test works with our example `symbols` vector. Remember to describe how the test should work in English, and then convert the solution to R.
*Solution.* As with many things in R, there are multiple ways to test whether `symbols` contains all bars. For example, you could write a very long test that uses multiple Boolean operators, like this:
```
(symbols[1] == "B" | symbols[1] == "BB" | symbols[1] == "BBB") &
(symbols[2] == "B" | symbols[2] == "BB" | symbols[2] == "BBB") &
(symbols[3] == "B" | symbols[3] == "BB" | symbols[3] == "BBB")
## TRUE
```
However, this is not a very efficient solution, because R has to run nine logical tests (and you have to type them). You can often replace multiple `|` operators with a single `%in%`. Also, you can check that a test is true for each element in a vector with `all`. These two changes shorten the preceding code to:
```
all(symbols %in% c("B", "BB", "BBB"))
## TRUE
```
Let’s add this code to our script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
prize <- # look up the prize
} else if (all(bars)) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
You may have noticed that I split this test up into two steps, `bars` and `all(bars)`. That’s just a matter of personal preference. Wherever possible, I like to write my code so it can be read with function and object names conveying what they do.
You also may have noticed that our test for Case 2 will capture some symbols that should be in Case 1 because they contain three of a kind:
```
symbols <- c("B", "B", "B")
all(symbols %in% c("B", "BB", "BBB"))
## TRUE
```
That won’t be a problem, however, because we’ve connected our cases with `else if` in the `if` tree. As soon as R comes to a case that evaluates to `TRUE`, it will skip over the rest of the tree. Think of it this way: each `else` tells R to only run the code that follows it *if none of the previous conditions have been met*. So when we have three of the same type of bar, R will evaluate the code for Case 1 and then skip the code for Case 2 (and Case 3\).
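You can watch both tests fire on a concrete example; the `else if` structure is what keeps Case 1 in charge, so three `B`s earn the $10 three\-of\-a\-kind prize rather than the $5 all\-bars prize:
```
symbols <- c("B", "B", "B")
symbols[1] == symbols[2] && symbols[2] == symbols[3] # Case 1 test
## TRUE
all(symbols %in% c("B", "BB", "BBB")) # Case 2 test
## TRUE
```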
Our next subtask is to assign a prize for `symbols`. When the `symbols` vector contains three of the same symbol, the prize will depend on which symbol there are three of. If there are three `DD`s, the prize will be $100; if there are three `7`s, the prize will be $80; and so on.
This suggests another `if` tree. You could assign a prize with some code like this:
```
if (same) {
symbol <- symbols[1]
if (symbol == "DD") {
prize <- 100
} else if (symbol == "7") {
prize <- 80
} else if (symbol == "BBB") {
prize <- 40
} else if (symbol == "BB") {
prize <- 25
} else if (symbol == "B") {
prize <- 10
} else if (symbol == "C") {
prize <- 10
} else if (symbol == "0") {
prize <- 0
}
}
```
While this code will work, it is a bit long to write and read, and it may require R to perform multiple logical tests before delivering the correct prize. We can do better with a different method.
9\.4 Lookup Tables
------------------
Very often in R, the simplest way to do something will involve subsetting. How could you use subsetting here? Since you know the exact relationship between the symbols and their prizes, you can create a vector that captures this information. This vector can store symbols as names and prize values as elements:
```
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
payouts
## DD 7 BBB BB B C 0
## 100 80 40 25 10 10 0
```
Now you can extract the correct prize for any symbol by subsetting the vector with the symbol’s name:
```
payouts["DD"]
## DD
## 100
payouts["B"]
## B
## 10
```
If you want to leave behind the symbol’s name when subsetting, you can run the `unname` function on the output:
```
unname(payouts["DD"])
## 100
```
`unname` returns a copy of an object with the names attribute removed.
`payouts` is a type of *lookup table*, an R object that you can use to look up values. Subsetting `payouts` provides a simple way to find the prize for a symbol. It doesn’t take many lines of code, and it does the same amount of work whether your symbol is `DD` or `0`. You can create lookup tables in R by creating named objects that can be subsetted in clever ways.
Sadly, our method is not quite automatic; we need to tell R which symbol to look up in `payouts`. Or do we? What would happen if you subsetted `payouts` by `symbols[1]`? Give it a try:
```
symbols <- c("7", "7", "7")
symbols[1]
## "7"
payouts[symbols[1]]
## 7
## 80
symbols <- c("C", "C", "C")
payouts[symbols[1]]
## C
## 10
```
You don’t need to know the exact symbol to look up because you can tell R to look up whichever symbol happens to be in `symbols`. You can find this symbol with `symbols[1]`, `symbols[2]`, or `symbols[3]`, because each contains the same symbol in this case. You now have a simple automated way to calculate the prize when `symbols` contains three of a kind. Let’s add it to our code and then look at Case 2:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
Case 2 occurs whenever the symbols are all bars. In that case, the prize will be $5, which is easy to assign:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
Now we can work on the last case. Here, you’ll need to know how many cherries are in `symbols` before you can calculate a prize.
**Exercise 9\.6 (Find C’s)** How can you tell which elements of a vector named `symbols` are a `C`? Devise a test and try it out.
**Challenge**
How might you count the number of `C`s in a vector named `symbols`? Remember R’s coercion rules.
*Solution.* As always, let’s work with a real example:
```
symbols <- c("C", "DD", "C")
```
One way to test for cherries would be to check which, if any, of the symbols are a `C`:
```
symbols == "C"
## TRUE FALSE TRUE
```
It’d be even more useful to count how many of the symbols are cherries. You can do this with `sum`. Because `sum` expects numeric input, not logical, R will coerce the `TRUE`s and `FALSE`s to `1`s and `0`s before doing the summation. As a result, `sum` will return the number of `TRUE`s, which is also the number of cherries:
```
sum(symbols == "C")
## 2
```
You can use the same method to count the number of diamonds in `symbols`:
```
sum(symbols == "DD")
## 1
```
Let’s add both of these subtasks to the program skeleton:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- # calculate a prize
}
diamonds <- sum(symbols == "DD")
# double the prize if necessary
```
Since Case 3 appears further down the `if` tree than Cases 1 and 2, the code in Case 3 will only be applied to players that do not have three of a kind or all bars. According to the slot machine’s payout scheme, these players will win $5 if they have two cherries and $2 if they have one cherry. If the player has no cherries, she gets a prize of $0\. We don’t need to worry about three cherries because that outcome is already covered in Case 1\.
As in Case 1, you could write an `if` tree that handles each combination of cherries, but just like in Case 1, this would be an inefficient solution:
```
if (cherries == 2) {
prize <- 5
} else if (cherries == 1) {
prize <- 2
} else {
prize <- 0
}
```
Again, I think the best solution will involve subsetting. If you are feeling ambitious, you can try to work this solution out on your own, but you will learn just as quickly by mentally working through the following proposed solution.
We know that our prize should be $0 if we have no cherries, $2 if we have one cherry, and $5 if we have two cherries. You can create a vector that contains this information. This will be a very simple lookup table:
```
c(0, 2, 5)
```
Now, like in Case 1, you can subset the vector to retrieve the correct prize. In this case, the prizes aren’t identified by a symbol name, but by the number of cherries present. Do we have that information? Yes, it is stored in `cherries`. We can use basic integer subsetting to get the correct prize from the prior lookup table, for example, `c(0, 2, 5)[1]`.
`cherries` isn’t exactly suited for integer subsetting because it could contain a zero, but that’s easy to fix. We can subset with `cherries + 1`. Now when `cherries` equals zero, we have:
```
cherries + 1
## 1
c(0, 2, 5)[cherries + 1]
## 0
```
When `cherries` equals one, we have:
```
cherries + 1
## 2
c(0, 2, 5)[cherries + 1]
## 2
```
And when `cherries` equals two, we have:
```
cherries + 1
## 3
c(0, 2, 5)[cherries + 1]
## 5
```
Examine these solutions until you are satisfied that they return the correct prize for each number of cherries. Then add the code to your script, as follows:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
diamonds <- sum(symbols == "DD")
# double the prize if necessary
```
**Lookup Tables Versus if Trees**
This is the second time we’ve created a lookup table to avoid writing an `if` tree. Why is this technique helpful and why does it keep appearing? Many `if` trees in R are essential. They provide a useful way to tell R to use different algorithms in different cases. However, `if` trees are not appropriate everywhere.
`if` trees have a couple of drawbacks. First, they require R to run multiple tests as it works its way down the `if` tree, which can create unnecessary work. Second, as you’ll see in [Speed](speed.html#speed), it can be difficult to use `if` trees in vectorized code, a style of code that takes advantage of R’s programming strengths to create fast programs. Lookup tables do not suffer from either of these drawbacks.
You won’t be able to replace every `if` tree with a lookup table, nor should you. However, you can usually use lookup tables to avoid assigning variables with `if` trees. As a general rule, use an `if` tree if each branch of the tree runs different *code*. Use a lookup table if each branch of the tree only assigns a different *value*.
To convert an `if` tree to a lookup table, identify the values to be assigned and store them in a vector. Next, identify the selection criteria used in the conditions of the `if` tree. If the conditions use character strings, give your vector names and use name\-based subsetting. If the conditions use integers, use integer\-based subsetting.
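As an illustration, here is a small `if` tree that only assigns values, rewritten as a name\-based lookup table. The `day` and `fee` values are hypothetical and have nothing to do with the slot machine:
```
# the if tree version:
# if (day == "Sat") {
#   fee <- 0
# } else if (day == "Sun") {
#   fee <- 0
# } else {
#   fee <- 5
# }

# the lookup table version:
fees <- c("Mon" = 5, "Tue" = 5, "Wed" = 5, "Thu" = 5,
          "Fri" = 5, "Sat" = 0, "Sun" = 0)
day <- "Sat"
unname(fees[day])
## 0
```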
The final subtask is to double the prize once for every diamond present. This means that the final prize will be some multiple of the current prize. For example, if no diamonds are present, the prize will be:
```
prize * 1 # 1 = 2 ^ 0
```
If one diamond is present, it will be:
```
prize * 2 # 2 = 2 ^ 1
```
If two diamonds are present, it will be:
```
prize * 4 # 4 = 2 ^ 2
```
And if three diamonds are present, it will be:
```
prize * 8 # 8 = 2 ^ 3
```
Can you think of an easy way to handle this? How about something similar to these examples?
**Exercise 9\.7 (Adjust for Diamonds)** Write a method for adjusting `prize` based on `diamonds`. Describe a solution in English first, and then write your code.
*Solution.* Here is a concise solution inspired by the previous pattern. The adjusted prize will equal:
```
prize * 2 ^ diamonds
```
which gives us our final `score` script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
9\.5 Code Comments
------------------
You now have a working score script that you can save to a function. Before you save your script, though, consider adding comments to your code with a `#`. Comments can make your code easier to understand by explaining *why* the code does what it does. You can also use comments to break long programs into scannable chunks. For example, I would include three comments in the `score` code:
```
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
Now that each part of your code works, you can wrap it into a function with the methods you learned in [Writing Your Own Functions](basics.html#write-functions). Either use RStudio’s Extract Function option in the menu bar under Code, or use the `function` function. Ensure that the last line of the function returns a result (it does), and identify any arguments used by your function. Often the concrete examples that you used to test your code, like `symbols`, will become the arguments of your function. Run the following code to start using the `score` function:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
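A few quick spot checks, with the prizes you would expect from Table [9\.1](programs.html#tab:prizes):
```
score(c("BB", "BB", "BB")) # three of a kind
## 25
score(c("B", "BBB", "BB")) # mixed bars
## 5
score(c("C", "DD", "B")) # one cherry, doubled once for the diamond
## 4
```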
Once you have defined the `score` function, the `play` function will work as well:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
Now it is easy to play the slot machine:
```
play()
## "0" "BB" "B"
## 0
play()
## "DD" "0" "B"
## 0
play()
## "BB" "BB" "B"
## 25
```
9\.6 Summary
------------
An R program is a set of instructions for your computer to follow that has been organized into a sequence of steps and cases. This may make programs seem simple, but don’t be fooled: you can create complicated results with the right combination of simple steps (and cases).
As a programmer, you are more likely to be fooled in the opposite way. A program may seem impossible to write when you know that it must do something impressive. Do not panic in these situations. Divide the job before you into simple tasks, and then divide the tasks again. You can visualize the relationship between tasks with a flow chart if it helps. Then work on the subtasks one at a time. Describe solutions in English, then convert them to R code. Test each solution against concrete examples as you go. Once each of your subtasks works, combine your code into a function that you can share and reuse.
R provides tools that can help you do this. You can manage cases with `if` and `else` statements. You can create a lookup table with objects and subsetting. You can add code comments with `#`. And you can save your programs as a function with `function`.
Things often go wrong when people write programs. It will be up to you to find the source of any errors that occur and to fix them. It should be easy to find the source of your errors if you use a stepwise approach to writing functions, writing—and then testing—one bit at a time. However, if the source of an error eludes you, or you find yourself working with large chunks of untested code, consider using R’s built\-in debugging tools, described in [Debugging R Code](debug.html#debug).
The next two chapters will teach you more tools that you can use in your programs. As you master these tools, you will find it easier to write R programs that let you do whatever you wish to your data. In [S3](s3.html#s3), you will learn how to use R’s S3 system, an invisible hand that shapes many parts of R. You will use the system to build a custom class for your slot machine output, and you will tell R how to display objects that have your class.
9\.1 Strategy
-------------
Scoring slot\-machine results is a complex task that will require a complex algorithm. You can make this, and other coding tasks, easier by using a simple strategy:
* Break complex tasks into simple subtasks.
* Use concrete examples.
* Describe your solutions in English, then convert them to R.
Let’s start by looking at how you can divide a program into subtasks that are simple to work with.
A program is a set of step\-by\-step instructions for your computer to follow. Taken together, these instructions may accomplish something very sophisticated. Taken apart, each individual step will likely be simple and straightforward.
You can make coding easier by identifying the individual steps or subtasks within your program. You can then work on each subtask separately. If a subtask seems complicated, try to divide it again into even subtasks that are even more simple. You can often reduce an R program into substasks so simple that each can be performed with a preexisting function.
R programs contain two types of subtasks: sequential steps and parallel cases.
### 9\.1\.1 Sequential Steps
One way to subdivide a program is into a series of sequential steps. The `play` function takes the approach, shown in Figure [9\.1](programs.html#fig:subdivide1). First, it generates three symbols (step 1\), then it displays them in the console window (step 2\), and then it scores them (step 3\):
```
play <- function() {
# step 1: generate symbols
symbols <- get_symbols()
# step 2: display the symbols
print(symbols)
# step 3: score the symbols
score(symbols)
}
```
To have R execute steps in sequence, place the steps one after another in an R script or function body.
Figure 9\.1: The play function uses a series of steps.
### 9\.1\.2 Parallel Cases
Another way to divide a task is to spot groups of similar cases within the task. Some tasks require different algorithms for different groups of input. If you can identify those groups, you can work out their algorithms one at a time.
For example, `score` will need to calculate the prize one way if `symbols` contains three of a kind (In that case, `score` will need to match the common symbol to a prize). `score` will need to calculate the prize a second way if the symbols are all bars (In that case, `score` can just assign a prize of $5\). And, finally, `score` will need to calculate the prize in a third way if the symbols do not contain three of a kind or all bars (In that case, `score` must count the number of cherries present). `score` will never use all three of these algorithms at once; it will always choose just one algorithm to run based on the combination of symbols.
Diamonds complicate all of this because diamonds can be treated as wild cards. Let’s ignore that for now and focus on the simpler case where diamonds double the prize but are not wilds. `score` can double the prize as necessary after it runs one of the following algorithms, as shown in Figure [9\.2](programs.html#fig:subdivide2).
Adding the `score` cases to the `play` steps reveals a strategy for the complete slot machine program, as shown in Figure [9\.3](programs.html#fig:subdivide3).
We’ve already solved the first few steps in this strategy. Our program can get three slot machine symbols with the `get_symbols` function. Then it can display the symbols with the `print` function. Now let’s examine how the program can handle the parallel score cases.
Figure 9\.2: The score function must distinguish between parallel cases.
Figure 9\.3: The complete slot machine simulation will involve subtasks that are arranged both in series and in parallel.
### 9\.1\.1 Sequential Steps
One way to subdivide a program is into a series of sequential steps. The `play` function takes the approach, shown in Figure [9\.1](programs.html#fig:subdivide1). First, it generates three symbols (step 1\), then it displays them in the console window (step 2\), and then it scores them (step 3\):
```
play <- function() {
# step 1: generate symbols
symbols <- get_symbols()
# step 2: display the symbols
print(symbols)
# step 3: score the symbols
score(symbols)
}
```
To have R execute steps in sequence, place the steps one after another in an R script or function body.
Figure 9\.1: The play function uses a series of steps.
### 9\.1\.2 Parallel Cases
Another way to divide a task is to spot groups of similar cases within the task. Some tasks require different algorithms for different groups of input. If you can identify those groups, you can work out their algorithms one at a time.
For example, `score` will need to calculate the prize one way if `symbols` contains three of a kind (In that case, `score` will need to match the common symbol to a prize). `score` will need to calculate the prize a second way if the symbols are all bars (In that case, `score` can just assign a prize of $5\). And, finally, `score` will need to calculate the prize in a third way if the symbols do not contain three of a kind or all bars (In that case, `score` must count the number of cherries present). `score` will never use all three of these algorithms at once; it will always choose just one algorithm to run based on the combination of symbols.
Diamonds complicate all of this because diamonds can be treated as wild cards. Let’s ignore that for now and focus on the simpler case where diamonds double the prize but are not wilds. `score` can double the prize as necessary after it runs one of the following algorithms, as shown in Figure [9\.2](programs.html#fig:subdivide2).
Adding the `score` cases to the `play` steps reveals a strategy for the complete slot machine program, as shown in Figure [9\.3](programs.html#fig:subdivide3).
We’ve already solved the first few steps in this strategy. Our program can get three slot machine symbols with the `get_symbols` function. Then it can display the symbols with the `print` function. Now let’s examine how the program can handle the parallel score cases.
Figure 9\.2: The score function must distinguish between parallel cases.
Figure 9\.3: The complete slot machine simulation will involve subtasks that are arranged both in series and in parallel.
9\.2 if Statements
------------------
Linking cases together in parallel requires a bit of structure; your program faces a fork in the road whenever it must choose between cases. You can help the program navigate this fork with an `if` statement.
An `if` statement tells R to do a certain task for a certain case. In English you would say something like, “If this is true, do that.” In R, you would say:
```
if (this) {
that
}
```
The `this` object should be a logical test or an R expression that evaluates to a single `TRUE` or `FALSE`. If `this` evaluates to `TRUE`, R will run all of the code that appears between the braces that follow the `if` statement (i.e., between the `{` and `}` symbols). If `this` evaluates to `FALSE`, R will skip the code between the braces without running it.
For example, you could write an `if` statement that ensures some object, `num`, is positive:
```
if (num < 0) {
num <- num * -1
}
```
If `num < 0` is `TRUE`, R will multiply `num` by negative one, which will make `num` positive:
```
num <- -2
if (num < 0) {
num <- num * -1
}
num
## 2
```
If `num < 0` is `FALSE`, R will do nothing and `num` will remain as it is—positive (or zero):
```
num <- 4
if (num < 0) {
num <- num * -1
}
num
## 4
```
The condition of an `if` statement must evaluate to a *single* `TRUE` or `FALSE`. If the condition creates a vector of `TRUE`s and `FALSE`s (which is easier to make than you may think), your `if` statement will print a warning message and use only the first element of the vector. Remember that you can condense vectors of logical values to a single `TRUE` or `FALSE` with the functions `any` and `all`.
You don’t have to limit your `if` statements to a single line of code; you can include as many lines as you like between the braces. For example, the following code uses many lines to ensure that `num` is positive. The additional lines print some informative statements if `num` begins as a negative number. R will skip the entire code block—`print` statements and all—if `num` begins as a positive number:
```
num <- -1
if (num < 0) {
print("num is negative.")
print("Don't worry, I'll fix it.")
num <- num * -1
print("Now num is positive.")
}
## "num is negative."
## "Don't worry, I'll fix it."
## "Now num is positive."
num
## 1
```
Try the following quizzes to develop your understanding of `if` statements.
**Exercise 9\.1 (Quiz A)** What will this return?
```
x <- 1
if (3 == 3) {
x <- 2
}
x
```
*Solution.* The code will return the number 2\. `x` begins as 1, and then R encounters the `if` statement. Since the condition evaluates to `TRUE`, R will run `x <- 2`, changing the value of `x`.
**Exercise 9\.2 (Quiz B)** What will this return?
```
x <- 1
if (TRUE) {
x <- 2
}
x
```
*Solution.* This code will also return the number 2\. It works the same as the code in Quiz A, except the condition in this statement is already `TRUE`. R doesn’t even need to evaluate it. As a result, the code inside the `if` statement will be run, and `x` will be set to 2\.
**Exercise 9\.3 (Quiz C)** What will this return?
```
x <- 1
if (x == 1) {
x <- 2
if (x == 1) {
x <- 3
}
}
x
```
*Solution.* Once again, the code will return the number 2\. `x` starts out as 1, and the condition of the first `if` statement will evaluate to `TRUE`, which causes R to run the code in the body of the `if` statement. First, R sets `x` equal to 2, then R evaluates the second `if` statement, which is in the body of the first. This time `x == 1` will evaluate to `FALSE` because `x` now equals 2\. As a result, R ignores `x <- 3` and exits both `if` statements.
9\.3 else Statements
--------------------
`if` statements tell R what to do when your condition is *true*, but you can also tell R what to do when the condition is *false*. `else` is a counterpart to `if` that extends an `if` statement to include a second case. In English, you would say, “If this is true, do plan A; else do plan B.” In R, you would say:
```
if (this) {
Plan A
} else {
Plan B
}
```
When `this` evaluates to `TRUE`, R will run the code in the first set of braces, but not the code in the second. When `this` evaluates to `FALSE`, R will run the code in the second set of braces, but not the first. You can use this arrangement to cover all of the possible cases. For example, you could write some code that rounds a decimal to the nearest integer.
Start with a decimal:
```
a <- 3.14
```
Then isolate the decimal component with `trunc`:
```
dec <- a - trunc(a)
dec
## 0.14
```
`trunc` takes a number and returns only the portion of the number that appears to the left of the decimal place (i.e., the integer part of the number).
`a - trunc(a)` is a convenient way to return the decimal part of `a`.
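If you haven’t used `trunc` before, note that it rounds toward zero for negative numbers as well as positive ones, which is different from rounding down:
```
trunc(3.14)
## 3
trunc(-3.14)
## -3
```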
Then use an `if else` tree to round the number (either up or down):
```
if (dec >= 0.5) {
a <- trunc(a) + 1
} else {
a <- trunc(a)
}
a
## 3
```
If your situation has more than two mutually exclusive cases, you can string multiple `if` and `else` statements together by adding a new `if` statement immediately after `else`. For example:
```
a <- 1
b <- 1
if (a > b) {
print("A wins!")
} else if (a < b) {
print("B wins!")
} else {
print("Tie.")
}
## "Tie."
```
R will work through the `if` conditions until one evaluates to `TRUE`, then R will ignore any remaining `if` and `else` clauses in the tree. If no conditions evaluate to `TRUE`, R will run the final `else` statement.
If two `if` statements describe mutually exclusive events, it is better to join the `if` statements with an `else if` than to list them separately. This lets R ignore the second `if` statement whenever the first returns a `TRUE`, which saves work.
You can use `if` and `else` to link the subtasks in your slot\-machine function. Open a fresh R script, and copy this code into it. The code will be the skeleton of our final `score` function. Compare it to the flow chart for `score` in Figure [9\.2](programs.html#fig:subdivide2):
```
if ( # Case 1: all the same <1>) {
prize <- # look up the prize <3>
} else if ( # Case 2: all bars <2> ) {
prize <- # assign $5 <4>
} else {
# count cherries <5>
prize <- # calculate a prize <7>
}
# count diamonds <6>
# double the prize if necessary <8>
```
Our skeleton is rather incomplete; there are many sections that are just code comments instead of real code. However, we’ve reduced the program to eight simple subtasks:
**\<1\>** \- Test whether the symbols are three of a kind.
**\<2\>** \- Test whether the symbols are all bars.
**\<3\>** \- Look up the prize for three of a kind based on the common symbol.
**\<4\>** \- Assign a prize of $5\.
**\<5\>** \- Count the number of cherries.
**\<6\>** \- Count the number of diamonds.
**\<7\>** \- Calculate a prize based on the number of cherries.
**\<8\>** \- Adjust the prize for diamonds.
If you like, you can reorganize your flow chart around these tasks, as in Figure [9\.4](programs.html#fig:subdivide4). The chart will describe the same strategy, but in a more precise way. I’ll use a diamond shape to symbolize an `if else` decision.
Figure 9\.4: `score` can navigate three cases with two `if else` decisions. We can also break some of our tasks into two steps.
Now we can work through the subtasks one at a time, adding R code to the `if` tree as we go. Each subtask will be easy to solve if you set up a concrete example to work with and try to describe a solution in English before coding in R.
The first subtask asks you to test whether the symbols are three of a kind. How should you begin writing the code for this subtask?
You know that the final `score` function will look something like this:
```
score <- function(symbols) {
# calculate a prize
prize
}
```
Its argument, `symbols`, will be the output of `get_symbols`, a vector that contains three character strings. You could start writing `score` as I have written it, by defining an object named `score` and then slowly filling in the body of the function. However, this would be a bad idea. The eventual function will have eight separate parts, and it will not work correctly until *all* of those parts are written (and themselves work correctly). This means you would have to write the entire `score` function before you could test any of the subtasks. If `score` doesn’t work—which is very likely—you will not know which subtask needs to be fixed.
You can save yourself time and headaches if you focus on one subtask at a time. For each subtask, create a concrete example that you can test your code on. For example, you know that `score` will need to work on a vector named `symbols` that contains three character strings. If you make a real vector named `symbols`, you can run the code for many of your subtasks on the vector as you go:
```
symbols <- c("7", "7", "7")
```
If a piece of code does not work on `symbols`, you will know that you need to fix it before you move on. You can change the value of `symbols` from subtask to subtask to ensure that your code works in every situation:
```
symbols <- c("B", "BB", "BBB")
symbols <- c("C", "DD", "0")
```
Only combine your subtasks into a `score` function once each subtask works on a concrete example. If you follow this plan, you will spend more time using your functions and less time trying to figure out why they do not work.
After you set up a concrete example, try to describe how you will do the subtask in English. The more precisely you can describe your solution, the easier it will be to write your R code.
Our first subtask asks us to “test whether the symbols are three of a kind.” This phrase does not suggest any useful R code to me. However, I could describe a more precise test for three of a kind: three symbols will be the same if the first symbol is equal to the second and the second symbol is equal to the third. Or, even more precisely:
*A vector named `symbols` will contain three of the same symbol if the first element of `symbols` is equal to the second element of `symbols` and the second element of `symbols` is equal to the third element of `symbols`*.
**Exercise 9\.4 (Write a Test)** Turn the preceding statement into a logical test written in R. Use your knowledge of logical tests, Boolean operators, and subsetting from [R Notation](r-notation.html#r-notation). The test should work with the vector `symbols` and return a `TRUE` *if and only if* each element in `symbols` is the same. Be sure to test your code on `symbols`.
*Solution.* Here are a couple of ways to test that `symbols` contains three of the same symbol. The first method parallels the English suggestion above, but there are other ways to do the same test. There is no right or wrong answer, so long as your solution works, which is easy to check because you’ve created a vector named `symbols`:
```
symbols
## "7" "7" "7"
symbols[1] == symbols[2] & symbols[2] == symbols[3]
## TRUE
symbols[1] == symbols[2] & symbols[1] == symbols[3]
## TRUE
all(symbols == symbols[1])
## TRUE
```
As your vocabulary of R functions broadens, you’ll think of more ways to do basic tasks. One method that I like for checking three of a kind is:
```
length(unique(symbols)) == 1
```
The `unique` function returns every unique term that appears in a vector. If your `symbols` vector contains three of a kind (i.e., one unique term that appears three times), then `unique(symbols)` will return a vector of length `1`.
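You can check this version of the test against both a three-of-a-kind vector and a mixed vector:
```
length(unique(c("7", "7", "7"))) == 1
## TRUE
length(unique(c("B", "BB", "B"))) == 1
## FALSE
```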
Now that you have a working test, you can add it to your slot\-machine script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
if (same) {
prize <- # look up the prize
} else if ( # Case 2: all bars ) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
`&&` and `||` behave like `&` and `|` but can sometimes be more efficient. The double operators will not evaluate the second test in a pair of tests if the first test makes the result clear. For example, if `symbols[1]` does not equal `symbols[2]` in the next expression, `&&` will not evaluate `symbols[2] == symbols[3]`; it can immediately return a `FALSE` for the whole expression (because `FALSE & TRUE` and `FALSE & FALSE` both evaluate to `FALSE`). This efficiency can speed up your programs; however, double operators are not appropriate everywhere. `&&` and `||` are not vectorized, which means they can only handle a single logical test on each side of the operator.
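You can watch the short-circuiting in action. In each expression below, the first test settles the result, so R never evaluates the call to `stop` (which would otherwise throw an error):
```
FALSE && stop("R never gets this far")
## FALSE
TRUE || stop("R never gets this far")
## TRUE
```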
The second prize case occurs when all the symbols are a type of bar, for example, `B`, `BB`, and `BBB`. Let’s begin by creating a concrete example to work with:
```
symbols <- c("B", "BBB", "BB")
```
**Exercise 9\.5 (Test for All Bars)** Use R’s logical and Boolean operators to write a test that will determine whether a vector named `symbols` contains only symbols that are a type of bar. Check whether your test works with our example `symbols` vector. Remember to describe how the test should work in English, and then convert the solution to R.
*Solution.* As with many things in R, there are multiple ways to test whether `symbols` contains all bars. For example, you could write a very long test that uses multiple Boolean operators, like this:
```
symbols[1] == "B" | symbols[1] == "BB" | symbols[1] == "BBB" &
symbols[2] == "B" | symbols[2] == "BB" | symbols[2] == "BBB" &
symbols[3] == "B" | symbols[3] == "BB" | symbols[3] == "BBB"
## TRUE
```
However, this is not a very efficient solution, because R has to run nine logical tests (and you have to type them). You can often replace multiple `|` operators with a single `%in%`. Also, you can check that a test is true for each element in a vector with `all`. These two changes shorten the preceding code to:
```
all(symbols %in% c("B", "BB", "BBB"))
## TRUE
```
Let’s add this code to our script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
prize <- # look up the prize
} else if (all(bars)) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
You may have noticed that I split this test up into two steps, `bars` and `all(bars)`. That’s just a matter of personal preference. Wherever possible, I like to write my code so it can be read with function and object names conveying what they do.
You also may have noticed that our test for Case 2 will capture some symbols that should be in Case 1 because they contain three of a kind:
```
symbols <- c("B", "B", "B")
all(symbols %in% c("B", "BB", "BBB"))
## TRUE
```
That won’t be a problem, however, because we’ve connected our cases with `else if` in the `if` tree. As soon as R comes to a case that evaluates to `TRUE`, it will skip over the rest of the tree. Think of it this way: each `else` tells R to only run the code that follows it *if none of the previous conditions have been met*. So when we have three of the same type of bar, R will evaluate the code for Case 1 and then skip the code for Case 2 (and Case 3\).
Our next subtask is to assign a prize for `symbols`. When the `symbols` vector contains three of the same symbol, the prize will depend on which symbol there are three of. If there are three `DD`s, the prize will be $100; if there are three `7`s, the prize will be $80; and so on.
This suggests another `if` tree. You could assign a prize with some code like this:
```
if (same) {
symbol <- symbols[1]
if (symbol == "DD") {
prize <- 100
} else if (symbol == "7") {
prize <- 80
} else if (symbol == "BBB") {
prize <- 40
} else if (symbol == "BB") {
prize <- 25
} else if (symbol == "B") {
prize <- 10
} else if (symbol == "C") {
prize <- 10
} else if (symbol == "0") {
prize <- 0
}
}
```
While this code will work, it is a bit long to write and read, and it may require R to perform multiple logical tests before delivering the correct prize. We can do better with a different method.
9\.4 Lookup Tables
------------------
Very often in R, the simplest way to do something will involve subsetting. How could you use subsetting here? Since you know the exact relationship between the symbols and their prizes, you can create a vector that captures this information. This vector can store symbols as names and prize values as elements:
```
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
payouts
## DD 7 BBB BB B C 0
## 100 80 40 25 10 10 0
```
Now you can extract the correct prize for any symbol by subsetting the vector with the symbol’s name:
```
payouts["DD"]
## DD
## 100
payouts["B"]
## B
## 10
```
If you want to leave behind the symbol’s name when subsetting, you can run the `unname` function on the output:
```
unname(payouts["DD"])
## 100
```
`unname` returns a copy of an object with the names attribute removed.
`payouts` is a type of *lookup table*, an R object that you can use to look up values. Subsetting `payouts` provides a simple way to find the prize for a symbol. It doesn’t take many lines of code, and it does the same amount of work whether your symbol is `DD` or `0`. You can create lookup tables in R by creating named objects that can be subsetted in clever ways.
Sadly, our method is not quite automatic; we need to tell R which symbol to look up in `payouts`. Or do we? What would happen if you subsetted `payouts` by `symbols[1]`? Give it a try:
```
symbols <- c("7", "7", "7")
symbols[1]
## "7"
payouts[symbols[1]]
## 7
## 80
symbols <- c("C", "C", "C")
payouts[symbols[1]]
## C
## 10
```
You don’t need to know the exact symbol to look up because you can tell R to look up whichever symbol happens to be in `symbols`. You can find this symbol with `symbols[1]`, `symbols[2]`, or `symbols[3]`, because each contains the same symbol in this case. You now have a simple automated way to calculate the prize when `symbols` contains three of a kind. Let’s add it to our code and then look at Case 2:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- # assign $5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
Case 2 occurs whenever the symbols are all bars. In that case, the prize will be $5, which is easy to assign:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
# count cherries
prize <- # calculate a prize
}
# count diamonds
# double the prize if necessary
```
Now we can work on the last case. Here, you’ll need to know how many cherries are in `symbols` before you can calculate a prize.
**Exercise 9\.6 (Find C’s)** How can you tell which elements of a vector named `symbols` are a `C`? Devise a test and try it out.
**Challenge**
How might you count the number of `C`s in a vector named `symbols`? Remember R’s coercion rules.
*Solution.* As always, let’s work with a real example:
```
symbols <- c("C", "DD", "C")
```
One way to test for cherries would be to check which, if any, of the symbols are a `C`:
```
symbols == "C"
## TRUE FALSE TRUE
```
It’d be even more useful to count how many of the symbols are cherries. You can do this with `sum`. Since `sum` expects numeric input, not logical, R will coerce the `TRUE`s and `FALSE`s to `1`s and `0`s before doing the summation. As a result, `sum` will return the number of `TRUE`s, which is also the number of cherries:
```
sum(symbols == "C")
## 2
```
You can use the same method to count the number of diamonds in `symbols`:
```
sum(symbols == "DD")
## 1
```
Let’s add both of these subtasks to the program skeleton:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- # calculate a prize
}
diamonds <- sum(symbols == "DD")
# double the prize if necessary
```
Since Case 3 appears further down the `if` tree than Cases 1 and 2, the code in Case 3 will only be applied to players that do not have three of a kind or all bars. According to the slot machine’s payout scheme, these players will win $5 if they have two cherries and $2 if they have one cherry. If the player has no cherries, she gets a prize of $0\. We don’t need to worry about three cherries because that outcome is already covered in Case 1\.
As in Case 1, you could write an `if` tree that handles each combination of cherries, but just like in Case 1, this would be an inefficient solution:
```
if (cherries == 2) {
prize <- 5
} else if (cherries == 1) {
prize <- 2
} else {
  prize <- 0
}
```
Again, I think the best solution will involve subsetting. If you are feeling ambitious, you can try to work this solution out on your own, but you will learn just as quickly by mentally working through the following proposed solution.
We know that our prize should be $0 if we have no cherries, $2 if we have one cherry, and $5 if we have two cherries. You can create a vector that contains this information. This will be a very simple lookup table:
```
c(0, 2, 5)
```
Now, like in Case 1, you can subset the vector to retrieve the correct prize. In this case, the prizes aren’t identified by a symbol name but by the number of cherries present. Do we have that information? Yes, it is stored in `cherries`. We can use basic integer subsetting to get the correct prize from the lookup table, for example, `c(0, 2, 5)[1]`.
`cherries` isn’t exactly suited for integer subsetting because it could contain a zero, but that’s easy to fix. We can subset with `cherries + 1`. Now when `cherries` equals zero, we have:
```
cherries + 1
## 1
c(0, 2, 5)[cherries + 1]
## 0
```
When `cherries` equals one, we have:
```
cherries + 1
## 2
c(0, 2, 5)[cherries + 1]
## 2
```
And when `cherries` equals two, we have:
```
cherries + 1
## 3
c(0, 2, 5)[cherries + 1]
## 5
```
Examine these solutions until you are satisfied that they return the correct prize for each number of cherries. Then add the code to your script, as follows:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
diamonds <- sum(symbols == "DD")
# double the prize if necessary
```
**Lookup Tables Versus if Trees**
This is the second time we’ve created a lookup table to avoid writing an `if` tree. Why is this technique helpful and why does it keep appearing? Many `if` trees in R are essential. They provide a useful way to tell R to use different algorithms in different cases. However, `if` trees are not appropriate everywhere.
`if` trees have a couple of drawbacks. First, they require R to run multiple tests as it works its way down the `if` tree, which can create unnecessary work. Second, as you’ll see in [Speed](speed.html#speed), it can be difficult to use `if` trees in vectorized code, a style of code that takes advantage of R’s programming strengths to create fast programs. Lookup tables do not suffer from either of these drawbacks.
You won’t be able to replace every `if` tree with a lookup table, nor should you. However, you can usually use lookup tables to avoid assigning variables with `if` trees. As a general rule, use an `if` tree if each branch of the tree runs different *code*. Use a lookup table if each branch of the tree only assigns a different *value*.
To convert an `if` tree to a lookup table, identify the values to be assigned and store them in a vector. Next, identify the selection criteria used in the conditions of the `if` tree. If the conditions use character strings, give your vector names and use name\-based subsetting. If the conditions use integers, use integer\-based subsetting.
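Here is a minimal sketch of both conversions, using made-up values rather than the slot-machine prizes:
```
# character conditions: store the values in a named vector
# and subset by name
sizes <- c("small" = 8, "medium" = 12, "large" = 16)
sizes["medium"]
## medium
##     12

# integer conditions: subset by position
# (add one if the integer can be zero, as with cherries)
labels <- c("none", "one", "two")
labels[0 + 1]
## "none"
```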
The final subtask is to double the prize once for every diamond present. This means that the final prize will be some multiple of the current prize. For example, if no diamonds are present, the prize will be:
```
prize * 1 # 1 = 2 ^ 0
```
If one diamond is present, it will be:
```
prize * 2 # 2 = 2 ^ 1
```
If two diamonds are present, it will be:
```
prize * 4 # 4 = 2 ^ 2
```
And if three diamonds are present, it will be:
```
prize * 8 # 8 = 2 ^ 3
```
Can you think of an easy way to handle this? How about something similar to these examples?
**Exercise 9\.7 (Adjust for Diamonds)** Write a method for adjusting `prize` based on `diamonds`. Describe a solution in English first, and then write your code.
*Solution.* Here is a concise solution inspired by the previous pattern. The adjusted prize will equal:
```
prize * 2 ^ diamonds
```
which gives us our final `score` script:
```
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
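Before saving the script, you can convince yourself that the doubling rule behaves as intended. Since `2 ^ diamonds` is vectorized, one line checks all four diamond counts at once (here with a hypothetical prize of $10):
```
prize <- 10
prize * 2 ^ c(0, 1, 2, 3)
## 10 20 40 80
```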
9\.5 Code Comments
------------------
You now have a working score script that you can save to a function. Before you save your script, though, consider adding comments to your code with a `#`. Comments can make your code easier to understand by explaining *why* the code does what it does. You can also use comments to break long programs into scannable chunks. For example, I would include three comments in the `score` code:
```
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
```
Now that each part of your code works, you can wrap it into a function with the methods you learned in [Writing Your Own Functions](basics.html#write-functions). Either use RStudio’s Extract Function option in the menu bar under Code, or use the `function` function. Ensure that the last line of the function returns a result (it does), and identify any arguments used by your function. Often the concrete examples that you used to test your code, like `symbols`, will become the arguments of your function. Run the following code to start using the `score` function:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
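It is worth spot-checking `score` against a few concrete inputs whose prizes you can compute by hand:
```
score(c("DD", "DD", "DD")) # $100 for three DDs, doubled three times
## 800
score(c("B", "BB", "BBB")) # all bars
## 5
score(c("C", "DD", "0")) # one cherry ($2), doubled once for the diamond
## 4
```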
Once you have defined the `score` function, the `play` function will work as well:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
Now it is easy to play the slot machine:
```
play()
## "0" "BB" "B"
## 0
play()
## "DD" "0" "B"
## 0
play()
## "BB" "BB" "B"
## 25
```
9\.6 Summary
------------
An R program is a set of instructions for your computer to follow that has been organized into a sequence of steps and cases. This may make programs seem simple, but don’t be fooled: you can create complicated results with the right combination of simple steps (and cases).
As a programmer, you are more likely to be fooled in the opposite way. A program may seem impossible to write when you know that it must do something impressive. Do not panic in these situations. Divide the job before you into simple tasks, and then divide the tasks again. You can visualize the relationship between tasks with a flow chart if it helps. Then work on the subtasks one at a time. Describe solutions in English, then convert them to R code. Test each solution against concrete examples as you go. Once each of your subtasks works, combine your code into a function that you can share and reuse.
R provides tools that can help you do this. You can manage cases with `if` and `else` statements. You can create a lookup table with objects and subsetting. You can add code comments with `#`. And you can save your programs as a function with `function`.
Things often go wrong when people write programs. It will be up to you to find the source of any errors that occur and to fix them. It should be easy to find the source of your errors if you use a stepwise approach to writing functions, writing—and then testing—one bit at a time. However, if the source of an error eludes you, or you find yourself working with large chunks of untested code, consider using R’s built in debugging tools, described in [Debugging R Code](debug.html#debug).
The next two chapters will teach you more tools that you can use in your programs. As you master these tools, you will find it easier to write R programs that let you do whatever you wish to your data. In [S3](s3.html#s3), you will learn how to use R’s S3 system, an invisible hand that shapes many parts of R. You will use the system to build a custom class for your slot machine output, and you will tell R how to display objects that have your class.
10 S3
=====
You may have noticed that your slot machine results do not look the way I promised they would. I suggested that the slot machine would display its results like this:
```
play()
## 0 0 DD
## $0
```
But the current machine displays its results in a less pretty format:
```
play()
## "0" "0" "DD"
## 0
```
Moreover, the slot machine uses a hack to display symbols (we call `print` from within `play`). As a result, the symbols do not follow your prize output if you save it:
```
one_play <- play()
## "B" "0" "B"
one_play
## 0
```
You can fix both of these problems with R’s S3 system.
10\.1 The S3 System
-------------------
S3 refers to a class system built into R. The system governs how R handles objects of different classes. Certain R functions will look up an object’s S3 class, and then behave differently in response.
The `print` function is like this. When you print a numeric vector, `print` will display a number:
```
num <- 1000000000
print(num)
## 1000000000
```
But if you give that number the S3 class `POSIXct` followed by `POSIXt`, `print` will display a time:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
If you use objects with classes—and you do—you will run into R’s S3 system. S3 behavior can seem odd at first, but is easy to predict once you are familiar with it.
R’s S3 system is built around three components: attributes (especially the `class` attribute), generic functions, and methods.
10\.2 Attributes
----------------
In [Attributes](r-objects.html#attributes), you learned that many R objects come with attributes, pieces of extra information that are given a name and appended to the object. Attributes do not affect the values of the object, but stick to the object as a type of metadata that R can use to handle the object. For example, a data frame stores its row and column names as attributes. Data frames also store their class, `"data.frame"`, as an attribute.
You can see an object’s attributes with `attributes`. If you run `attributes` on the `deck` data frame that you created in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards), you will see:
```
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
## [20] 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
## [37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
```
R comes with many helper functions that let you set and access the most common attributes used in R. You’ve already met the `names`, `dim`, and `class` functions, which each work with an eponymously named attribute. However, R also has `row.names`, `levels`, and many other attribute\-based helper functions. You can use any of these functions to retrieve an attribute’s value:
```
row.names(deck)
## [1] "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13"
## [14] "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26"
## [27] "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39"
## [40] "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52"
```
or to change an attribute’s value:
```
row.names(deck) <- 101:152
```
or to give an object a new attribute altogether:
```
levels(deck) <- c("level 1", "level 2", "level 3")
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117
## [18] 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134
## [35] 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151
## [52] 152
##
## $levels
## [1] "level 1" "level 2" "level 3"
```
R is very laissez faire when it comes to attributes. It will let you add any attributes that you like to an object (and then it will usually ignore them). The only time R will complain is when a function needs to find an attribute and it is not there.
You can add any general attribute to an object with `attr`; you can also use `attr` to look up the value of any attribute of an object. Let’s see how this works with `one_play`, the result of playing our slot machine one time:
```
one_play <- play()
one_play
## 0
attributes(one_play)
## NULL
```
`attr` takes two arguments: an R object and the name of an attribute (as a character string). To give the R object an attribute of the specified name, save a value to the output of `attr`. Let’s give `one_play` an attribute named `symbols` that contains a vector of character strings:
```
attr(one_play, "symbols") <- c("B", "0", "B")
attributes(one_play)
## $symbols
## [1] "B" "0" "B"
```
To look up the value of any attribute, give `attr` an R object and the name of the attribute you would like to look up:
```
attr(one_play, "symbols")
## "B" "0" "B"
```
If you give an attribute to an atomic vector, like `one_play`, R will usually display the attribute beneath the vector’s values. However, if the attribute changes the vector’s class, R may display all of the information in the vector in a new way (as we saw with `POSIXct` objects):
```
one_play
## [1] 0
## attr(,"symbols")
## [1] "B" "0" "B"
```
R will generally ignore an object’s attributes unless you give them a name that an R function looks for, like `names` or `class`. For example, R will ignore the `symbols` attribute of `one_play` as you manipulate `one_play`:
```
one_play + 1
## 1
## attr(,"symbols")
## "B" "0" "B"
```
**Exercise 10\.1 (Add an Attribute)** Modify `play` to return a prize that contains the symbols associated with it as an attribute named `symbols`. Remove the redundant call to `print(symbols)`:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
*Solution.* You can create a new version of `play` by capturing the output of `score(symbols)` and assigning an attribute to it. `play` can then return the enhanced version of the output:
```
play <- function() {
symbols <- get_symbols()
prize <- score(symbols)
attr(prize, "symbols") <- symbols
prize
}
```
Now `play` returns both the prize and the symbols associated with the prize. The results may not look pretty, but the symbols stick with the prize when we copy it to a new object. We can work on tidying up the display in a minute:
```
play()
## [1] 0
## attr(,"symbols")
## [1] "B" "BB" "0"
two_play <- play()
two_play
## [1] 0
## attr(,"symbols")
## [1] "0" "B" "0"
```
You can also generate a prize and set its attributes in one step with the `structure` function. `structure` creates an object with a set of attributes. The first argument of `structure` should be an R object or set of values, and the remaining arguments should be named attributes for `structure` to add to the object. You can give these arguments any argument names you like. `structure` will add the attributes to the object under the names that you provide as argument names:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
three_play <- play()
three_play
## 0
## attr(,"symbols")
## "0" "BB" "B"
```
Now that your `play` output contains a `symbols` attribute, what can you do with it? You can write your own functions that look up and use the attribute. For example, the following function will look up `one_play`’s `symbols` attribute and use it to display `one_play` in a pretty manner. We will use this function to display our slot results, so let’s take a moment to study what it does:
```
slot_display <- function(prize){
# extract symbols
symbols <- attr(prize, "symbols")
# collapse symbols into single string
symbols <- paste(symbols, collapse = " ")
# combine symbol with prize as a character string
# \n is special escape sequence for a new line (i.e. return or enter)
string <- paste(symbols, prize, sep = "\n$")
# display character string in console without quotes
cat(string)
}
slot_display(one_play)
## B 0 B
## $0
```
The function expects an object like `one_play` that has both a numerical value and a `symbols` attribute. The first line of the function will look up the value of the `symbols` attribute and save it as an object named `symbols`. Let’s make an example `symbols` object so we can see what the rest of the function does. We can use `one_play`’s `symbols` attribute to do the job. `symbols` will be a vector of three character strings:
```
symbols <- attr(one_play, "symbols")
symbols
## "B" "0" "B"
```
Next, `slot_display` uses `paste` to collapse the three strings in `symbols` into a single character string. `paste` collapses a vector of character strings into a single string when you give it the `collapse` argument. `paste` will use the value of `collapse` to separate the formerly distinct strings. Hence, `symbols` becomes `B 0 B`, the three strings separated by a space:
```
symbols <- paste(symbols, collapse = " ")
symbols
## "B 0 B"
```
Our function then uses `paste` in a new way to combine `symbols` with the value of `prize`. `paste` combines separate objects into a character string when you give it a `sep` argument. For example, here `paste` will combine the string in `symbols`, `B 0 B`, with the number in `prize`, 0\. `paste` will use the value of the `sep` argument to separate the inputs in the new string. Here, that value is `\n$`, so our result will look like `"B 0 B\n$0"`:
```
prize <- one_play
string <- paste(symbols, prize, sep = "\n$")
string
## "B 0 B\n$0"
```
The last line of `slot_display` calls `cat` on the new string. `cat` is like `print`; it displays its input at the command line. However, `cat` does not surround its output with quotation marks. `cat` also replaces every `\n` with a new line or line break. The result is what we see. Notice that it looks just how I suggested that our `play` output should look in [Programs](programs.html#programs):
```
cat(string)
## B 0 B
## $0
```
You can use `slot_display` to manually clean up the output of `play`:
```
slot_display(play())
## C B 0
## $2
slot_display(play())
## 7 0 BB
## $0
```
This method of cleaning the output requires you to manually intervene in your R session (to call `slot_display`). There is a function that you can use to automatically clean up the output of `play` *each* time it is displayed. This function is `print`, and it is a *generic function*.
10\.3 Generic Functions
-----------------------
R uses `print` more often than you may think; R calls `print` each time it displays a result in your console window. This call happens in the background, so you do not notice it; but the call explains how output makes it to the console window (recall that `print` always prints its argument in the console window). This `print` call also explains why the output of `print` always matches what you see when you display an object at the command line:
```
print(pi)
## 3.141593
pi
## 3.141593
print(head(deck))
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
print(play())
## 5
## attr(,"symbols")
## "B" "BB" "B"
play()
## 5
## attr(,"symbols")
## "B" "BB" "B"
```
You can change how R displays your slot output by rewriting `print` to look like `slot_display`. Then R would print the output in our tidy format. However, this method would have negative side effects. You do not want R to call `slot_display` when it prints a data frame, a numerical vector, or any other object.
Fortunately, `print` is not a normal function; it is a *generic* function. This means that `print` is written in a way that lets it do different things in different cases. You’ve already seen this behavior in action (although you may not have realized it). `print` did one thing when we looked at the unclassed version of `num`:
```
num <- 1000000000
print(num)
## 1000000000
```
and a different thing when we gave `num` a class:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
Take a look at the code inside `print` to see how it does this. You may imagine that `print` looks up the class attribute of its input and then uses an `if` tree to pick which output to display. If this occurred to you, great job! `print` does something very similar, but much simpler.
10\.4 Methods
-------------
When you call `print`, `print` calls a special function, `UseMethod`:
```
print
## function (x, ...)
## UseMethod("print")
## <bytecode: 0x7ffee4c62f80>
## <environment: namespace:base>
```
`UseMethod` examines the class of the input that you provide for the first argument of `print`, and then passes all of your arguments to a new function designed to handle that class of input. For example, when you give `print` a POSIXct object, `UseMethod` will pass all of `print`’s arguments to `print.POSIXct`. R will then run `print.POSIXct` and return the results:
```
print.POSIXct
## function (x, ...)
## {
## max.print <- getOption("max.print", 9999L)
## if (max.print < length(x)) {
## print(format(x[seq_len(max.print)], usetz = TRUE), ...)
## cat(" [ reached getOption(\"max.print\") -- omitted",
## length(x) - max.print, "entries ]\n")
## }
## else print(format(x, usetz = TRUE), ...)
## invisible(x)
## }
## <bytecode: 0x7fa948f3d008>
## <environment: namespace:base>
```
If you give `print` a factor object, `UseMethod` will pass all of `print`’s arguments to `print.factor`. R will then run `print.factor` and return the results:
```
print.factor
## function (x, quote = FALSE, max.levels = NULL, width = getOption("width"),
## ...)
## {
## ord <- is.ordered(x)
## if (length(x) == 0L)
## cat(if (ord)
## "ordered"
## ...
## drop <- n > maxl
## cat(if (drop)
## paste(format(n), ""), T0, paste(if (drop)
## c(lev[1L:max(1, maxl - 1)], "...", if (maxl > 1) lev[n])
## else lev, collapse = colsep), "\n", sep = "")
## }
## invisible(x)
## }
## <bytecode: 0x7fa94a64d470>
## <environment: namespace:base>
```
`print.POSIXct` and `print.factor` are called *methods* of `print`. By themselves, `print.POSIXct` and `print.factor` work like regular R functions. However, each was written specifically so `UseMethod` could call it to handle a specific class of `print` input.
Notice that `print.POSIXct` and `print.factor` do two different things (also notice that I abridged the middle of `print.factor`—it is a long function). This is how `print` manages to do different things in different cases. `print` calls `UseMethod`, which calls a specialized method based on the class of `print`’s first argument.
You can see which methods exist for a generic function by calling `methods` on the function. For example, `print` has almost 200 methods (which gives you an idea of how many classes exist in R):
```
methods(print)
## [1] print.acf*
## [2] print.anova
## [3] print.aov*
## ...
## [176] print.xgettext*
## [177] print.xngettext*
## [178] print.xtabs*
##
## Nonvisible functions are asterisked
```
This system of generic functions, methods, and class\-based dispatch is known as S3 because it originated in the third version of S, the programming language that would evolve into S\-PLUS and R. Many common R functions are S3 generics that work with a set of class methods. For example, `summary` and `head` also call `UseMethod`. More basic functions, like `c`, `+`, `-`, `<` and others also behave like generic functions, although they call `.Primitive` instead of `UseMethod`.
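You can see this for yourself by printing one of these functions at the command line:
```
`+`
## function (e1, e2)  .Primitive("+")
```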
The S3 system allows R functions to behave in different ways for different classes. You can use S3 to format your slot output. First, give your output its own class. Then write a print method for that class. To do this efficiently, you will need to know a little about how `UseMethod` selects a method function to use.
### 10\.4\.1 Method Dispatch
`UseMethod` uses a very simple system to match methods to functions.
Every S3 method has a two\-part name. The first part of the name will refer to the function that the method works with. The second part will refer to the class. These two parts will be separated by a period. So for example, the print method that works with functions will be called `print.function`. The summary method that works with matrices will be called `summary.matrix`. And so on.
When `UseMethod` needs to call a method, it searches for an R function with the correct S3\-style name. The function does not have to be special in any way; it just needs to have the correct name.
You can participate in this system by writing your own function and giving it a valid S3\-style name. For example, let’s give `one_play` a class of its own. It doesn’t matter what you call the class; R will store any character string in the class attribute:
```
class(one_play) <- "slots"
```
Now let’s write an S3 print method for the `slots` class. The method doesn’t need to do anything special—it doesn’t even need to print `one_play`. But it *does* need to be named `print.slots`; otherwise `UseMethod` will not find it. The method should also take the same arguments as `print`; otherwise, R will give an error when it tries to pass the arguments to `print.slots`:
```
args(print)
## function (x, ...)
## NULL
print.slots <- function(x, ...) {
cat("I'm using the print.slots method")
}
```
Does our method work? Yes, and not only that; R uses the print method to display the contents of `one_play`. This method isn’t very useful, so I’m going to remove it. You’ll have a chance to write a better one in a minute:
```
print(one_play)
## I'm using the print.slots method
one_play
## I'm using the print.slots method
rm(print.slots)
```
Some R objects have multiple classes. For example, the output of `Sys.time` has two classes. Which class will `UseMethod` use to find a print method?
```
now <- Sys.time()
attributes(now)
## $class
## [1] "POSIXct" "POSIXt"
```
`UseMethod` will first look for a method that matches the first class listed in the object’s class vector. If `UseMethod` cannot find one, it will then look for the method that matches the second class (and so on if there are more classes in an object’s class vector).
If you give `print` an object whose class or classes do not have a print method, `UseMethod` will call `print.default`, a special method written to handle general cases.
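Here is a minimal sketch of this dispatch order, using a made-up generic named `greet` and made-up classes:
```
greet <- function(x) UseMethod("greet")
greet.default <- function(x) cat("Hello, object.\n")
greet.cat <- function(x) cat("Meow.\n")

pet <- structure("felix", class = c("lion", "cat"))
greet(pet) # no greet.lion method exists, so UseMethod tries "cat" next
## Meow.

greet(1) # no method matches a number, so UseMethod falls back to greet.default
## Hello, object.
```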
Let’s use this system to write a better print method for the slot machine output.
**Exercise 10\.2 (Make a Print Method)** Write a new print method for the slots class. The method should call `slot_display` to return well\-formatted slot\-machine output.
What name must you use for this method?
*Solution.* It is surprisingly easy to write a good `print.slots` method because we’ve already done all of the hard work when we wrote `slot_display`. For example, the following method will work. Just make sure the method is named `print.slots` so `UseMethod` can find it, and make sure that it takes the same arguments as `print` so `UseMethod` can pass those arguments to `print.slots` without any trouble:
```
print.slots <- function(x, ...) {
slot_display(x)
}
```
Now R will automatically use `slot_display` to display objects of class `slots` (and only objects of class `slots`):
```
one_play
## B 0 B
## $0
```
Let’s ensure that every piece of slot machine output has the `slots` class.
**Exercise 10\.3 (Add a Class)** Modify the `play` function so it assigns `slots` to the `class` attribute of its output:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
```
*Solution.* You can set the `class` attribute of the output at the same time that you set the `symbols` attribute. Just add `class = "slots"` to the `structure` call:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
Now each of our slot machine plays will have the class `slots`:
```
class(play())
## "slots"
```
As a result, R will display them in the correct slot\-machine format:
```
play()
## BB BB BBB
## $5
play()
## BB 0 0
## $0
```
10\.5 Classes
-------------
You can use the S3 system to make a robust new class of objects in R. Then R will treat objects of your class in a consistent, sensible manner. To make a class:
* Choose a name for your class.
* Assign each instance of your class a `class` attribute.
* Write class methods for any generic function likely to use objects of your class.
Many R packages are based on classes that have been built in a similar manner. While this work is simple, it may not be easy. For example, consider how many methods exist for predefined classes.
You can call `methods` on a class with the `class` argument, which takes a character string. `methods` will return every method written for the class. Notice that `methods` will not be able to show you methods that come in an unloaded R package:
```
methods(class = "factor")
## [1] [.factor [[.factor
## [3] [[<-.factor [<-.factor
## [5] all.equal.factor as.character.factor
## [7] as.data.frame.factor as.Date.factor
## [9] as.list.factor as.logical.factor
## [11] as.POSIXlt.factor as.vector.factor
## [13] droplevels.factor format.factor
## [15] is.na<-.factor length<-.factor
## [17] levels<-.factor Math.factor
## [19] Ops.factor plot.factor*
## [21] print.factor relevel.factor*
## [23] relist.factor* rep.factor
## [25] summary.factor Summary.factor
## [27] xtfrm.factor
##
## Nonvisible functions are asterisked
```
This output indicates how much work is required to create a robust, well\-behaved class. You will usually need to write a `class` method for every basic R operation.
Consider two challenges that you will face right away. First, R drops attributes (like `class`) when it combines objects into a vector:
```
play1 <- play()
play1
## B BBB BBB
## $5
play2 <- play()
play2
## 0 B 0
## $0
c(play1, play2)
## [1] 5 0
```
Here, R stops using `print.slots` to display the vector because the vector `c(play1, play2)` no longer has a `slots` class attribute.
Next, R will drop the attributes of an object (like `class`) when you subset the object:
```
play1[1]
## [1] 5
```
You can avoid this behavior by writing a `c.slots` method and a `[.slots` method, but then difficulties will quickly accrue. How would you combine the `symbols` attributes of multiple plays into a vector of symbols attributes? How would you change `print.slots` to handle vectors of outputs? These challenges are open for you to explore. However, you will usually not have to attempt this type of large\-scale programming as a data scientist.
In our case, it is very handy to let `slots` objects revert to single prize values when we combine groups of them together into a vector.
10\.6 S3 and Debugging
----------------------
S3 can be annoying if you are trying to understand R functions. It is difficult to tell what a function does if its code body contains a call to `UseMethod`. Now that you know that `UseMethod` calls a class\-specific method, you can search for and examine the method directly. It will be a function whose name follows the `<function.class>` syntax, or possibly `<function.default>`. You can also use the `methods` function to see what methods are associated with a function or a class.
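For example, you could track down `print`’s factor method in any of these ways:
```
getS3method("print", "factor") # examine one specific method
methods(print) # list every method written for a generic
methods(class = "factor") # list every method written for a class
```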
10\.7 S4 and R5
---------------
R also contains two other systems that create class specific behavior. These are known as S4 and R5 (or reference classes). Each of these systems is much harder to use than S3, and perhaps as a consequence, more rare. However, they offer safeguards that S3 does not. If you’d like to learn more about these systems, including how to write and use your own generic functions, I recommend the book [*Advanced R Programming*](http://adv-r.had.co.nz/) by Hadley Wickham.
10\.8 Summary
-------------
Values are not the only place to store information in R, and functions are not the only way to create unique behavior. You can also do both of these things with R’s S3 system. The S3 system provides a simple way to create object\-specific behavior in R. In other words, it is R’s version of object\-oriented programming (OOP). The system is implemented by generic functions. These functions examine the class attribute of their input and call a class\-specific method to generate output. Many S3 methods will look for and use additional information that is stored in an object’s attributes. Many common R functions are S3 generics.
R’s S3 system is more helpful for the tasks of computer science than the tasks of data science, but understanding S3 can help you troubleshoot your work in R as a data scientist.
You now know quite a bit about how to write R code that performs custom tasks, but how could you repeat these tasks? As a data scientist, you will often repeat tasks, sometimes thousands or even millions of times. Why? Because repetition lets you simulate results and estimate probabilities. [Loops](loops.html#loops) will show you how to automate repetition with R’s `for` and `while` functions. You’ll use `for` to simulate various slot machine plays and to calculate the payout rate of your slot machine.
10\.1 The S3 System
-------------------
S3 refers to a class system built into R. The system governs how R handles objects of different classes. Certain R functions will look up an object’s S3 class, and then behave differently in response.
The `print` function is like this. When you print a numeric vector, `print` will display a number:
```
num <- 1000000000
print(num)
## 1000000000
```
But if you give that number the S3 class `POSIXct` followed by `POSIXt`, `print` will display a time:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
If you use objects with classes—and you do—you will run into R’s S3 system. S3 behavior can seem odd at first, but is easy to predict once you are familiar with it.
R’s S3 system is built around three components: attributes (especially the `class` attribute), generic functions, and methods.
10\.2 Attributes
----------------
In [Attributes](r-objects.html#attributes), you learned that many R objects come with attributes, pieces of extra information that are given a name and appended to the object. Attributes do not affect the values of the object, but stick to the object as a type of metadata that R can use to handle the object. For example, a data frame stores its row and column names as attributes. Data frames also store their class, `"data.frame"`, as an attribute.
You can see an object’s attributes with `attribute`. If you run `attribute` on the `deck` data frame that you created in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards), you will see:
```
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
## [20] 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
## [37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
```
R comes with many helper functions that let you set and access the most common attributes used in R. You’ve already met the `names`, `dim`, and `class` functions, which each work with an eponymously named attribute. However, R also has `row.names`, `levels`, and many other attribute\-based helper functions. You can use any of these functions to retrieve an attribute’s value:
```
row.names(deck)
## [1] "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13"
## [14] "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26"
## [27] "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39"
## [40] "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52"
```
or to change an attribute’s value:
```
row.names(deck) <- 101:152
```
or to give an object a new attribute altogether:
```
levels(deck) <- c("level 1", "level 2", "level 3")
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117
## [18] 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134
## [35] 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151
## [52] 152
##
## $levels
## [1] "level 1" "level 2" "level 3"
```
R is very laissez faire when it comes to attributes. It will let you add any attributes that you like to an object (and then it will usually ignore them). The only time R will complain is when a function needs to find an attribute and it is not there.
You can add any general attribute to an object with `attr`; you can also use `attr` to look up the value of any attribute of an object. Let’s see how this works with `one_play`, the result of playing our slot machine one time:
```
one_play <- play()
one_play
## 0
attributes(one_play)
## NULL
```
`attr` takes two arguments: an R object and the name of an attribute (as a character string). To give the R object an attribute of the specified name, save a value to the output of `attr`. Let’s give `one_play` an attribute named `symbols` that contains a vector of character strings:
```
attr(one_play, "symbols") <- c("B", "0", "B")
attributes(one_play)
## $symbols
## [1] "B" "0" "B"
```
To look up the value of any attribute, give `attr` an R object and the name of the attribute you would like to look up:
```
attr(one_play, "symbols")
## "B" "0" "B"
```
If you give an attribute to an atomic vector, like `one_play`, R will usually display the attribute beneath the vector’s values. However, if the attribute changes the vector’s class, R may display all of the information in the vector in a new way (as we saw with `POSIXct` objects):
```
one_play
## [1] 0
## attr(,"symbols")
## [1] "B" "0" "B"
```
R will generally ignore an object’s attributes unless you give them a name that an R function looks for, like `names` or `class`. For example, R will ignore the `symbols` attribute of `one_play` as you manipulate `one_play`:
```
one_play + 1
## 1
## attr(,"symbols")
## "B" "0" "B"
```
**Exercise 10\.1 (Add an Attribute)** Modify `play` to return a prize that contains the symbols associated with it as an attribute named `symbols`. Remove the redundant call to `print(symbols)`:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
*Solution.* You can create a new version of `play` by capturing the output of `score(symbols)` and assigning an attribute to it. `play` can then return the enhanced version of the output:
```
play <- function() {
symbols <- get_symbols()
prize <- score(symbols)
attr(prize, "symbols") <- symbols
prize
}
```
Now `play` returns both the prize and the symbols associated with the prize. The results may not look pretty, but the symbols stick with the prize when we copy it to a new object. We can work on tidying up the display in a minute:
```
play()
## [1] 0
## attr(,"symbols")
## [1] "B" "BB" "0"
two_play <- play()
two_play
## [1] 0
## attr(,"symbols")
## [1] "0" "B" "0"
```
You can also generate a prize and set its attributes in one step with the `structure` function. `structure` creates an object with a set of attributes. The first argument of `structure` should be an R object or set of values, and the remaining arguments should be named attributes for `structure` to add to the object. You can give these arguments any argument names you like. `structure` will add the attributes to the object under the names that you provide as argument names:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
three_play <- play()
three_play
## 0
## attr(,"symbols")
## "0" "BB" "B"
```
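As a standalone illustration (the values and the `note` attribute name here are invented), `structure` can attach any number of attributes in a single call:
```
x <- structure(1:3, names = c("a", "b", "c"), note = "built with structure")
x
## a b c
## 1 2 3
## attr(,"note")
## [1] "built with structure"
```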
Now that your `play` output contains a `symbols` attribute, what can you do with it? You can write your own functions that look up and use the attribute. For example, the following function will look up `one_play`’s `symbols` attribute and use it to display `one_play` in a pretty manner. We will use this function to display our slot results, so let’s take a moment to study what it does:
```
slot_display <- function(prize){
# extract symbols
symbols <- attr(prize, "symbols")
# collapse symbols into single string
symbols <- paste(symbols, collapse = " ")
# combine symbol with prize as a character string
# \n is special escape sequence for a new line (i.e. return or enter)
string <- paste(symbols, prize, sep = "\n$")
# display character string in console without quotes
cat(string)
}
slot_display(one_play)
## B 0 B
## $0
```
The function expects an object like `one_play` that has both a numerical value and a `symbols` attribute. The first line of the function will look up the value of the `symbols` attribute and save it as an object named `symbols`. Let’s make an example `symbols` object so we can see what the rest of the function does. We can use `one_play`’s `symbols` attribute to do the job. `symbols` will be a vector of three character strings:
```
symbols <- attr(one_play, "symbols")
symbols
## "B" "0" "B"
```
Next, `slot_display` uses `paste` to collapse the three strings in `symbols` into a single character string. `paste` collapses a vector of character strings into a single string when you give it the `collapse` argument. `paste` will use the value of `collapse` to separate the formerly distinct strings. Hence, `symbols` becomes `B 0 B`, the three strings separated by a space:
```
symbols <- paste(symbols, collapse = " ")
symbols
## "B 0 B"
```
Our function then uses `paste` in a new way to combine `symbols` with the value of `prize`. `paste` combines separate objects into a character string when you give it a `sep` argument. For example, here `paste` will combine the string in `symbols`, `B 0 B`, with the number in `prize`, 0\. `paste` will use the value of the `sep` argument to separate the inputs in the new string. Here, that value is `\n$`, so our result will look like `"B 0 B\n$0"`:
```
prize <- one_play
string <- paste(symbols, prize, sep = "\n$")
string
## "B 0 B\n$0"
```
The last line of `slot_display` calls `cat` on the new string. `cat` is like `print`; it displays its input at the command line. However, `cat` does not surround its output with quotation marks. `cat` also replaces every `\n` with a new line or line break. The result is what we see. Notice that it looks just how I suggested that our `play` output should look in [Programs](programs.html#programs):
```
cat(string)
## B 0 B
## $0
```
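The difference between the two functions is easy to see side by side (plain base R):
```
print("B 0 B\n$0") # print keeps the quotes and shows \n literally
## [1] "B 0 B\n$0"
cat("B 0 B\n$0") # cat drops the quotes and renders \n as a line break
## B 0 B
## $0
```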
You can use `slot_display` to manually clean up the output of `play`:
```
slot_display(play())
## C B 0
## $2
slot_display(play())
## 7 0 BB
## $0
```
This method of cleaning the output requires you to manually intervene in your R session (to call `slot_display`). There is a function that you can use to automatically clean up the output of `play` *each* time it is displayed. This function is `print`, and it is a *generic function*.
10\.3 Generic Functions
-----------------------
R uses `print` more often than you may think; R calls `print` each time it displays a result in your console window. This call happens in the background, so you do not notice it; but the call explains how output makes it to the console window (recall that `print` always prints its argument in the console window). This `print` call also explains why the output of `print` always matches what you see when you display an object at the command line:
```
print(pi)
## 3.141593
pi
## 3.141593
print(head(deck))
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
print(play())
## 5
## attr(,"symbols")
## "B" "BB" "B"
play()
## 5
## attr(,"symbols")
## "B" "BB" "B"
```
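One related base R detail worth knowing: a function can opt out of this automatic print call by returning its result with `invisible` (`print.POSIXct`, which you will meet in the next section, ends with exactly such an `invisible(x)` call):
```
f <- function() invisible(42)
f() # the automatic print call is suppressed
print(f()) # but an explicit print still displays the value
## [1] 42
```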
You can change how R displays your slot output by rewriting `print` to look like `slot_display`. Then R would print the output in our tidy format. However, this method would have negative side effects. You do not want R to call `slot_display` when it prints a data frame, a numerical vector, or any other object.
Fortunately, `print` is not a normal function; it is a *generic* function. This means that `print` is written in a way that lets it do different things in different cases. You’ve already seen this behavior in action (although you may not have realized it). `print` did one thing when we looked at the unclassed version of `num`:
```
num <- 1000000000
print(num)
## 1000000000
```
and a different thing when we gave `num` a class:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
Take a look at the code inside `print` to see how it does this. You may imagine that `print` looks up the class attribute of its input and then uses an `if` tree to pick which output to display. If this occurred to you, great job! `print` does something very similar, but much simpler.
10\.4 Methods
-------------
When you call `print`, `print` calls a special function, `UseMethod`:
```
print
## function (x, ...)
## UseMethod("print")
## <bytecode: 0x7ffee4c62f80>
## <environment: namespace:base>
```
`UseMethod` examines the class of the input that you provide for the first argument of `print`, and then passes all of your arguments to a new function designed to handle that class of input. For example, when you give `print` a POSIXct object, `UseMethod` will pass all of `print`’s arguments to `print.POSIXct`. R will then run `print.POSIXct` and return the results:
```
print.POSIXct
## function (x, ...)
## {
## max.print <- getOption("max.print", 9999L)
## if (max.print < length(x)) {
## print(format(x[seq_len(max.print)], usetz = TRUE), ...)
## cat(" [ reached getOption(\"max.print\") -- omitted",
## length(x) - max.print, "entries ]\n")
## }
## else print(format(x, usetz = TRUE), ...)
## invisible(x)
## }
## <bytecode: 0x7fa948f3d008>
## <environment: namespace:base>
```
If you give `print` a factor object, `UseMethod` will pass all of `print`’s arguments to `print.factor`. R will then run `print.factor` and return the results:
```
print.factor
## function (x, quote = FALSE, max.levels = NULL, width = getOption("width"),
## ...)
## {
## ord <- is.ordered(x)
## if (length(x) == 0L)
## cat(if (ord)
## "ordered"
## ...
## drop <- n > maxl
## cat(if (drop)
## paste(format(n), ""), T0, paste(if (drop)
## c(lev[1L:max(1, maxl - 1)], "...", if (maxl > 1) lev[n])
## else lev, collapse = colsep), "\n", sep = "")
## }
## invisible(x)
## }
## <bytecode: 0x7fa94a64d470>
## <environment: namespace:base>
```
`print.POSIXct` and `print.factor` are called *methods* of `print`. By themselves, `print.POSIXct` and `print.factor` work like regular R functions. However, each was written specifically so `UseMethod` could call it to handle a specific class of `print` input.
Notice that `print.POSIXct` and `print.factor` do two different things (also notice that I abridged the middle of `print.factor`—it is a long function). This is how `print` manages to do different things in different cases. `print` calls `UseMethod`, which calls a specialized method based on the class of `print`’s first argument.
You can see which methods exist for a generic function by calling `methods` on the function. For example, `print` has almost 200 methods (which gives you an idea of how many classes exist in R):
```
methods(print)
## [1] print.acf*
## [2] print.anova
## [3] print.aov*
## ...
## [176] print.xgettext*
## [177] print.xngettext*
## [178] print.xtabs*
##
## Nonvisible functions are asterisked
```
This system of generic functions, methods, and class\-based dispatch is known as S3 because it originated in the third version of S, the programming language that would evolve into S\-PLUS and R. Many common R functions are S3 generics that work with a set of class methods. For example, `summary` and `head` also call `UseMethod`. More basic functions, like `c`, `+`, `-`, `<`, and others also behave like generic functions, although they call `.Primitive` instead of `UseMethod`.
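You can verify the first claim yourself: the entire body of an S3 generic is usually nothing more than the `UseMethod` call (base R):
```
body(summary)
## UseMethod("summary")
body(head)
## UseMethod("head")
```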
The S3 system allows R functions to behave in different ways for different classes. You can use S3 to format your slot output. First, give your output its own class. Then write a print method for that class. To do this efficiently, you will need to know a little about how `UseMethod` selects a method function to use.
### 10\.4\.1 Method Dispatch
`UseMethod` uses a very simple system to match methods to functions.
Every S3 method has a two\-part name. The first part of the name will refer to the function that the method works with. The second part will refer to the class. These two parts will be separated by a period. So for example, the print method that works with functions will be called `print.function`. The summary method that works with matrices will be called `summary.matrix`. And so on.
When `UseMethod` needs to call a method, it searches for an R function with the correct S3\-style name. The function does not have to be special in any way; it just needs to have the correct name.
You can participate in this system by writing your own function and giving it a valid S3\-style name. For example, let’s give `one_play` a class of its own. It doesn’t matter what you call the class; R will store any character string in the class attribute:
```
class(one_play) <- "slots"
```
Now let’s write an S3 print method for the `slots` class. The method doesn’t need to do anything special—it doesn’t even need to print `one_play`. But it *does* need to be named `print.slots`; otherwise `UseMethod` will not find it. The method should also take the same arguments as `print`; otherwise, R will give an error when it tries to pass the arguments to `print.slots`:
```
args(print)
## function (x, ...)
## NULL
print.slots <- function(x, ...) {
cat("I'm using the print.slots method")
}
```
Does our method work? Yes, and not only that; R uses the print method to display the contents of `one_play`. This method isn’t very useful, so I’m going to remove it. You’ll have a chance to write a better one in a minute:
```
print(one_play)
## I'm using the print.slots method
one_play
## I'm using the print.slots method
rm(print.slots)
```
Some R objects have multiple classes. For example, the output of `Sys.time` has two classes. Which class will `UseMethod` use to find a print method?
```
now <- Sys.time()
attributes(now)
## $class
## [1] "POSIXct" "POSIXt"
```
`UseMethod` will first look for a method that matches the first class listed in the object’s class vector. If `UseMethod` cannot find one, it will then look for the method that matches the second class (and so on if there are more classes in an object’s class vector).
If you give `print` an object whose class or classes do not have a print method, `UseMethod` will call `print.default`, a special method written to handle general cases.
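A throwaway sketch makes the search order concrete (the classes `first` and `second` are invented for this example):
```
obj <- structure(0, class = c("first", "second"))
print.second <- function(x, ...) cat("found print.second\n")
obj # no print.first exists yet, so dispatch falls through to print.second
## found print.second
print.first <- function(x, ...) cat("found print.first\n")
obj # now the first class in the vector wins
## found print.first
rm(print.first, print.second)
obj # with no matching method left, print.default takes over
## [1] 0
## attr(,"class")
## [1] "first" "second"
```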
Let’s use this system to write a better print method for the slot machine output.
**Exercise 10\.2 (Make a Print Method)** Write a new print method for the slots class. The method should call `slot_display` to return well\-formatted slot\-machine output.
What name must you use for this method?
*Solution.* It is surprisingly easy to write a good `print.slots` method because we’ve already done all of the hard work when we wrote `slot_display`. For example, the following method will work. Just make sure the method is named `print.slots` so `UseMethod` can find it, and make sure that it takes the same arguments as `print` so `UseMethod` can pass those arguments to `print.slots` without any trouble:
```
print.slots <- function(x, ...) {
slot_display(x)
}
```
Now R will automatically use `slot_display` to display objects of class `slots` (and only objects of class `slots`):
```
one_play
## B 0 B
## $0
```
Let’s ensure that every piece of slot machine output has the `slots` class.
**Exercise 10\.3 (Add a Class)** Modify the `play` function so it assigns `slots` to the `class` attribute of its output:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
```
*Solution.* You can set the `class` attribute of the output at the same time that you set the `symbols` attribute. Just add `class = "slots"` to the `structure` call:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
Now each of our slot machine plays will have the class `slots`:
```
class(play())
## "slots"
```
As a result, R will display them in the correct slot\-machine format:
```
play()
## BB BB BBB
## $5
play()
## BB 0 0
## $0
```
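You can also confirm the class tag programmatically with `inherits`, a base R function that checks whether an object’s class vector contains a given class:
```
inherits(play(), "slots")
## [1] TRUE
```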
10\.5 Classes
-------------
You can use the S3 system to make a robust new class of objects in R. Then R will treat objects of your class in a consistent, sensible manner. To make a class:
* Choose a name for your class.
* Assign each instance of your class a `class` attribute.
* Write class methods for any generic function likely to use objects of your class (see the sketch below).
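As a compact, hedged sketch of this recipe (the `coin` class and its behavior are invented for illustration):
```
flip <- function() {
  structure(sample(c("heads", "tails"), 1), class = "coin") # step 2: tag the class
}
print.coin <- function(x, ...) {
  cat("The coin shows", unclass(x), "\n") # step 3: a method for a common generic
}
flip()
## The coin shows heads
```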
Many R packages are based on classes that have been built in a similar manner. While this work is simple, it may not be easy. For example, consider how many methods exist for predefined classes.
You can call `methods` on a class with the `class` argument, which takes a character string. `methods` will return every method written for the class. Notice that `methods` will not be able to show you methods that come in an unloaded R package:
```
methods(class = "factor")
## [1] [.factor [[.factor
## [3] [[<-.factor [<-.factor
## [5] all.equal.factor as.character.factor
## [7] as.data.frame.factor as.Date.factor
## [9] as.list.factor as.logical.factor
## [11] as.POSIXlt.factor as.vector.factor
## [13] droplevels.factor format.factor
## [15] is.na<-.factor length<-.factor
## [17] levels<-.factor Math.factor
## [19] Ops.factor plot.factor*
## [21] print.factor relevel.factor*
## [23] relist.factor* rep.factor
## [25] summary.factor Summary.factor
## [27] xtfrm.factor
##
## Nonvisible functions are asterisked
```
This output indicates how much work is required to create a robust, well\-behaved class. You will usually need to write a class method for every basic R operation.
Consider two challenges that you will face right away. First, R drops attributes (like `class`) when it combines objects into a vector:
```
play1 <- play()
play1
## B BBB BBB
## $5
play2 <- play()
play2
## 0 B 0
## $0
c(play1, play2)
## [1] 5 0
```
Here, R stops using `print.slots` to display the vector because the vector `c(play1, play2)` no longer has a `slots` class attribute.
Next, R will drop the attributes of an object (like `class`) when you subset the object:
```
play1[1]
## [1] 5
```
You can avoid this behavior by writing a `c.slots` method and a `[.slots` method, but then difficulties will quickly accrue. How would you combine the `symbols` attributes of multiple plays into a vector of symbols attributes? How would you change `print.slots` to handle vectors of outputs? These challenges are open for you to explore. However, you will usually not have to attempt this type of large\-scale programming as a data scientist.
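If you do want to experiment, a minimal sketch of one such method might look like this (an illustration only, not part of the book’s slot machine; it side\-steps the harder questions above by keeping a single `symbols` attribute):
```
`[.slots` <- function(x, i) {
  # subset the bare values, then reattach the attributes that `[` would drop
  structure(unclass(x)[i], symbols = attr(x, "symbols"), class = "slots")
}
```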
In our case, it is very handy to let `slots` objects revert to single prize values when we combine groups of them together into a vector.
10\.6 S3 and Debugging
----------------------
S3 can be annoying if you are trying to understand R functions. It is difficult to tell what a function does if its code body contains a call to `UseMethod`. Now that you know that `UseMethod` calls a class\-specific method, you can search for and examine the method directly. It will be a function whose name follows the `<function.class>` syntax, or possibly `<function.default>`. You can also use the `methods` function to see what methods are associated with a function or a class.
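Two built\-in helpers make that search easier (both ship with R’s utils package; the `print`/`factor` pair below is just an example):
```
getS3method("print", "factor") # fetch a method's code, even a non-exported one
## function (x, quote = FALSE, max.levels = NULL, width = getOption("width"),
## ...
methods("summary") # list every method written for a generic
```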
10\.7 S4 and R5
---------------
R also contains two other systems that create class\-specific behavior. These are known as S4 and R5 (or reference classes). Each of these systems is much harder to use than S3, and perhaps as a consequence, more rarely used. However, they offer safeguards that S3 does not. If you’d like to learn more about these systems, including how to write and use your own generic functions, I recommend the book [*Advanced R Programming*](http://adv-r.had.co.nz/) by Hadley Wickham.
10\.8 Summary
-------------
Values are not the only place to store information in R, and functions are not the only way to create unique behavior. You can also do both of these things with R’s S3 system. The S3 system provides a simple way to create object\-specific behavior in R. In other words, it is R’s version of object\-oriented programming (OOP). The system is implemented by generic functions. These functions examine the class attribute of their input and call a class\-specific method to generate output. Many S3 methods will look for and use additional information that is stored in an object’s attributes. Many common R functions are S3 generics.
R’s S3 system is more helpful for the tasks of computer science than the tasks of data science, but understanding S3 can help you troubleshoot your work in R as a data scientist.
You now know quite a bit about how to write R code that performs custom tasks, but how could you repeat these tasks? As a data scientist, you will often repeat tasks, sometimes thousands or even millions of times. Why? Because repetition lets you simulate results and estimate probabilities. [Loops](loops.html#loops) will show you how to automate repetition with R’s `for` and `while` functions. You’ll use `for` to simulate various slot machine plays and to calculate the payout rate of your slot machine.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/s3.html |
10 S3
=====
You may have noticed that your slot machine results do not look the way I promised they would. I suggested that the slot machine would display its results like this:
```
play()
## 0 0 DD
## $0
```
But the current machine displays its results in a less pretty format:
```
play()
## "0" "0" "DD"
## 0
```
Moreover, the slot machine uses a hack to display symbols (we call `print` from within `play`). As a result, the symbols do not follow your prize output if you save it:
```
one_play <- play()
## "B" "0" "B"
one_play
## 0
```
You can fix both of these problems with R’s S3 system.
10\.1 The S3 System
-------------------
S3 refers to a class system built into R. The system governs how R handles objects of different classes. Certain R functions will look up an object’s S3 class, and then behave differently in response.
The `print` function is like this. When you print a numeric vector, `print` will display a number:
```
num <- 1000000000
print(num)
## 1000000000
```
But if you give that number the S3 class `POSIXct` followed by `POSIXt`, `print` will display a time:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
If you use objects with classes—and you do—you will run into R’s S3 system. S3 behavior can seem odd at first, but is easy to predict once you are familiar with it.
R’s S3 system is built around three components: attributes (especially the `class` attribute), generic functions, and methods.
10\.2 Attributes
----------------
In [Attributes](r-objects.html#attributes), you learned that many R objects come with attributes, pieces of extra information that are given a name and appended to the object. Attributes do not affect the values of the object, but stick to the object as a type of metadata that R can use to handle the object. For example, a data frame stores its row and column names as attributes. Data frames also store their class, `"data.frame"`, as an attribute.
You can see an object’s attributes with `attribute`. If you run `attribute` on the `deck` data frame that you created in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards), you will see:
```
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
## [20] 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
## [37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
```
R comes with many helper functions that let you set and access the most common attributes used in R. You’ve already met the `names`, `dim`, and `class` functions, which each work with an eponymously named attribute. However, R also has `row.names`, `levels`, and many other attribute\-based helper functions. You can use any of these functions to retrieve an attribute’s value:
```
row.names(deck)
## [1] "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13"
## [14] "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26"
## [27] "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39"
## [40] "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52"
```
or to change an attribute’s value:
```
row.names(deck) <- 101:152
```
or to give an object a new attribute altogether:
```
levels(deck) <- c("level 1", "level 2", "level 3")
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117
## [18] 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134
## [35] 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151
## [52] 152
##
## $levels
## [1] "level 1" "level 2" "level 3"
```
R is very laissez faire when it comes to attributes. It will let you add any attributes that you like to an object (and then it will usually ignore them). The only time R will complain is when a function needs to find an attribute and it is not there.
You can add any general attribute to an object with `attr`; you can also use `attr` to look up the value of any attribute of an object. Let’s see how this works with `one_play`, the result of playing our slot machine one time:
```
one_play <- play()
one_play
## 0
attributes(one_play)
## NULL
```
`attr` takes two arguments: an R object and the name of an attribute (as a character string). To give the R object an attribute of the specified name, save a value to the output of `attr`. Let’s give `one_play` an attribute named `symbols` that contains a vector of character strings:
```
attr(one_play, "symbols") <- c("B", "0", "B")
attributes(one_play)
## $symbols
## [1] "B" "0" "B"
```
To look up the value of any attribute, give `attr` an R object and the name of the attribute you would like to look up:
```
attr(one_play, "symbols")
## "B" "0" "B"
```
If you give an attribute to an atomic vector, like `one_play`, R will usually display the attribute beneath the vector’s values. However, if the attribute changes the vector’s class, R may display all of the information in the vector in a new way (as we saw with `POSIXct` objects):
```
one_play
## [1] 0
## attr(,"symbols")
## [1] "B" "0" "B"
```
R will generally ignore an object’s attributes unless you give them a name that an R function looks for, like `names` or `class`. For example, R will ignore the `symbols` attribute of `one_play` as you manipulate `one_play`:
```
one_play + 1
## 1
## attr(,"symbols")
## "B" "0" "B"
```
**Exercise 10\.1 (Add an Attribute)** Modify `play` to return a prize that contains the symbols associated with it as an attribute named `symbols`. Remove the redundant call to `print(symbols)`:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
*Solution.* You can create a new version of `play` by capturing the output of `score(symbols)` and assigning an attribute to it. `play` can then return the enhanced version of the output:
```
play <- function() {
symbols <- get_symbols()
prize <- score(symbols)
attr(prize, "symbols") <- symbols
prize
}
```
Now `play` returns both the prize and the symbols associated with the prize. The results may not look pretty, but the symbols stick with the prize when we copy it to a new object. We can work on tidying up the display in a minute:
```
play()
## [1] 0
## attr(,"symbols")
## [1] "B" "BB" "0"
two_play <- play()
two_play
## [1] 0
## attr(,"symbols")
## [1] "0" "B" "0"
```
You can also generate a prize and set its attributes in one step with the `structure` function. `structure` creates an object with a set of attributes. The first argument of `structure` should be an R object or set of values, and the remaining arguments should be named attributes for `structure` to add to the object. You can give these arguments any argument names you like. `structure` will add the attributes to the object under the names that you provide as argument names:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
three_play <- play()
three_play
## 0
## attr(,"symbols")
## "0" "BB" "B"
```
Now that your `play` output contains a `symbols` attribute, what can you do with it? You can write your own functions that lookup and use the attribute. For example, the following function will look up `one_play`’s `symbols` attribute and use it to display `one_play` in a pretty manner. We will use this function to display our slot results, so let’s take a moment to study what it does:
```
slot_display <- function(prize){
# extract symbols
symbols <- attr(prize, "symbols")
# collapse symbols into single string
symbols <- paste(symbols, collapse = " ")
# combine symbol with prize as a character string
# \n is special escape sequence for a new line (i.e. return or enter)
string <- paste(symbols, prize, sep = "\n$")
# display character string in console without quotes
cat(string)
}
slot_display(one_play)
## B 0 B
## $0
```
The function expects an object like `one_play` that has both a numerical value and a `symbols` attribute. The first line of the function will look up the value of the `symbols` attribute and save it as an object named `symbols`. Let’s make an example `symbols` object so we can see what the rest of the function does. We can use `one_play`’s `symbols` attribute to do the job. `symbols` will be a vector of three\-character strings:
```
symbols <- attr(one_play, "symbols")
symbols
## "B" "0" "B"
```
Next, `slot_display` uses `paste` to collapse the three strings in `symbols` into a single\-character string. `paste` collapses a vector of character strings into a single string when you give it the `collapse` argument. `paste` will use the value of `collapse` to separate the formerly distinct strings. Hence, `symbols` becomes `B 0 B` the three strings separated by a space:
```
symbols <- paste(symbols, collapse = " ")
symbols
## "B 0 B"
```
Our function then uses `paste` in a new way to combine `symbols` with the value of `prize`. `paste` combines separate objects into a character string when you give it a `sep` argument. For example, here `paste` will combine the string in `symbols`, `B 0 B`, with the number in `prize`, 0\. `paste` will use the value of `sep` argument to separate the inputs in the new string. Here, that value is `\n$`, so our result will look like `"B 0 B\n$0"`:
```
prize <- one_play
string <- paste(symbols, prize, sep = "\n$")
string
## "B 0 B\n$0"
```
The last line of `slot_display` calls `cat` on the new string. `cat` is like `print`; it displays its input at the command line. However, `cat` does not surround its output with quotation marks. `cat` also replaces every `\n` with a new line or line break. The result is what we see. Notice that it looks just how I suggested that our `play` output should look in [Programs](programs.html#programs):
```
cat(string)
## B 0 B
## $0
```
You can use `slot_display` to manually clean up the output of `play`:
```
slot_display(play())
## C B 0
## $2
slot_display(play())
## 7 0 BB
## $0
```
This method of cleaning the output requires you to manually intervene in your R session (to call `slot_display`). There is a function that you can use to automatically clean up the output of `play` *each* time it is displayed. This function is `print`, and it is a *generic function*.
10\.3 Generic Functions
-----------------------
R uses `print` more often than you may think; R calls `print` each time it displays a result in your console window. This call happens in the background, so you do not notice it; but the call explains how output makes it to the console window (recall that `print` always prints its argument in the console window). This `print` call also explains why the output of `print` always matches what you see when you display an object at the command line:
```
print(pi)
## 3.141593
pi
## 3.141593
print(head(deck))
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
print(play())
## 5
## attr(,"symbols")
## "B" "BB" "B"
play()
## 5
## attr(,"symbols")
## "B" "BB" "B"
```
You can change how R displays your slot output by rewriting `print` to look like `slot_display`. Then R would print the output in our tidy format. However, this method would have negative side effects. You do not want R to call `slot_display` when it prints a data frame, a numerical vector, or any other object.
Fortunately, `print` is not a normal function; it is a *generic* function. This means that `print` is written in a way that lets it do different things in different cases. You’ve already seen this behavior in action (although you may not have realized it). `print` did one thing when we looked at the unclassed version of `num`:
```
num <- 1000000000
print(num)
## 1000000000
```
and a different thing when we gave `num` a class:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
Take a look at the code inside `print` to see how it does this. You may imagine that print looks up the class attribute of its input and then uses an \+if\+ tree to pick which output to display. If this occurred to you, great job! `print` does something very similar, but much more simple.
10\.4 Methods
-------------
When you call `print`, `print` calls a special function, `UseMethod`:
```
print
## function (x, ...)
## UseMethod("print")
## <bytecode: 0x7ffee4c62f80>
## <environment: namespace:base>
```
`UseMethod` examines the class of the input that you provide for the first argument of `print`, and then passes all of your arguments to a new function designed to handle that class of input. For example, when you give `print` a POSIXct object, `UseMethod` will pass all of `print`’s arguments to `print.POSIXct`. R will then run `print.POSIXct` and return the results:
```
print.POSIXct
## function (x, ...)
## {
## max.print <- getOption("max.print", 9999L)
## if (max.print < length(x)) {
## print(format(x[seq_len(max.print)], usetz = TRUE), ...)
## cat(" [ reached getOption(\"max.print\") -- omitted",
## length(x) - max.print, "entries ]\n")
## }
## else print(format(x, usetz = TRUE), ...)
## invisible(x)
## }
## <bytecode: 0x7fa948f3d008>
## <environment: namespace:base>
```
If you give `print` a factor object, `UseMethod` will pass all of `print`’s arguments to `print.factor`. R will then run `print.factor` and return the results:
```
print.factor
## function (x, quote = FALSE, max.levels = NULL, width = getOption("width"),
## ...)
## {
## ord <- is.ordered(x)
## if (length(x) == 0L)
## cat(if (ord)
## "ordered"
## ...
## drop <- n > maxl
## cat(if (drop)
## paste(format(n), ""), T0, paste(if (drop)
## c(lev[1L:max(1, maxl - 1)], "...", if (maxl > 1) lev[n])
## else lev, collapse = colsep), "\n", sep = "")
## }
## invisible(x)
## }
## <bytecode: 0x7fa94a64d470>
## <environment: namespace:base>
```
`print.POSIXct` and `print.factor` are called *methods* of `print`. By themselves, `print.POSIXct` and `print.factor` work like regular R functions. However, each was written specifically so `UseMethod` could call it to handle a specific class of `print` input.
Notice that `print.POSIXct` and `print.factor` do two different things (also notice that I abridged the middle of `print.factor`—it is a long function). This is how `print` manages to do different things in different cases. `print` calls `UseMethod`, which calls a specialized method based on the class of `print`’s first argument.
You can see which methods exist for a generic function by calling `methods` on the function. For example, `print` has almost 200 methods (which gives you an idea of how many classes exist in R):
```
methods(print)
## [1] print.acf*
## [2] print.anova
## [3] print.aov*
## ...
## [176] print.xgettext*
## [177] print.xngettext*
## [178] print.xtabs*
##
## Nonvisible functions are asterisked
```
This system of generic functions, methods, and class\-based dispatch is known as S3 because it originated in the third version of S, the programming language that would evolve into S\-PLUS and R. Many common R functions are S3 generics that work with a set of class methods. For example, `summary` and `head` also call `UseMethod`. More basic functions, like `c`, `+`, `-`, `<` and others also behave like generic functions, although they call `.primitive` instead of `UseMethod`.
The S3 system allows R functions to behave in different ways for different classes. You can use S3 to format your slot output. First, give your output its own class. Then write a print method for that class. To do this efficiently, you will need to know a little about how `UseMethod` selects a method function to use.
### 10\.4\.1 Method Dispatch
`UseMethod` uses a very simple system to match methods to functions.
Every S3 method has a two\-part name. The first part of the name will refer to the function that the method works with. The second part will refer to the class. These two parts will be separated by a period. So for example, the print method that works with functions will be called `print.function`. The summary method that works with matrices will be called `summary.matrix`. And so on.
When `UseMethod` needs to call a method, it searches for an R function with the correct S3\-style name. The function does not have to be special in any way; it just needs to have the correct name.
You can participate in this system by writing your own function and giving it a valid S3\-style name. For example, let’s give `one_play` a class of its own. It doesn’t matter what you call the class; R will store any character string in the class attribute:
```
class(one_play) <- "slots"
```
Now let’s write an S3 print method for the \+slots\+ class. The method doesn’t need to do anything special—it doesn’t even need to print `one_play`. But it *does* need to be named `print.slots`; otherwise `UseMethod` will not find it. The method should also take the same arguments as `print`; otherwise, R will give an error when it tries to pass the arguments to `print.slots`:
```
args(print)
## function (x, ...)
## NULL
print.slots <- function(x, ...) {
cat("I'm using the print.slots method")
}
```
Does our method work? Yes, and not only that; R uses the print method to display the contents of `one_play`. This method isn’t very useful, so I’m going to remove it. You’ll have a chance to write a better one in a minute:
```
print(one_play)
## I'm using the print.slots method
one_play
## I'm using the print.slots method
rm(print.slots)
```
Some R objects have multiple classes. For example, the output of `Sys.time` has two classes. Which class will `UseMethod` use to find a print method?
```
now <- Sys.time()
attributes(now)
## $class
## [1] "POSIXct" "POSIXt"
```
`UseMethod` will first look for a method that matches the first class listed in the object’s class vector. If `UseMethod` cannot find one, it will then look for the method that matches the second class (and so on if there are more classes in an object’s class vector).
If you give `print` an object whose class or classes do not have a print method, `UseMethod` will call `print.default`, a special method written to handle general cases.
Let’s use this system to write a better print method for the slot machine output.
**Exercise 10\.2 (Make a Print Method)** Write a new print method for the slots class. The method should call `slot_display` to return well\-formatted slot\-machine output.
What name must you use for this method?
*Solution.* It is surprisingly easy to write a good `print.slots` method because we’ve already done all of the hard work when we wrote `slot_display`. For example, the following method will work. Just make sure the method is named `print.slots` so `UseMethod` can find it, and make sure that it takes the same arguments as `print` so `UseMethod` can pass those arguments to `print.slots` without any trouble:
```
print.slots <- function(x, ...) {
slot_display(x)
}
```
Now R will automatically use `slot_display` to display objects of class \+slots\+ (and only objects of class “slots”):
```
one_play
## B 0 B
## $0
```
Let’s ensure that every piece of slot machine output has the `slots` class.
**Exercise 10\.3 (Add a Class)** Modify the `play` function so it assigns `slots` to the `class` attribute of its output:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
```
*Solution.* You can set the `class` attribute of the output at the same time that you set the \+symbols\+ attribute. Just add `class = "slots"` to the `structure` call:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
Now each of our slot machine plays will have the class `slots`:
```
class(play())
## "slots"
```
As a result, R will display them in the correct slot\-machine format:
```
play()
## BB BB BBB
## $5
play()
## BB 0 0
## $0
```
10\.5 Classes
-------------
You can use the S3 system to make a robust new class of objects in R. Then R will treat objects of your class in a consistent, sensible manner. To make a class:
* Choose a name for your class.
* Assign each instance of your class a \+class\+ attribute.
* Write class methods for any generic function likely to use objects of your class.
Many R packages are based on classes that have been built in a similar manner. While this work is simple, it may not be easy. For example, consider how many methods exist for predefined classes.
You can call `methods` on a class with the `class` argument, which takes a character string. `methods` will return every method written for the class. Notice that `methods` will not be able to show you methods that come in an unloaded R package:
```
methods(class = "factor")
## [1] [.factor [[.factor
## [3] [[<-.factor [<-.factor
## [5] all.equal.factor as.character.factor
## [7] as.data.frame.factor as.Date.factor
## [9] as.list.factor as.logical.factor
## [11] as.POSIXlt.factor as.vector.factor
## [13] droplevels.factor format.factor
## [15] is.na<-.factor length<-.factor
## [17] levels<-.factor Math.factor
## [19] Ops.factor plot.factor*
## [21] print.factor relevel.factor*
## [23] relist.factor* rep.factor
## [25] summary.factor Summary.factor
## [27] xtfrm.factor
##
## Nonvisible functions are asterisked
```
This output indicates how much work is required to create a robust, well\-behaved class. You will usually need to write a `class` method for every basic R operation.
Consider two challenges that you will face right away. First, R drops attributes (like `class`) when it combines objects into a vector:
```
play1 <- play()
play1
## B BBB BBB
## $5
play2 <- play()
play2
## 0 B 0
## $0
c(play1, play2)
## [1] 5 0
```
Here, R stops using `print.slots` to display the vector because the vector `c(play1, play2)` no longer has a “slots” \+class\+ attribute.
Next, R will drop the attributes of an object (like `class`) when you subset the object:
```
play1[1]
## [1] 5
```
You can avoid this behavior by writing a `c.slots` method and a `[.slots` method, but then difficulties will quickly accrue. How would you combine the `symbols` attributes of multiple plays into a vector of symbols attributes? How would you change `print.slots` to handle vectors of outputs? These challenges are open for you to explore. However, you will usually not have to attempt this type of large\-scale programming as a data scientist.
In our case, it is very handy to let `slots` objects revert to single prize values when we combine groups of them together into a vector.
10\.6 S3 and Debugging
----------------------
S3 can be annoying if you are trying to understand R functions. It is difficult to tell what a function does if its code body contains a call to `UseMethod`. Now that you know that `UseMethod` calls a class\-specific method, you can search for and examine the method directly. It will be a function whose name follows the `<function.class>` syntax, or possibly `<function.default>`. You can also use the `methods` function to see what methods are associated with a function or a class.
10\.7 S4 and R5
---------------
R also contains two other systems that create class specific behavior. These are known as S4 and R5 (or reference classes). Each of these systems is much harder to use than S3, and perhaps as a consequence, more rare. However, they offer safeguards that S3 does not. If you’d like to learn more about these systems, including how to write and use your own generic functions, I recommend the book [*Advanced R Programming*](http://adv-r.had.co.nz/) by Hadley Wickham.
10\.8 Summary
-------------
Values are not the only place to store information in R, and functions are not the only way to create unique behavior. You can also do both of these things with R’s S3 system. The S3 system provides a simple way to create object\-specific behavior in R. In other words, it is R’s version of object\-oriented programming (OOP). The system is implemented by generic functions. These functions examine the class attribute of their input and call a class\-specific method to generate output. Many S3 methods will look for and use additional information that is stored in an object’s attributes. Many common R functions are S3 generics.
R’s S3 system is more helpful for the tasks of computer science than the tasks of data science, but understanding S3 can help you troubleshoot your work in R as a data scientist.
You now know quite a bit about how to write R code that performs custom tasks, but how could you repeat these tasks? As a data scientist, you will often repeat tasks, sometimes thousands or even millions of times. Why? Because repetition lets you simulate results and estimate probabilities. [Loops](loops.html#loops) will show you how to automate repetition with R’s `for` and `while` functions. You’ll use `for` to simulate various slot machine plays and to calculate the payout rate of your slot machine.
10\.1 The S3 System
-------------------
S3 refers to a class system built into R. The system governs how R handles objects of different classes. Certain R functions will look up an object’s S3 class, and then behave differently in response.
The `print` function is like this. When you print a numeric vector, `print` will display a number:
```
num <- 1000000000
print(num)
## 1000000000
```
But if you give that number the S3 class `POSIXct` followed by `POSIXt`, `print` will display a time:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
If you use objects with classes—and you do—you will run into R’s S3 system. S3 behavior can seem odd at first, but is easy to predict once you are familiar with it.
R’s S3 system is built around three components: attributes (especially the `class` attribute), generic functions, and methods.
10\.2 Attributes
----------------
In [Attributes](r-objects.html#attributes), you learned that many R objects come with attributes, pieces of extra information that are given a name and appended to the object. Attributes do not affect the values of the object, but stick to the object as a type of metadata that R can use to handle the object. For example, a data frame stores its row and column names as attributes. Data frames also store their class, `"data.frame"`, as an attribute.
You can see an object’s attributes with `attribute`. If you run `attribute` on the `deck` data frame that you created in [Project 2: Playing Cards](project-2-playing-cards.html#project-2-playing-cards), you will see:
```
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
## [20] 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
## [37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
```
R comes with many helper functions that let you set and access the most common attributes used in R. You’ve already met the `names`, `dim`, and `class` functions, which each work with an eponymously named attribute. However, R also has `row.names`, `levels`, and many other attribute\-based helper functions. You can use any of these functions to retrieve an attribute’s value:
```
row.names(deck)
## [1] "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13"
## [14] "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26"
## [27] "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39"
## [40] "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52"
```
or to change an attribute’s value:
```
row.names(deck) <- 101:152
```
or to give an object a new attribute altogether:
```
levels(deck) <- c("level 1", "level 2", "level 3")
attributes(deck)
## $names
## [1] "face" "suit" "value"
##
## $class
## [1] "data.frame"
##
## $row.names
## [1] 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117
## [18] 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134
## [35] 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151
## [52] 152
##
## $levels
## [1] "level 1" "level 2" "level 3"
```
R is very laissez faire when it comes to attributes. It will let you add any attributes that you like to an object (and then it will usually ignore them). The only time R will complain is when a function needs to find an attribute and it is not there.
You can add any general attribute to an object with `attr`; you can also use `attr` to look up the value of any attribute of an object. Let’s see how this works with `one_play`, the result of playing our slot machine one time:
```
one_play <- play()
one_play
## 0
attributes(one_play)
## NULL
```
`attr` takes two arguments: an R object and the name of an attribute (as a character string). To give the R object an attribute of the specified name, save a value to the output of `attr`. Let’s give `one_play` an attribute named `symbols` that contains a vector of character strings:
```
attr(one_play, "symbols") <- c("B", "0", "B")
attributes(one_play)
## $symbols
## [1] "B" "0" "B"
```
To look up the value of any attribute, give `attr` an R object and the name of the attribute you would like to look up:
```
attr(one_play, "symbols")
## "B" "0" "B"
```
If you give an attribute to an atomic vector, like `one_play`, R will usually display the attribute beneath the vector’s values. However, if the attribute changes the vector’s class, R may display all of the information in the vector in a new way (as we saw with `POSIXct` objects):
```
one_play
## [1] 0
## attr(,"symbols")
## [1] "B" "0" "B"
```
R will generally ignore an object’s attributes unless you give them a name that an R function looks for, like `names` or `class`. For example, R will ignore the `symbols` attribute of `one_play` as you manipulate `one_play`:
```
one_play + 1
## 1
## attr(,"symbols")
## "B" "0" "B"
```
**Exercise 10\.1 (Add an Attribute)** Modify `play` to return a prize that contains the symbols associated with it as an attribute named `symbols`. Remove the redundant call to `print(symbols)`:
```
play <- function() {
symbols <- get_symbols()
print(symbols)
score(symbols)
}
```
*Solution.* You can create a new version of `play` by capturing the output of `score(symbols)` and assigning an attribute to it. `play` can then return the enhanced version of the output:
```
play <- function() {
symbols <- get_symbols()
prize <- score(symbols)
attr(prize, "symbols") <- symbols
prize
}
```
Now `play` returns both the prize and the symbols associated with the prize. The results may not look pretty, but the symbols stick with the prize when we copy it to a new object. We can work on tidying up the display in a minute:
```
play()
## [1] 0
## attr(,"symbols")
## [1] "B" "BB" "0"
two_play <- play()
two_play
## [1] 0
## attr(,"symbols")
## [1] "0" "B" "0"
```
You can also generate a prize and set its attributes in one step with the `structure` function. `structure` creates an object with a set of attributes. The first argument of `structure` should be an R object or set of values, and the remaining arguments should be named attributes for `structure` to add to the object. You can give these arguments any argument names you like. `structure` will add the attributes to the object under the names that you provide as argument names:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
three_play <- play()
three_play
## 0
## attr(,"symbols")
## "0" "BB" "B"
```
Now that your `play` output contains a `symbols` attribute, what can you do with it? You can write your own functions that lookup and use the attribute. For example, the following function will look up `one_play`’s `symbols` attribute and use it to display `one_play` in a pretty manner. We will use this function to display our slot results, so let’s take a moment to study what it does:
```
slot_display <- function(prize){
# extract symbols
symbols <- attr(prize, "symbols")
# collapse symbols into single string
symbols <- paste(symbols, collapse = " ")
# combine symbol with prize as a character string
# \n is special escape sequence for a new line (i.e. return or enter)
string <- paste(symbols, prize, sep = "\n$")
# display character string in console without quotes
cat(string)
}
slot_display(one_play)
## B 0 B
## $0
```
The function expects an object like `one_play` that has both a numerical value and a `symbols` attribute. The first line of the function will look up the value of the `symbols` attribute and save it as an object named `symbols`. Let’s make an example `symbols` object so we can see what the rest of the function does. We can use `one_play`’s `symbols` attribute to do the job. `symbols` will be a vector of three\-character strings:
```
symbols <- attr(one_play, "symbols")
symbols
## "B" "0" "B"
```
Next, `slot_display` uses `paste` to collapse the three strings in `symbols` into a single\-character string. `paste` collapses a vector of character strings into a single string when you give it the `collapse` argument. `paste` will use the value of `collapse` to separate the formerly distinct strings. Hence, `symbols` becomes `B 0 B` the three strings separated by a space:
```
symbols <- paste(symbols, collapse = " ")
symbols
## "B 0 B"
```
Our function then uses `paste` in a new way to combine `symbols` with the value of `prize`. `paste` combines separate objects into a character string when you give it a `sep` argument. For example, here `paste` will combine the string in `symbols`, `B 0 B`, with the number in `prize`, 0\. `paste` will use the value of `sep` argument to separate the inputs in the new string. Here, that value is `\n$`, so our result will look like `"B 0 B\n$0"`:
```
prize <- one_play
string <- paste(symbols, prize, sep = "\n$")
string
## "B 0 B\n$0"
```
The last line of `slot_display` calls `cat` on the new string. `cat` is like `print`; it displays its input at the command line. However, `cat` does not surround its output with quotation marks. `cat` also replaces every `\n` with a new line or line break. The result is what we see. Notice that it looks just how I suggested that our `play` output should look in [Programs](programs.html#programs):
```
cat(string)
## B 0 B
## $0
```
You can use `slot_display` to manually clean up the output of `play`:
```
slot_display(play())
## C B 0
## $2
slot_display(play())
## 7 0 BB
## $0
```
This method of cleaning the output requires you to manually intervene in your R session (to call `slot_display`). There is a function that you can use to automatically clean up the output of `play` *each* time it is displayed. This function is `print`, and it is a *generic function*.
10\.3 Generic Functions
-----------------------
R uses `print` more often than you may think; R calls `print` each time it displays a result in your console window. This call happens in the background, so you do not notice it; but the call explains how output makes it to the console window (recall that `print` always prints its argument in the console window). This `print` call also explains why the output of `print` always matches what you see when you display an object at the command line:
```
print(pi)
## 3.141593
pi
## 3.141593
print(head(deck))
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
head(deck)
## face suit value
## king spades 13
## queen spades 12
## jack spades 11
## ten spades 10
## nine spades 9
## eight spades 8
print(play())
## 5
## attr(,"symbols")
## "B" "BB" "B"
play()
## 5
## attr(,"symbols")
## "B" "BB" "B"
```
You can change how R displays your slot output by rewriting `print` to look like `slot_display`. Then R would print the output in our tidy format. However, this method would have negative side effects. You do not want R to call `slot_display` when it prints a data frame, a numerical vector, or any other object.
Fortunately, `print` is not a normal function; it is a *generic* function. This means that `print` is written in a way that lets it do different things in different cases. You’ve already seen this behavior in action (although you may not have realized it). `print` did one thing when we looked at the unclassed version of `num`:
```
num <- 1000000000
print(num)
## 1000000000
```
and a different thing when we gave `num` a class:
```
class(num) <- c("POSIXct", "POSIXt")
print(num)
## "2001-09-08 19:46:40 CST"
```
Take a look at the code inside `print` to see how it does this. You may imagine that `print` looks up the class attribute of its input and then uses an `if` tree to pick which output to display. If this occurred to you, great job! `print` does something very similar, but much simpler.
10\.4 Methods
-------------
When you call `print`, `print` calls a special function, `UseMethod`:
```
print
## function (x, ...)
## UseMethod("print")
## <bytecode: 0x7ffee4c62f80>
## <environment: namespace:base>
```
`UseMethod` examines the class of the input that you provide for the first argument of `print`, and then passes all of your arguments to a new function designed to handle that class of input. For example, when you give `print` a POSIXct object, `UseMethod` will pass all of `print`’s arguments to `print.POSIXct`. R will then run `print.POSIXct` and return the results:
```
print.POSIXct
## function (x, ...)
## {
## max.print <- getOption("max.print", 9999L)
## if (max.print < length(x)) {
## print(format(x[seq_len(max.print)], usetz = TRUE), ...)
## cat(" [ reached getOption(\"max.print\") -- omitted",
## length(x) - max.print, "entries ]\n")
## }
## else print(format(x, usetz = TRUE), ...)
## invisible(x)
## }
## <bytecode: 0x7fa948f3d008>
## <environment: namespace:base>
```
If you give `print` a factor object, `UseMethod` will pass all of `print`’s arguments to `print.factor`. R will then run `print.factor` and return the results:
```
print.factor
## function (x, quote = FALSE, max.levels = NULL, width = getOption("width"),
## ...)
## {
## ord <- is.ordered(x)
## if (length(x) == 0L)
## cat(if (ord)
## "ordered"
## ...
## drop <- n > maxl
## cat(if (drop)
## paste(format(n), ""), T0, paste(if (drop)
## c(lev[1L:max(1, maxl - 1)], "...", if (maxl > 1) lev[n])
## else lev, collapse = colsep), "\n", sep = "")
## }
## invisible(x)
## }
## <bytecode: 0x7fa94a64d470>
## <environment: namespace:base>
```
`print.POSIXct` and `print.factor` are called *methods* of `print`. By themselves, `print.POSIXct` and `print.factor` work like regular R functions. However, each was written specifically so `UseMethod` could call it to handle a specific class of `print` input.
Notice that `print.POSIXct` and `print.factor` do two different things (also notice that I abridged the middle of `print.factor`—it is a long function). This is how `print` manages to do different things in different cases. `print` calls `UseMethod`, which calls a specialized method based on the class of `print`’s first argument.
You can see which methods exist for a generic function by calling `methods` on the function. For example, `print` has almost 200 methods (which gives you an idea of how many classes exist in R):
```
methods(print)
## [1] print.acf*
## [2] print.anova
## [3] print.aov*
## ...
## [176] print.xgettext*
## [177] print.xngettext*
## [178] print.xtabs*
##
## Nonvisible functions are asterisked
```
This system of generic functions, methods, and class\-based dispatch is known as S3 because it originated in the third version of S, the programming language that would evolve into S\-PLUS and R. Many common R functions are S3 generics that work with a set of class methods. For example, `summary` and `head` also call `UseMethod`. More basic functions, like `c`, `+`, `-`, `<`, and others also behave like generic functions, although they call `.Primitive` instead of `UseMethod`.
The S3 system allows R functions to behave in different ways for different classes. You can use S3 to format your slot output. First, give your output its own class. Then write a print method for that class. To do this efficiently, you will need to know a little about how `UseMethod` selects a method function to use.
### 10\.4\.1 Method Dispatch
`UseMethod` uses a very simple system to match methods to functions.
Every S3 method has a two\-part name. The first part of the name will refer to the function that the method works with. The second part will refer to the class. These two parts will be separated by a period. So for example, the print method that works with functions will be called `print.function`. The summary method that works with matrices will be called `summary.matrix`. And so on.
When `UseMethod` needs to call a method, it searches for an R function with the correct S3\-style name. The function does not have to be special in any way; it just needs to have the correct name.
You can participate in this system by writing your own function and giving it a valid S3\-style name. For example, let’s give `one_play` a class of its own. It doesn’t matter what you call the class; R will store any character string in the class attribute:
```
class(one_play) <- "slots"
```
Now let’s write an S3 print method for the `slots` class. The method doesn’t need to do anything special—it doesn’t even need to print `one_play`. But it *does* need to be named `print.slots`; otherwise `UseMethod` will not find it. The method should also take the same arguments as `print`; otherwise, R will give an error when it tries to pass the arguments to `print.slots`:
```
args(print)
## function (x, ...)
## NULL
print.slots <- function(x, ...) {
cat("I'm using the print.slots method")
}
```
Does our method work? Yes, and not only that; R uses the print method to display the contents of `one_play`. This method isn’t very useful, so I’m going to remove it. You’ll have a chance to write a better one in a minute:
```
print(one_play)
## I'm using the print.slots method
one_play
## I'm using the print.slots method
rm(print.slots)
```
Some R objects have multiple classes. For example, the output of `Sys.time` has two classes. Which class will `UseMethod` use to find a print method?
```
now <- Sys.time()
attributes(now)
## $class
## [1] "POSIXct" "POSIXt"
```
`UseMethod` will first look for a method that matches the first class listed in the object’s class vector. If `UseMethod` cannot find one, it will then look for the method that matches the second class (and so on if there are more classes in an object’s class vector).
If you give `print` an object whose class or classes do not have a print method, `UseMethod` will call `print.default`, a special method written to handle general cases.
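Here is a small demonstration of that search order (my own sketch; the class names are invented). With only `print.second` defined, `UseMethod` falls through to the second class; once `print.first` exists, the first class wins:
```
x <- structure(0, class = c("first", "second"))
print.second <- function(x, ...) cat("found print.second\n")
x
## found print.second
print.first <- function(x, ...) cat("found print.first\n")
x
## found print.first
rm(print.first, print.second)
```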
Let’s use this system to write a better print method for the slot machine output.
**Exercise 10\.2 (Make a Print Method)** Write a new print method for the slots class. The method should call `slot_display` to return well\-formatted slot\-machine output.
What name must you use for this method?
*Solution.* It is surprisingly easy to write a good `print.slots` method because we’ve already done all of the hard work when we wrote `slot_display`. For example, the following method will work. Just make sure the method is named `print.slots` so `UseMethod` can find it, and make sure that it takes the same arguments as `print` so `UseMethod` can pass those arguments to `print.slots` without any trouble:
```
print.slots <- function(x, ...) {
slot_display(x)
}
```
Now R will automatically use `slot_display` to display objects of class `slots` (and only objects of class `slots`):
```
one_play
## B 0 B
## $0
```
Let’s ensure that every piece of slot machine output has the `slots` class.
**Exercise 10\.3 (Add a Class)** Modify the `play` function so it assigns `slots` to the `class` attribute of its output:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols)
}
```
*Solution.* You can set the `class` attribute of the output at the same time that you set the `symbols` attribute. Just add `class = "slots"` to the `structure` call:
```
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
Now each of our slot machine plays will have the class `slots`:
```
class(play())
## "slots"
```
As a result, R will display them in the correct slot\-machine format:
```
play()
## BB BB BBB
## $5
play()
## BB 0 0
## $0
```
10\.5 Classes
-------------
You can use the S3 system to make a robust new class of objects in R. Then R will treat objects of your class in a consistent, sensible manner. To make a class:
* Choose a name for your class.
* Assign each instance of your class a `class` attribute.
* Write class methods for any generic function likely to use objects of your class.
Many R packages are based on classes that have been built in a similar manner. While this work is simple, it may not be easy. For example, consider how many methods exist for predefined classes.
You can call `methods` on a class with the `class` argument, which takes a character string. `methods` will return every method written for the class. Notice that `methods` will not be able to show you methods that come in an unloaded R package:
```
methods(class = "factor")
## [1] [.factor [[.factor
## [3] [[<-.factor [<-.factor
## [5] all.equal.factor as.character.factor
## [7] as.data.frame.factor as.Date.factor
## [9] as.list.factor as.logical.factor
## [11] as.POSIXlt.factor as.vector.factor
## [13] droplevels.factor format.factor
## [15] is.na<-.factor length<-.factor
## [17] levels<-.factor Math.factor
## [19] Ops.factor plot.factor*
## [21] print.factor relevel.factor*
## [23] relist.factor* rep.factor
## [25] summary.factor Summary.factor
## [27] xtfrm.factor
##
## Nonvisible functions are asterisked
```
This output indicates how much work is required to create a robust, well\-behaved class. You will usually need to write a class method for every basic R operation.
Consider two challenges that you will face right away. First, R drops attributes (like `class`) when it combines objects into a vector:
```
play1 <- play()
play1
## B BBB BBB
## $5
play2 <- play()
play2
## 0 B 0
## $0
c(play1, play2)
## [1] 5 0
```
Here, R stops using `print.slots` to display the vector because the vector `c(play1, play2)` no longer has a `slots` class attribute.
Next, R will drop the attributes of an object (like `class`) when you subset the object:
```
play1[1]
## [1] 5
```
You can avoid this behavior by writing a `c.slots` method and a `[.slots` method, but then difficulties will quickly accrue. How would you combine the `symbols` attributes of multiple plays into a vector of symbols attributes? How would you change `print.slots` to handle vectors of outputs? These challenges are open for you to explore. However, you will usually not have to attempt this type of large\-scale programming as a data scientist.
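If you would like to explore, here is one minimal sketch (my own, subject to the caveats above) of a `[.slots` method that preserves the class and the `symbols` attribute when you subset a play:
```
`[.slots` <- function(x, i) {
  # subset the underlying number, then restore the attributes
  structure(unclass(x)[i], symbols = attr(x, "symbols"), class = "slots")
}
play1[1]
## B BBB BBB
## $5
# remove the method to return to the default behavior
rm("[.slots")
```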
In our case, it is very handy to let `slots` objects revert to single prize values when we combine groups of them together into a vector.
10\.6 S3 and Debugging
----------------------
S3 can be annoying if you are trying to understand R functions. It is difficult to tell what a function does if its code body contains a call to `UseMethod`. Now that you know that `UseMethod` calls a class\-specific method, you can search for and examine the method directly. It will be a function whose name follows the `<function.class>` syntax, or possibly `<function.default>`. You can also use the `methods` function to see what methods are associated with a function or a class.
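One helper worth knowing here (it comes with base R’s utils package, though it is not mentioned above) is `getS3method`, which looks up a method directly, even when the method is not exported:
```
getS3method("print", "factor")
## function (x, quote = FALSE, max.levels = NULL, width = getOption("width"),
## ...
```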
10\.7 S4 and R5
---------------
R also contains two other systems that create class specific behavior. These are known as S4 and R5 (or reference classes). Each of these systems is much harder to use than S3, and perhaps as a consequence, more rare. However, they offer safeguards that S3 does not. If you’d like to learn more about these systems, including how to write and use your own generic functions, I recommend the book [*Advanced R Programming*](http://adv-r.had.co.nz/) by Hadley Wickham.
10\.8 Summary
-------------
Values are not the only place to store information in R, and functions are not the only way to create unique behavior. You can also do both of these things with R’s S3 system. The S3 system provides a simple way to create object\-specific behavior in R. In other words, it is R’s version of object\-oriented programming (OOP). The system is implemented by generic functions. These functions examine the class attribute of their input and call a class\-specific method to generate output. Many S3 methods will look for and use additional information that is stored in an object’s attributes. Many common R functions are S3 generics.
R’s S3 system is more helpful for the tasks of computer science than the tasks of data science, but understanding S3 can help you troubleshoot your work in R as a data scientist.
You now know quite a bit about how to write R code that performs custom tasks, but how could you repeat these tasks? As a data scientist, you will often repeat tasks, sometimes thousands or even millions of times. Why? Because repetition lets you simulate results and estimate probabilities. [Loops](loops.html#loops) will show you how to automate repetition with R’s `for` and `while` functions. You’ll use `for` to simulate various slot machine plays and to calculate the payout rate of your slot machine.
11 Loops
========
Loops are R’s method for repeating a task, which makes them a useful tool for programming simulations. This chapter will teach you how to use R’s loop tools.
Let’s use the `score` function to solve a real\-world problem.
Your slot machine is modeled after real machines that were accused of fraud. The machines appeared to pay out 40 cents on the dollar, but the manufacturer claimed that they paid out 92 cents on the dollar. You can calculate the exact payout rate of your machine with the `score` program. The payout rate will be the expected value of the slot machine’s prize.
11\.1 Expected Values
---------------------
The expected value of a random event is a type of weighted average; it is the sum of each possible outcome of the event, weighted by the probability that each outcome occurs:
\\\[
E(x) \= \\sum\_{i \= 1}^{n}\\left( x\_{i} \\cdot P(x\_{i}) \\right)
\\]
You can think of the expected value as the average prize that you would observe if you played the slot machine an infinite number of times. Let’s use the formula to calculate some simple expected values. Then we will apply the formula to your slot machine.
Do you remember the `die` you created in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice)?
```
die <- c(1, 2, 3, 4, 5, 6)
```
Each time you roll the die, it returns a value selected at random (one through six). You can find the expected value of rolling the die with the formula:
\\\[
E(\\text{die}) \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)
\\]
The \\(\\text{die}\_{i}\\)s are the possible outcomes of rolling the die: 1, 2, 3, 4, 5, and 6; and the \\(P(\\text{die}\_{i})\\)’s are the probabilities associated with each of the outcomes. If your die is fair, each outcome will occur with the same probability: 1/6\. So our equation simplifies to:
\\\[
\\begin{array}{rl}
E(\\text{die}) \& \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)\\\\
\& \= 1 \\cdot \\frac{1}{6} \+ 2 \\cdot \\frac{1}{6} \+ 3 \\cdot \\frac{1}{6} \+ 4 \\cdot \\frac{1}{6} \+ 5 \\cdot \\frac{1}{6} \+ 6 \\cdot \\frac{1}{6}\\\\
\& \= 3\.5\\\\
\\end{array}
\\]
Hence, the expected value of rolling a fair die is 3\.5\. You may notice that this is also the average value of the die. The expected value will equal the average if every outcome has the same chance of occurring.
But what if each outcome has a different chance of occurring? For example, we weighted our dice in [Packages and Help Pages](packages.html#packages) so that each die rolled 1, 2, 3, 4, and 5 with probability 1/8 and 6 with probability 3/8\. You can use the same formula to calculate the expected value in these conditions:
\\\[
\\begin{array}{rl}
E(die) \& \= 1 \\cdot \\frac{1}{8} \+ 2 \\cdot \\frac{1}{8} \+ 3 \\cdot \\frac{1}{8} \+ 4 \\cdot \\frac{1}{8} \+ 5 \\cdot \\frac{1}{8} \+ 6 \\cdot \\frac{3}{8}\\\\
\& \= 4\.125\\\\
\\end{array}
\\]
Hence, the expected value of a loaded die does not equal the average value of its outcomes. If you rolled a loaded die an infinite number of times, the average outcome would be 4\.125, which is higher than what you would expect from a fair die.
Notice that we did the same three things to calculate both of these expected values. We have:
* Listed out all of the possible outcomes
* Determined the *value* of each outcome (here just the value of the die)
* Calculated the probability that each outcome occurred
The expected value was then just the sum of the values in step 2 multiplied by the probabilities in step 3\.
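You can verify both results in R with a one\-line weighted sum (a quick check of my own, using the `die` vector from above):
```
die <- c(1, 2, 3, 4, 5, 6)
# fair die: every outcome has probability 1/6
sum(die * rep(1/6, 6))
## 3.5
# loaded die: six appears with probability 3/8
sum(die * c(1/8, 1/8, 1/8, 1/8, 1/8, 3/8))
## 4.125
```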
You can use these steps to calculate more sophisticated expected values. For example, you could calculate the expected value of rolling a pair of weighted dice. Let’s do this step by step.
First, list out all of the possible outcomes. A total of 36 different outcomes can appear when you roll two dice. For example, you might roll (1, 1\), which notates one on the first die and one on the second die. Or, you may roll (1, 2\), one on the first die and two on the second. And so on. Listing out these combinations can be tedious, but R has a function that can help.
11\.2 expand.grid
-----------------
The `expand.grid` function in R provides a quick way to write out every combination of the elements in *n* vectors. For example, you can list every combination of two dice. To do so, run `expand.grid` on two copies of `die`:
```
rolls <- expand.grid(die, die)
```
`expand.grid` will return a data frame that contains every way to pair an element from the first `die` vector with an element from the second `die` vector. This will capture all 36 possible combinations of values:
```
rolls
## Var1 Var2
## 1 1 1
## 2 2 1
## 3 3 1
## ...
## 34 4 6
## 35 5 6
## 36 6 6
```
You can use `expand.grid` with more than two vectors if you like. For example, you could list every combination of rolling three dice with `expand.grid(die, die, die)` and every combination of rolling four dice with `expand.grid(die, die, die, die)`, and so on. `expand.grid` will always return a data frame that contains each possible combination of *n* elements from the *n* vectors. Each combination will contain exactly one element from each vector.
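For instance, three copies of `die` yield all 216 combinations; the object name `threes` below is my own:
```
threes <- expand.grid(die, die, die)
nrow(threes)
## 216
```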
You can determine the value of each roll once you’ve made your list of outcomes. This will be the sum of the two dice, which you can calculate using R’s element\-wise execution:
```
rolls$value <- rolls$Var1 + rolls$Var2
head(rolls, 3)
## Var1 Var2 value
## 1 1 2
## 2 1 3
## 3 1 4
```
R will match up the elements in each vector before adding them together. As a result, each element of `value` will refer to the elements of `Var1` and `Var2` that appear in the same row.
Next, you must determine the probability that each combination appears. You can calculate this with a basic rule of probability:
*The probability that* n *independent, random events all occur is equal to the product of the probabilities that each random event occurs*.
Or more succinctly:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
So the probability that we roll a (1, 1\) will be equal to the probability that we roll a one on the first die, 1/8, times the probability that we roll a one on the second die, 1/8:
\\\[
\\begin{array}{rl}
P(1 \\\& 1\) \& \= P(1\) \\cdot P(1\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And the probability that we roll a (1, 2\) will be:
\\\[
\\begin{array}{rl}
P(1 \\\& 2\) \& \= P(1\) \\cdot P(2\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And so on.
Let me suggest a three\-step process for calculating these probabilities in R. First, we can look up the probabilities of rolling the values in `Var1`. We’ll do this with the lookup table that follows:
```
prob <- c("1" = 1/8, "2" = 1/8, "3" = 1/8, "4" = 1/8, "5" = 1/8, "6" = 3/8)
prob
## 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375
```
If you subset this table by `rolls$Var1`, you will get a vector of probabilities perfectly keyed to the values of `Var1`:
```
rolls$Var1
## 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6
prob[rolls$Var1]
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
rolls$prob1 <- prob[rolls$Var1]
head(rolls, 3)
## Var1 Var2 value prob1
## 1 1 2 0.125
## 2 1 3 0.125
## 3 1 4 0.125
```
Second, we can look up the probabilities of rolling the values in `Var2`:
```
rolls$prob2 <- prob[rolls$Var2]
head(rolls, 3)
## Var1 Var2 value prob1 prob2
## 1 1 2 0.125 0.125
## 2 1 3 0.125 0.125
## 3 1 4 0.125 0.125
```
Third, we can calculate the probability of rolling each combination by multiplying `prob1` by `prob2`:
```
rolls$prob <- rolls$prob1 * rolls$prob2
head(rolls, 3)
## Var1 Var2 value prob1 prob2 prob
## 1 1 2 0.125 0.125 0.015625
## 2 1 3 0.125 0.125 0.015625
## 3 1 4 0.125 0.125 0.015625
```
It is easy to calculate the expected value now that we have each outcome, the value of each outcome, and the probability of each outcome. The expected value will be the summation of the dice values multiplied by the dice probabilities:
```
sum(rolls$value * rolls$prob)
## 8.25
```
So the expected value of rolling two loaded dice is 8\.25\. If you rolled a pair of loaded dice an infinite number of times, the average sum would be 8\.25\. (If you are curious, the expected value of rolling a pair of fair dice is 7, which explains why 7 plays such a large role in dice games like craps.)
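You can confirm the fair\-dice figure with the same data frame (a check of my own): every one of the 36 combinations of a fair pair occurs with probability 1/36, so:
```
sum(rolls$value * 1/36)
## 7
```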
Now that you’ve warmed up, let’s use our method to calculate the expected value of the slot machine prize. We will follow the same steps we just took:
* We will list out every possible outcome of playing the machine. This will be a list of every combination of three slot symbols.
* We will calculate the probability of getting each combination when you play the machine.
* We will determine the prize that we would win for each combination.
When we are finished, we will have a data set that looks like this:
```
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
## ... and so on.
```
The expected value will then be the sum of the prizes multiplied by their probability of occurring:
\\\[
E(\\text{prize}) \= \\sum\_{i \= 1}^{n}\\left( \\text{prize}\_{i} \\cdot P(\\text{prize}\_{i}) \\right)
\\]
Ready to begin?
**Exercise 11\.1 (List the Combinations)** Use `expand.grid` to make a data frame that contains every possible combination of *three* symbols from the `wheel` vector:
```
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
```
Be sure to add the argument `stringsAsFactors = FALSE` to your `expand.grid` call; otherwise, `expand.grid` will save the combinations as factors, an unfortunate choice that will disrupt the `score` function.
*Solution.* To create a data frame of each combination of *three* symbols, you need to run `expand.grid` and give it *three* copies of `wheel`. The result will be a data frame with 343 rows, one for each unique combination of three slot symbols:
```
combos <- expand.grid(wheel, wheel, wheel, stringsAsFactors = FALSE)
combos
## Var1 Var2 Var3
## 1 DD DD DD
## 2 7 DD DD
## 3 BBB DD DD
## 4 BB DD DD
## 5 B DD DD
## 6 C DD DD
## ...
## 341 B 0 0
## 342 C 0 0
## 343 0 0 0
```
Now, let’s calculate the probability of getting each combination. You can use the probabilities contained in the `prob` argument of `get_symbols` to do this. These probabilities determine how frequently each symbol is chosen when your slot machine generates symbols. They were calculated after observing 345 plays of the Manitoba video lottery terminals. Zeroes have the largest chance of being selected (0\.52\) and cherries the least (0\.01\):
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
    prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
}
```
**Exercise 11\.2 (Make a Lookup Table)** Isolate the previous probabilities in a lookup table. What names will you use in your table?
*Solution.* Your names should match the input that you want to look up. In this case, the input will be the character strings that appear in `Var1`, `Var2`, and `Var3`. So your lookup table should look like this:
```
prob <- c("DD" = 0.03, "7" = 0.03, "BBB" = 0.06,
"BB" = 0.1, "B" = 0.25, "C" = 0.01, "0" = 0.52)
```
Now let’s look up our probabilities.
**Exercise 11\.3 (Lookup the Probabilities)** Look up the probabilities of getting the values in `Var1`. Then add them to `combos` as a column named `prob1`. Then do the same for `Var2` (`prob2`) and `Var3` (`prob3`).
*Solution.* Remember that you use R’s selection notation to look up values in a lookup table. The values that result will be keyed to the index that you use:
```
combos$prob1 <- prob[combos$Var1]
combos$prob2 <- prob[combos$Var2]
combos$prob3 <- prob[combos$Var3]
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3
## DD DD DD 0.03 0.03 0.03
## 7 DD DD 0.03 0.03 0.03
## BBB DD DD 0.06 0.03 0.03
```
Now how should we calculate the total probability of each combination? Our three slot symbols are all chosen independently, which means that the same rule that governed our dice probabilities governs our symbol probabilities:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
**Exercise 11\.4 (Calculate Probabilities for Each Combination)** Calculate the overall probabilities for each combination. Save them as a column named `prob` in `combos`, then check your work.
You can check that the math worked by summing the probabilities. The probabilities should add up to one, because one of the combinations *must* appear when you play the slot machine. In other words, some combination will appear with probability one.
You can calculate the probabilities of every possible combination in one fell swoop with some element\-wise execution:
```
combos$prob <- combos$prob1 * combos$prob2 * combos$prob3
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob
## DD DD DD 0.03 0.03 0.03 0.000027
## 7 DD DD 0.03 0.03 0.03 0.000027
## BBB DD DD 0.06 0.03 0.03 0.000054
```
The sum of the probabilities is one, which suggests that our math is correct:
```
sum(combos$prob)
## 1
```
You only need to do one more thing before you can calculate the expected value: you must determine the prize for each combination in `combos`. You can calculate the prize with `score`. For example, we can calculate the prize for the first row of `combos` like this:
```
symbols <- c(combos[1, 1], combos[1, 2], combos[1, 3])
## "DD" "DD" "DD"
score(symbols)
## 800
```
However, there are 343 rows, which makes for tedious work if you plan to calculate the scores manually. It will be quicker to automate this task and have R do it for you, which you can do with a `for` loop.
11\.3 for Loops
---------------
A `for` loop repeats a chunk of code many times, once for each element in a set of input. `for` loops provide a way to tell R, “Do this for every value of that.” In R syntax, this looks like:
```
for (value in that) {
this
}
```
The `that` object should be a set of objects (often a vector of numbers or character strings). The for loop will run the code that appears between the braces once for each member of `that`. For example, the for loop below runs `print("one run")` once for each element in a vector of character strings:
```
for (value in c("My", "first", "for", "loop")) {
print("one run")
}
## "one run"
## "one run"
## "one run"
## "one run"
```
The `value` symbol in a for loop acts like an argument in a function. The for loop will create an object named `value` and assign it a new value on each run of the loop. The code in your loop can access this value by calling the `value` object.
What values will the for loop assign to `value`? It will use the elements in the set that you run the loop on. `for` starts with the first element and then assigns a different element to `value` on each run of the for loop, until all of the elements have been assigned to `value`. For example, the for loop below will run `print(value)` four times and will print out one element of `c("My", "second", "for", "loop")` each time:
```
for (value in c("My", "second", "for", "loop")) {
print(value)
}
## "My"
## "second"
## "for"
## "loop"
```
On the first run, the for loop substituted `"My"` for `value` in `print(value)`. On the second run it substituted `"second"`, and so on until `for` had run `print(value)` once with every element in the set.
If you look at `value` after the loop runs, you will see that it still contains the value of the last element in the set:
```
value
## "loop"
```
I’ve been using the symbol `value` in my for loops, but there is nothing special about it. You can use any symbol you like in your loop to do the same thing as long as the symbol appears before `in` in the parentheses that follow `for`. For example, you could rewrite the previous loop with any of the following:
```
for (word in c("My", "second", "for", "loop")) {
print(word)
}
for (string in c("My", "second", "for", "loop")) {
print(string)
}
for (i in c("My", "second", "for", "loop")) {
print(i)
}
```
**Choose your symbols carefully**
R will run your loop in whichever environment you call it from. This is bad news if your loop uses object names that already exist in the environment. Your loop will overwrite the existing objects with the objects that it creates. This applies to the `value` symbol as well.
**For loops run on sets**
In many programming languages, `for` loops are designed to work with integers, not sets. You give the loop a starting value and an ending value, as well as an increment to advance the value by between loops. The `for` loop then runs until the loop value exceeds the ending value.
You can recreate this effect in R by having a `for` loop execute on a set of integers, but don’t lose track of the fact that R’s `for` loops execute on members of a set, not sequences of integers.
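For example, this sketch (my own) recreates start/end/increment behavior by executing on a set of integers built with `seq`:
```
for (i in seq(from = 10, to = 50, by = 10)) {
  print(i)
}
## 10
## 20
## 30
## 40
## 50
```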
`for` loops are very useful in programming because they help you connect a piece of code with each element in a set. For example, we could use a `for` loop to run `score` once for each row in `combos`. However, R’s `for` loops have a shortcoming that you’ll want to know about before you start using them: `for` loops do not return output.
`for` loops are like Las Vegas: what happens in a `for` loop stays in a `for` loop. If you want to use the products of a `for` loop, you must write the `for` loop so that it saves its own output as it goes.
Our previous examples appeared to return output, but this was misleading. The examples worked because we called `print`, which always prints its arguments in the console (even if it is called from a function, a `for` loop, or anything else). Our `for` loops won’t return anything if you remove the `print` call:
```
for (value in c("My", "third", "for", "loop")) {
value
}
##
```
To save output from a `for` loop, you must write the loop so that it saves its own output as it runs. You can do this by creating an empty vector or list before you run the `for` loop. Then use the `for` loop to fill up the vector or list. When the `for` loop is finished, you’ll be able to access the vector or list, which will now have all of your results.
Let’s see this in action. The following code creates an empty vector of length 4:
```
chars <- vector(length = 4)
```
The next loop will fill it with strings:
```
words <- c("My", "fourth", "for", "loop")
for (i in 1:4) {
chars[i] <- words[i]
}
chars
## "My" "fourth" "for" "loop"
```
This approach will usually require you to change the sets that you execute your `for` loop on. Instead of executing on a set of objects, execute on a set of integers that you can use to index both your object and your storage vector. This approach is very common in R. You’ll find in practice that you use `for` loops not so much to run code, but to fill up vectors and lists with the results of code.
Let’s use a `for` loop to calculate the prize for each row in `combos`. To begin, create a new column in `combos` to store the results of the `for` loop:
```
combos$prize <- NA
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 NA
## 7 DD DD 0.03 0.03 0.03 0.000027 NA
## BBB DD DD 0.06 0.03 0.03 0.000054 NA
```
The code creates a new column named `prize` and fills it with `NA`s. R uses its recycling rules to populate every value of the column with `NA`.
**Exercise 11\.5 (Build a Loop)** Construct a `for` loop that will run `score` on all 343 rows of `combos`. The loop should run `score` on the first three entries of the *i*th row of `combos` and should store the results in the *i*th entry of `combos$prize`.
*Solution.* You can score the rows in `combos` with:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
After you run the for loop, `combos$prize` will contain the correct prize for each row. This exercise also tests the `score` function; `score` appears to work correctly for every possible slot combination:
```
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
```
We’re now ready to calculate the expected value of the prize. The expected value is the sum of `combos$prize` weighted by `combos$prob`. This is also the payout rate of the slot machine:
```
sum(combos$prize * combos$prob)
## 0.538014
```
Uh oh. The expected prize is about 0\.54, which means our slot machine only pays 54 cents on the dollar over the long run. Does this mean that the manufacturer of the Manitoba slot machines *was* lying?
No, because we ignored an important feature of the slot machine when we wrote `score`: a diamond is wild. You can treat a `DD` as any other symbol if it increases your prize, with one exception. You cannot make a `DD` a `C` unless you already have another `C` in your symbols (it’d be too easy if every `DD` automatically earned you $2\).
The best thing about `DD`s is that their effects are cumulative. For example, consider the combination `B`, `DD`, `B`. Not only does the `DD` count as a `B`, which would earn a prize of $10; the `DD` also doubles the prize to $20\.
Adding this behavior to our code is a little tougher than what we have done so far, but it involves all of the same principles. You can decide that your slot machine doesn’t use wilds and keep the code that we have. In that case, your slot machine will have a payout rate of about 54 percent. Or, you could rewrite your code to use wilds. If you do, you will find that your slot machine has a payout rate of 93 percent, one percent higher than the manufacturer’s claim. You can calculate this rate with the same method that we used in this section.
**Exercise 11\.6 (Challenge)** There are many ways to modify `score` that would count `DD`s as wild. If you would like to test your skill as an R programmer, try to write your own version of `score` that correctly handles diamonds.
If you would like a more modest challenge, study the following `score` code. It accounts for wild diamonds in a way that I find elegant and succinct. See if you can understand each step in the code and how it achieves its result.
*Solution.* Here is a version of score that handles wild diamonds:
```
score <- function(symbols) {
diamonds <- sum(symbols == "DD")
cherries <- sum(symbols == "C")
# identify case
# since diamonds are wild, only nondiamonds
# matter for three of a kind and all bars
slots <- symbols[symbols != "DD"]
same <- length(unique(slots)) == 1
bars <- slots %in% c("B", "BB", "BBB")
# assign prize
if (diamonds == 3) {
prize <- 100
} else if (same) {
payouts <- c("7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[slots[1]])
} else if (all(bars)) {
prize <- 5
} else if (cherries > 0) {
# diamonds count as cherries
# so long as there is one real cherry
prize <- c(0, 2, 5)[cherries + diamonds + 1]
} else {
prize <- 0
}
# double for each diamond
prize * 2^diamonds
}
```
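A few spot checks (my own, with prizes worked out from the code above) show the wild behavior in action. Three diamonds pay the $100 prize doubled three times; `B`, `DD`, `B` counts as three of a kind doubled; and a diamond plus one real cherry acts as two cherries, doubled:
```
score(c("DD", "DD", "DD"))
## 800
score(c("B", "DD", "B"))
## 20
score(c("C", "DD", "0"))
## 10
```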
**Exercise 11\.7 (Calculate the Expected Value)** Calculate the expected value of the slot machine when it uses the new `score` function. You can use the existing `combos` data frame, but you will need to build a `for` loop to recalculate `combos$prize`.
*Solution.* To update the expected value, just update `combos$prize`:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
Then recompute the expected value:
```
sum(combos$prize * combos$prob)
## 0.934356
```
This result vindicates the manufacturer’s claim. If anything, the slot machines seem more generous than the manufacturer stated.
11\.4 while Loops
-----------------
R has two companions to the `for` loop: the `while` loop and the `repeat` loop. A `while` loop reruns a chunk *while* a certain condition remains `TRUE`. To create a `while` loop, follow `while` by a condition and a chunk of code, like this:
```
while (condition) {
code
}
```
`while` will rerun `condition`, which should be a logical test, at the start of each loop. If `condition` evaluates to `TRUE`, `while` will run the code between its braces. If `condition` evaluates to `FALSE`, `while` will finish the loop.
Why might `condition` change from `TRUE` to `FALSE`? Presumably because the code inside your loop has changed whether the condition is still `TRUE`. If the code has no relationship to the condition, a `while` loop will run until you stop it. So be careful. You can stop a `while` loop by hitting Escape or by clicking on the stop\-sign icon at the top of the RStudio console pane. The icon will appear once the loop begins to run.
Like `for` loops, `while` loops do not return a result, so you must think about what you want the loop to return and save it to an object during the loop.
You can use `while` loops to do things that take a varying number of iterations, like calculating how long it takes to go broke playing slots (as follows). However, in practice, `while` loops are much less common than `for` loops in R:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
while (cash > 0) {
cash <- cash - 1 + play()
n <- n + 1
}
n
}
plays_till_broke(100)
## 260
```
11\.5 repeat Loops
------------------
`repeat` loops are even more basic than `while` loops. They will repeat a chunk of code until you tell them to stop (by hitting Escape) or until they encounter the command `break`, which will stop the loop.
You can use a `repeat` loop to recreate `plays_till_broke`, my function that simulates how long it takes to lose money while playing slots:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
repeat {
cash <- cash - 1 + play()
n <- n + 1
if (cash <= 0) {
break
}
}
n
}
plays_till_broke(100)
## 237
```
11\.6 Summary
-------------
You can repeat tasks in R with `for`, `while`, and `repeat` loops. To use `for`, give it a chunk of code to run and a set of objects to loop through. `for` will run the code chunk once for each object. If you wish to save the output of your loop, you can assign it to an object that exists outside of the loop.
Repetition plays an important role in data science. It is the basis for simulation, as well as for estimates of variance and probability. Loops are not the only way to create repetition in R (consider `replicate` for example), but they are one of the most popular ways.
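For instance, `replicate` reruns an expression and collects the results (my own example; your numbers will differ because `play` is random):
```
replicate(3, play())
## 0 0 5
```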
Unfortunately, loops in R can sometimes be slower than loops in other languages. As a result, R’s loops get a bad rap. This reputation is not entirely deserved, but it does highlight an important issue. Speed is essential to data analysis. When your code runs fast, you can work with bigger data and do more to it before you run out of time or computational power. [Speed](speed.html#speed) will teach you how to write fast `for` loops and fast code in general with R. There, you will learn to write vectorized code, a style of lightning\-fast code that takes advantage of all of R’s strengths.
11\.1 Expected Values
---------------------
The expected value of a random event is a type of weighted average; it is the sum of each possible outcome of the event, weighted by the probability that each outcome occurs:
\\\[
E(x) \= \\sum\_{i \= 1}^{n}\\left( x\_{i} \\cdot P(x\_{i}) \\right)
\\]
You can think of the expected value as the average prize that you would observe if you played the slot machine an infinite number of times. Let’s use the formula to calculate some simple expected values. Then we will apply the formula to your slot machine.
Do you remember the `die` you created in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice)?
```
die <- c(1, 2, 3, 4, 5, 6)
```
Each time you roll the die, it returns a value selected at random (one through six). You can find the expected value of rolling the die with the formula:
\\\[
E(\\text{die}) \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)
\\]
The \\(\\text{die}\_{i}\\)s are the possible outcomes of rolling the die: 1, 2, 3, 4, 5, and 6; and the \\(P(\\text{die}\_{i})\\)’s are the probabilities associated with each of the outcomes. If your die is fair, each outcome will occur with the same probability: 1/6\. So our equation simplifies to:
\\\[
\\begin{array}{rl}
E(\\text{die}) \& \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)\\\\
\& \= 1 \\cdot \\frac{1}{6} \+ 2 \\cdot \\frac{1}{6} \+ 3 \\cdot \\frac{1}{6} \+ 4 \\cdot \\frac{1}{6} \+ 5 \\cdot \\frac{1}{6} \+ 6 \\cdot \\frac{1}{6}\\\\
\& \= 3\.5\\\\
\\end{array}
\\]
Hence, the expected value of rolling a fair die is 3\.5\. You may notice that this is also the average value of the die. The expected value will equal the average if every outcome has the same chance of occurring.
But what if each outcome has a different chance of occurring? For example, we weighted our dice in [Packages and Help Pages](packages.html#packages) so that each die rolled 1, 2, 3, 4, and 5 with probability 1/8 and 6 with probability 3/8\. You can use the same formula to calculate the expected value in these conditions:
\\\[
\\begin{array}{rl}
E(die) \& \= 1 \\cdot \\frac{1}{8} \+ 2 \\cdot \\frac{1}{8} \+ 3 \\cdot \\frac{1}{8} \+ 4 \\cdot \\frac{1}{8} \+ 5 \\cdot \\frac{1}{8} \+ 6 \\cdot \\frac{3}{8}\\\\
\& \= 4\.125\\\\
\\end{array}
\\]
Hence, the expected value of a loaded die does not equal the average value of its outcomes. If you rolled a loaded die an infinite number of times, the average outcome would be 4\.125, which is higher than what you would expect from a fair die.
Notice that we did the same three things to calculate both of these expected values. We have:
* Listed out all of the possible outcomes
* Determined the *value* of each outcome (here just the value of the die)
* Calculated the probability that each outcome occurred
The expected value was then just the sum of the values in step 2 multiplied by the probabilities in step 3\.
You can use these steps to calculate more sophisticated expected values. For example, you could calculate the expected value of rolling a pair of weighted dice. Let’s do this step by step.
First, list out all of the possible outcomes. A total of 36 different outcomes can appear when you roll two dice. For example, you might roll (1, 1\), which notates one on the first die and one on the second die. Or, you may roll (1, 2\), one on the first die and two on the second. And so on. Listing out these combinations can be tedious, but R has a function that can help.
11\.2 expand.grid
-----------------
The `expand.grid` function in R provides a quick way to write out every combination of the elements in *n* vectors. For example, you can list every combination of two dice. To do so, run `expand.grid` on two copies of `die`:
```
rolls <- expand.grid(die, die)
```
`expand.grid` will return a data frame that contains every way to pair an element from the first `die` vector with an element from the second `die` vector. This will capture all 36 possible combinations of values:
```
rolls
## Var1 Var2
## 1 1 1
## 2 2 1
## 3 3 1
## ...
## 34 4 6
## 35 5 6
## 36 6 6
```
You can use `expand.grid` with more than two vectors if you like. For example, you could list every combination of rolling three dice with `expand.grid(die, die, die)` and every combination of rolling four dice with `expand.grid(die, die, die, die)`, and so on. `expand.grid` will always return a data frame that contains each possible combination of *n* elements from the *n* vectors. Each combination will contain exactly one element from each vector.
You can determine the value of each roll once you’ve made your list of outcomes. This will be the sum of the two dice, which you can calculate using R’s element\-wise execution:
```
rolls$value <- rolls$Var1 + rolls$Var2
head(rolls, 3)
## Var1 Var2 value
## 1 1 2
## 2 1 3
## 3 1 4
```
R will match up the elements in each vector before adding them together. As a result, each element of `value` will refer to the elements of `Var1` and `Var2` that appear in the same row.
Next, you must determine the probability that each combination appears. You can calculate this with a basic rule of probability:
*The probability that* n *independent, random events all occur is equal to the product of the probabilities that each random event occurs*.
Or more succinctly:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
So the probability that we roll a (1, 1\) will be equal to the probability that we roll a one on the first die, 1/8, times the probability that we roll a one on the second die, 1/8:
\\\[
\\begin{array}{rl}
P(1 \\\& 1\) \& \= P(1\) \\cdot P(1\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And the probability that we roll a (1, 2\) will be:
\\\[
\\begin{array}{rl}
P(1 \\\& 2\) \& \= P(1\) \\cdot P(2\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And so on.
Let me suggest a three\-step process for calculating these probabilities in R. First, we can look up the probabilities of rolling the values in `Var1`. We’ll do this with the lookup table that follows:
```
prob <- c("1" = 1/8, "2" = 1/8, "3" = 1/8, "4" = 1/8, "5" = 1/8, "6" = 3/8)
prob
## 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375
```
If you subset this table by `rolls$Var1`, you will get a vector of probabilities perfectly keyed to the values of `Var1`:
```
rolls$Var1
## 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6
prob[rolls$Var1]
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
rolls$prob1 <- prob[rolls$Var1]
head(rolls, 3)
## Var1 Var2 value prob1
## 1 1 2 0.125
## 2 1 3 0.125
## 3 1 4 0.125
```
Second, we can look up the probabilities of rolling the values in `Var2`:
```
rolls$prob2 <- prob[rolls$Var2]
head(rolls, 3)
## Var1 Var2 value prob1 prob2
## 1 1 2 0.125 0.125
## 2 1 3 0.125 0.125
## 3 1 4 0.125 0.125
```
Third, we can calculate the probability of rolling each combination by multiplying `prob1` by `prob2`:
```
rolls$prob <- rolls$prob1 * rolls$prob2
head(rolls, 3)
## Var1 Var2 value prob1 prob2 prob
## 1 1 2 0.125 0.125 0.015625
## 2 1 3 0.125 0.125 0.015625
## 3 1 4 0.125 0.125 0.015625
```
It is easy to calculate the expected value now that we have each outcome, the value of each outcome, and the probability of each outcome. The expected value will be the summation of the dice values multiplied by the dice probabilities:
```
sum(rolls$value * rolls$prob)
## 8.25
```
So the expected value of rolling two loaded dice is 8\.25\. If you rolled a pair of loaded dice an infinite number of times, the average sum would be 8\.25\. (If you are curious, the expected value of rolling a pair of fair dice is 7, which explains why 7 plays such a large role in dice games like craps.)
Now that you’ve warmed up, let’s use our method to calculate the expected value of the slot machine prize. We will follow the same steps we just took:
* We will list out every possible outcome of playing the machine. This will be a list of every combination of three slot symbols.
* We will calculate the probability of getting each combination when you play the machine.
* We will determine the prize that we would win for each combination.
When we are finished, we will have a data set that looks like this:
```
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
## ... and so on.
```
The expected value will then be the sum of the prizes multiplied by their probability of occuring:
\\\[
E(\\text{prize}) \= \\sum\_{i \= 1}^{n}\\left( \\text{prize}\_{i} \\cdot P(\\text{prize}\_{i}) \\right)
\\]
Ready to begin?
**Exercise 11\.1 (List the Combinations)** Use `expand.grid` to make a data frame that contains every possible combination of *three* symbols from the `wheel` vector:
```
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
```
Be sure to add the argument `stringsAsFactors = FALSE` to your `expand.grid` call; otherwise, `expand.grid` will save the combinations as factors, an unfortunate choice that will disrupt the `score` function.
*Solution.* To create a data frame of each combination of *three* symbols, you need to run `expand.grid` and give it *three* copies of `wheel`. The result will be a data frame with 343 rows, one for each unique combination of three slot symbols:
```
combos <- expand.grid(wheel, wheel, wheel, stringsAsFactors = FALSE)
combos
## Var1 Var2 Var3
## 1 DD DD DD
## 2 7 DD DD
## 3 BBB DD DD
## 4 BB DD DD
## 5 B DD DD
## 6 C DD DD
## ...
## 341 B 0 0
## 342 C 0 0
## 343 0 0 0
```
Now, let’s calculate the probability of getting each combination. You can use the probabilities contained in the `prob` argument of `get_symbols` to do this. These probabilities determine how frequently each symbol is chosen when your slot machine generates symbols. They were calculated after observing 345 plays of the Manitoba video lottery terminals. Zeroes have the largest chance of being selected (0\.52\) and cherries the least (0\.01\):
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52)
}
```
**Exercise 11\.2 (Make a Lookup Table)** Isolate the previous probabilities in a lookup table. What names will you use in your table?
*Solution.* Your names should match the input that you want to look up. In this case, the input will be the character strings that appear in `Var1`, `Var2`, and `Var3`. So your lookup table should look like this:
```
prob <- c("DD" = 0.03, "7" = 0.03, "BBB" = 0.06,
"BB" = 0.1, "B" = 0.25, "C" = 0.01, "0" = 0.52)
```
Now let’s look up our probabilities.
**Exercise 11\.3 (Lookup the Probabilities)** Look up the probabilities of getting the values in `Var1`. Then add them to `combos` as a column named `prob1`. Then do the same for `Var2` (`prob2`) and `Var3` (`prob3`).
*Solution.* Remember that you use R’s selection notation to look up values in a lookup table. The values that result will be keyed to the index that you use:
```
combos$prob1 <- prob[combos$Var1]
combos$prob2 <- prob[combos$Var2]
combos$prob3 <- prob[combos$Var3]
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3
## DD DD DD 0.03 0.03 0.03
## 7 DD DD 0.03 0.03 0.03
## BBB DD DD 0.06 0.03 0.03
```
Now how should we calculate the total probability of each combination? Our three slot symbols are all chosen independently, which means that the same rule that governed our dice probabilities governs our symbol probabilities:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
**Exercise 11\.4 (Calculate Probabilities for Each Combination)** Calculate the overall probabilities for each combination. Save them as a column named `prob` in `combos`, then check your work.
You can check that the math worked by summing the probabilities. The probabilities should add up to one, because one of the combinations *must* appear when you play the slot machine. In other words, some combination is certain to appear, so the total probability is one.
You can calculate the probabilities of every possible combination in one fell swoop with some element\-wise execution:
```
combos$prob <- combos$prob1 * combos$prob2 * combos$prob3
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob
## DD DD DD 0.03 0.03 0.03 0.000027
## 7 DD DD 0.03 0.03 0.03 0.000027
## BBB DD DD 0.06 0.03 0.03 0.000054
```
The sum of the probabilities is one, which suggests that our math is correct:
```
sum(combos$prob)
## 1
```
You only need to do one more thing before you can calculate the expected value: you must determine the prize for each combination in `combos`. You can calculate the prize with `score`. For example, we can calculate the prize for the first row of `combos` like this:
```
symbols <- c(combos[1, 1], combos[1, 2], combos[1, 3])
## "DD" "DD" "DD"
score(symbols)
## 800
```
However, there are 343 rows, which makes for tedious work if you plan to calculate the scores manually. It will be quicker to automate this task and have R do it for you, which you can do with a `for` loop.
11\.3 for Loops
---------------
A `for` loop repeats a chunk of code many times, once for each element in a set of input. `for` loops provide a way to tell R, “Do this for every value of that.” In R syntax, this looks like:
```
for (value in that) {
this
}
```
The `that` object should be a set of objects (often a vector of numbers or character strings). The for loop will run the code that appears between the braces once for each member of `that`. For example, the for loop below runs `print("one run")` once for each element in a vector of character strings:
```
for (value in c("My", "first", "for", "loop")) {
print("one run")
}
## "one run"
## "one run"
## "one run"
## "one run"
```
The `value` symbol in a for loop acts like an argument in a function. The for loop will create an object named `value` and assign it a new value on each run of the loop. The code in your loop can access this value by calling the `value` object.
What values will the for loop assign to `value`? It will use the elements in the set that you run the loop on. `for` starts with the first element and then assigns a different element to `value` on each run of the for loop, until all of the elements have been assigned to `value`. For example, the for loop below will run `print(value)` four times and will print out one element of `c("My", "second", "for", "loop")` each time:
```
for (value in c("My", "second", "for", "loop")) {
print(value)
}
## "My"
## "second"
## "for"
## "loop"
```
On the first run, the for loop substituted `"My"` for `value` in `print(value)`. On the second run, it substituted `"second"`, and so on until `for` had run `print(value)` once with every element in the set.
If you look at `value` after the loop runs, you will see that it still contains the value of the last element in the set:
```
value
## "loop"
```
I’ve been using the symbol `value` in my for loops, but there is nothing special about it. You can use any symbol you like in your loop to do the same thing as long as the symbol appears before `in` in the parentheses that follow `for`. For example, you could rewrite the previous loop with any of the following:
```
for (word in c("My", "second", "for", "loop")) {
print(word)
}
for (string in c("My", "second", "for", "loop")) {
print(string)
}
for (i in c("My", "second", "for", "loop")) {
print(i)
}
```
**Choose your symbols carefully**
R will run your loop in whichever environment you call it from. This is bad news if your loop uses object names that already exist in the environment. Your loop will overwrite the existing objects with the objects that it creates. This applies to the value symbol as well.
**For loops run on sets**
In many programming languages, `for` loops are designed to work with integers, not sets. You give the loop a starting value and an ending value, as well as an increment by which to advance the value between runs. The `for` loop then runs until the loop value exceeds the ending value.
You can recreate this effect in R by having a `for` loop execute on a set of integers, but don’t lose track of the fact that R’s `for` loops execute on members of a set, not sequences of integers.
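For example, here is a sketch of a counter\-style loop recreated in R. The set is just a vector of integers built with `seq`:
```
# count from 2 to 10 in steps of 2
for (i in seq(2, 10, by = 2)) {
  print(i)
}
## 2
## 4
## 6
## 8
## 10
```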
`for` loops are very useful in programming because they help you connect a piece of code with each element in a set. For example, we could use a `for` loop to run `score` once for each row in `combos`. However, R’s `for` loops have a shortcoming that you’ll want to know about before you start using them: `for` loops do not return output.
`for` loops are like Las Vegas: what happens in a `for` loop stays in a `for` loop. If you want to use the products of a `for` loop, you must write the `for` loop so that it saves its own output as it goes.
Our previous examples appeared to return output, but this was misleading. The examples worked because we called `print`, which always prints its arguments in the console (even if it is called from a function, a `for` loop, or anything else). Our `for` loops won’t return anything if you remove the `print` call:
```
for (value in c("My", "third", "for", "loop")) {
value
}
##
```
To save output from a `for` loop, you must write the loop so that it saves its own output as it runs. You can do this by creating an empty vector or list before you run the `for` loop. Then use the `for` loop to fill up the vector or list. When the `for` loop is finished, you’ll be able to access the vector or list, which will now have all of your results.
Let’s see this in action. The following code creates an empty vector of length 4:
```
chars <- vector(length = 4)
```
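A detail worth knowing: when you do not supply a `mode`, `vector` returns a *logical* vector filled with `FALSE`. R will quietly convert it to a character vector the first time you store a string in it:
```
chars
## FALSE FALSE FALSE FALSE
class(chars)
## "logical"
```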
The next loop will fill it with strings:
```
words <- c("My", "fourth", "for", "loop")
for (i in 1:4) {
chars[i] <- words[i]
}
chars
## "My" "fourth" "for" "loop"
```
This approach will usually require you to change the sets that you execute your `for` loop on. Instead of executing on a set of objects, execute on a set of integers that you can use to index both your object and your storage vector. This approach is very common in R. You’ll find in practice that you use `for` loops not so much to run code, but to fill up vectors and lists with the results of code.
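One refinement on this pattern, which you may see elsewhere even though the text does not use it: `seq_along` builds the index set for you, and it behaves sensibly even when the object has no elements:
```
# seq_along(words) is the same as 1:4 here, but it would be a
# zero-length set (rather than the surprising 1:0) if words were empty
for (i in seq_along(words)) {
  chars[i] <- words[i]
}
```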
Let’s use a `for` loop to calculate the prize for each row in `combos`. To begin, create a new column in `combos` to store the results of the `for` loop:
```
combos$prize <- NA
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 NA
## 7 DD DD 0.03 0.03 0.03 0.000027 NA
## BBB DD DD 0.06 0.03 0.03 0.000054 NA
```
The code creates a new column named `prize` and fills it with `NA`s. R uses its recycling rules to populate every value of the column with `NA`.
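You can see the same recycling at work in a smaller example; the single `NA` is repeated until it fills the column:
```
data.frame(x = 1:3, y = NA)
##   x  y
## 1 1 NA
## 2 2 NA
## 3 3 NA
```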
**Exercise 11\.5 (Build a Loop)** Construct a `for` loop that will run `score` on all 343 rows of `combos`. The loop should run `score` on the first three entries of the *i*th row of `combos` and should store the results in the *i*th entry of `combos$prize`.
*Solution.* You can score the rows in `combos` with:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
After you run the for loop, `combos$prize` will contain the correct prize for each row. This exercise also tests the `score` function; `score` appears to work correctly for every possible slot combination:
```
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
```
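As an aside, and not the approach this chapter teaches: once the logic is clear, you can let R manage the indexing with an apply\-style function such as `vapply`. A sketch that produces the same `prize` column:
```
# score each row; vapply checks that every result is a single number
combos$prize <- vapply(
  seq_len(nrow(combos)),
  function(i) score(c(combos[i, 1], combos[i, 2], combos[i, 3])),
  numeric(1)
)
```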
We’re now ready to calculate the expected value of the prize. The expected value is the sum of `combos$prize` weighted by `combos$prob`. This is also the payout rate of the slot machine:
```
sum(combos$prize * combos$prob)
## 0.538014
```
Uh oh. The expected prize is about 0\.54, which means our slot machine only pays 54 cents on the dollar over the long run. Does this mean that the manufacturer of the Manitoba slot machines *was* lying?
No, because we ignored an important feature of the slot machine when we wrote `score`: a diamond is wild. You can treat a `DD` as any other symbol if it increases your prize, with one exception. You cannot make a `DD` a `C` unless you already have another `C` in your symbols (it’d be too easy if every `DD` automatically earned you $2\).
The best thing about `DD`s is that their effects are cumulative. For example, consider the combination `B`, `DD`, `B`. Not only does the `DD` count as a `B`, which would earn a prize of $10; the `DD` also doubles the prize to $20\.
Adding this behavior to our code is a little tougher than what we have done so far, but it involves all of the same principles. You can decide that your slot machine doesn’t use wilds and keep the code that we have. In that case, your slot machine will have a payout rate of about 54 percent. Or, you could rewrite your code to use wilds. If you do, you will find that your slot machine has a payout rate of 93 percent, one percent higher than the manufacturer’s claim. You can calculate this rate with the same method that we used in this section.
**Exercise 11\.6 (Challenge)** There are many ways to modify `score` that would count `DD`s as wild. If you would like to test your skill as an R programmer, try to write your own version of `score` that correctly handles diamonds.
If you would like a more modest challenge, study the following `score` code. It accounts for wild diamonds in a way that I find elegant and succinct. See if you can understand each step in the code and how it achieves its result.
*Solution.* Here is a version of score that handles wild diamonds:
```
score <- function(symbols) {
diamonds <- sum(symbols == "DD")
cherries <- sum(symbols == "C")
# identify case
# since diamonds are wild, only nondiamonds
# matter for three of a kind and all bars
slots <- symbols[symbols != "DD"]
same <- length(unique(slots)) == 1
bars <- slots %in% c("B", "BB", "BBB")
# assign prize
if (diamonds == 3) {
prize <- 100
} else if (same) {
payouts <- c("7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[slots[1]])
} else if (all(bars)) {
prize <- 5
} else if (cherries > 0) {
# diamonds count as cherries
# so long as there is one real cherry
prize <- c(0, 2, 5)[cherries + diamonds + 1]
} else {
prize <- 0
}
# double for each diamond
prize * 2^diamonds
}
```
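As a quick sanity check, assuming you have defined this version in your session, it reproduces the behavior described above: the `DD` in `B`, `DD`, `B` both completes the three of a kind and doubles the prize, and three diamonds pay $100 doubled three times:
```
score(c("B", "DD", "B"))
## 20
score(c("DD", "DD", "DD"))
## 800
```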
**Exercise 11\.7 (Calculate the Expected Value)** Calculate the expected value of the slot machine when it uses the new `score` function. You can use the existing `combos` data frame, but you will need to build a `for` loop to recalculate `combos$prize`.
*Solution.* To update the expected value, just update `combos$prize`:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
Then recompute the expected value:
```
sum(combos$prize * combos$prob)
## 0.934356
```
This result vindicates the manufacturer’s claim. If anything, the slot machines seem more generous than the manufacturer stated.
11\.4 while Loops
-----------------
R has two companions to the `for` loop: the `while` loop and the `repeat` loop. A `while` loop reruns a chunk *while* a certain condition remains `TRUE`. To create a `while` loop, follow `while` by a condition and a chunk of code, like this:
```
while (condition) {
code
}
```
`while` will rerun `condition`, which should be a logical test, at the start of each loop. If `condition` evaluates to `TRUE`, `while` will run the code between its braces. If `condition` evaluates to `FALSE`, `while` will finish the loop.
Why might `condition` change from `TRUE` to `FALSE`? Presumably because the code inside your loop has changed whether the condition is still `TRUE`. If the code has no relationship to the condition, a `while` loop will run until you stop it. So be careful. You can stop a `while` loop by hitting Escape or by clicking on the stop\-sign icon at the top of the RStudio console pane. The icon will appear once the loop begins to run.
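One defensive pattern, a suggestion of mine rather than something from the text, is to build a counter into the condition so a runaway loop stops itself. As before, `condition` and `code` are placeholders:
```
i <- 0
while (condition && i < 10000) {
  code
  i <- i + 1
}
```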
Like `for` loops, `while` loops do not return a result, so you must think about what you want the loop to return and save it to an object during the loop.
You can use `while` loops to do things that take a varying number of iterations, like calculating how long it takes to go broke playing slots (as follows). However, in practice, `while` loops are much less common than `for` loops in R:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
while (cash > 0) {
cash <- cash - 1 + play()
n <- n + 1
}
n
}
plays_till_broke(100)
## 260
```
11\.5 repeat Loops
------------------
`repeat` loops are even more basic than `while` loops. They will repeat a chunk of code until you tell them to stop (by hitting Escape) or until they encounter the command `break`, which will stop the loop.
You can use a `repeat` loop to recreate `plays_till_broke`, my function that simulates how long it takes to lose money while playing slots:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
repeat {
cash <- cash - 1 + play()
n <- n + 1
if (cash <= 0) {
break
}
}
n
}
plays_till_broke(100)
## 237
```
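`break` also works inside `for` and `while` loops, and it has a companion, `next`, which abandons the current iteration and moves on to the following one. For example:
```
for (i in 1:5) {
  if (i == 3) {
    next # skip 3, but keep looping
  }
  print(i)
}
## 1
## 2
## 4
## 5
```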
11\.6 Summary
-------------
You can repeat tasks in R with `for`, `while`, and `repeat` loops. To use `for`, give it a chunk of code to run and a set of objects to loop through. `for` will run the code chunk once for each object. If you wish to save the output of your loop, you can assign it to an object that exists outside of the loop.
Repetition plays an important role in data science. It is the basis for simulation, as well as for estimates of variance and probability. Loops are not the only way to create repetition in R (consider `replicate` for example), but they are one of the most popular ways.
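For instance, `replicate` reruns an expression a fixed number of times and collects the results, which makes short work of repeating a simulation. This sketch assumes the `play` function from the slot machine project is defined:
```
# estimate the average number of plays before going broke
rounds <- replicate(100, plays_till_broke(100))
mean(rounds) # the average will vary from run to run
```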
Unfortunately, loops in R can sometimes be slower than loops in other languages. As a result, R’s loops get a bad rap. This reputation is not entirely deserved, but it does highlight an important issue. Speed is essential to data analysis. When your code runs fast, you can work with bigger data and do more to it before you run out of time or computational power. [Speed](speed.html#speed) will teach you how to write fast `for` loops and fast code in general with R. There, you will learn to write vectorized code, a style of lightning\-fast code that takes advantage of all of R’s strengths.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/loops.html |
11 Loops
========
Loops are R’s method for repeating a task, which makes them a useful tool for programming simulations. This chapter will teach you how to use R’s loop tools.
Let’s use the `score` function to solve a real\-world problem.
Your slot machine is modeled after real machines that were accused of fraud. The machines appeared to pay out 40 cents on the dollar, but the manufacturer claimed that they paid out 92 cents on the dollar. You can calculate the exact payout rate of your machine with the `score` program. The payout rate will be the expected value of the slot machine’s prize.
11\.1 Expected Values
---------------------
The expected value of a random event is a type of weighted average; it is the sum of each possible outcome of the event, weighted by the probability that each outcome occurs:
\\\[
E(x) \= \\sum\_{i \= 1}^{n}\\left( x\_{i} \\cdot P(x\_{i}) \\right)
\\]
You can think of the expected value as the average prize that you would observe if you played the slot machine an infinite number of times. Let’s use the formula to calculate some simple expected values. Then we will apply the formula to your slot machine.
Do you remember the `die` you created in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice)?
```
die <- c(1, 2, 3, 4, 5, 6)
```
Each time you roll the die, it returns a value selected at random (one through six). You can find the expected value of rolling the die with the formula:
\\\[
E(\\text{die}) \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)
\\]
The \\(\\text{die}\_{i}\\)s are the possible outcomes of rolling the die: 1, 2, 3, 4, 5, and 6; and the \\(P(\\text{die}\_{i})\\)’s are the probabilities associated with each of the outcomes. If your die is fair, each outcome will occur with the same probability: 1/6\. So our equation simplifies to:
\\\[
\\begin{array}{rl}
E(\\text{die}) \& \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)\\\\
\& \= 1 \\cdot \\frac{1}{6} \+ 2 \\cdot \\frac{1}{6} \+ 3 \\cdot \\frac{1}{6} \+ 4 \\cdot \\frac{1}{6} \+ 5 \\cdot \\frac{1}{6} \+ 6 \\cdot \\frac{1}{6}\\\\
\& \= 3\.5\\\\
\\end{array}
\\]
Hence, the expected value of rolling a fair die is 3\.5\. You may notice that this is also the average value of the die. The expected value will equal the average if every outcome has the same chance of occurring.
But what if each outcome has a different chance of occurring? For example, we weighted our dice in [Packages and Help Pages](packages.html#packages) so that each die rolled 1, 2, 3, 4, and 5 with probability 1/8 and 6 with probability 3/8\. You can use the same formula to calculate the expected value in these conditions:
\\\[
\\begin{array}{rl}
E(die) \& \= 1 \\cdot \\frac{1}{8} \+ 2 \\cdot \\frac{1}{8} \+ 3 \\cdot \\frac{1}{8} \+ 4 \\cdot \\frac{1}{8} \+ 5 \\cdot \\frac{1}{8} \+ 6 \\cdot \\frac{3}{8}\\\\
\& \= 4\.125\\\\
\\end{array}
\\]
Hence, the expected value of a loaded die does not equal the average value of its outcomes. If you rolled a loaded die an infinite number of times, the average outcome would be 4\.125, which is higher than what you would expect from a fair die.
Notice that we did the same three things to calculate both of these expected values. We have:
* Listed out all of the possible outcomes
* Determined the *value* of each outcome (here just the value of the die)
* Calculated the probability that each outcome occurred
The expected value was then just the sum of the values in step 2 multiplied by the probabilities in step 3\.
You can use these steps to calculate more sophisticated expected values. For example, you could calculate the expected value of rolling a pair of weighted dice. Let’s do this step by step.
First, list out all of the possible outcomes. A total of 36 different outcomes can appear when you roll two dice. For example, you might roll (1, 1\), which notates one on the first die and one on the second die. Or, you may roll (1, 2\), one on the first die and two on the second. And so on. Listing out these combinations can be tedious, but R has a function that can help.
11\.2 expand.grid
-----------------
The `expand.grid` function in R provides a quick way to write out every combination of the elements in *n* vectors. For example, you can list every combination of two dice. To do so, run `expand.grid` on two copies of `die`:
```
rolls <- expand.grid(die, die)
```
`expand.grid` will return a data frame that contains every way to pair an element from the first `die` vector with an element from the second `die` vector. This will capture all 36 possible combinations of values:
```
rolls
## Var1 Var2
## 1 1 1
## 2 2 1
## 3 3 1
## ...
## 34 4 6
## 35 5 6
## 36 6 6
```
You can use `expand.grid` with more than two vectors if you like. For example, you could list every combination of rolling three dice with `expand.grid(die, die, die)` and every combination of rolling four dice with `expand.grid(die, die, die, die)`, and so on. `expand.grid` will always return a data frame that contains each possible combination of *n* elements from the *n* vectors. Each combination will contain exactly one element from each vector.
You can determine the value of each roll once you’ve made your list of outcomes. This will be the sum of the two dice, which you can calculate using R’s element\-wise execution:
```
rolls$value <- rolls$Var1 + rolls$Var2
head(rolls, 3)
## Var1 Var2 value
## 1 1 2
## 2 1 3
## 3 1 4
```
R will match up the elements in each vector before adding them together. As a result, each element of `value` will refer to the elements of `Var1` and `Var2` that appear in the same row.
Next, you must determine the probability that each combination appears. You can calculate this with a basic rule of probability:
*The probability that* n *independent, random events all occur is equal to the product of the probabilities that each random event occurs*.
Or more succinctly:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
So the probability that we roll a (1, 1\) will be equal to the probability that we roll a one on the first die, 1/8, times the probability that we roll a one on the second die, 1/8:
\\\[
\\begin{array}{rl}
P(1 \\\& 1\) \& \= P(1\) \\cdot P(1\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And the probability that we roll a (1, 2\) will be:
\\\[
\\begin{array}{rl}
P(1 \\\& 2\) \& \= P(1\) \\cdot P(2\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And so on.
Let me suggest a three\-step process for calculating these probabilities in R. First, we can look up the probabilities of rolling the values in `Var1`. We’ll do this with the lookup table that follows:
```
prob <- c("1" = 1/8, "2" = 1/8, "3" = 1/8, "4" = 1/8, "5" = 1/8, "6" = 3/8)
prob
## 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375
```
If you subset this table by `rolls$Var1`, you will get a vector of probabilities perfectly keyed to the values of `Var1`:
```
rolls$Var1
## 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6
prob[rolls$Var1]
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
rolls$prob1 <- prob[rolls$Var1]
head(rolls, 3)
## Var1 Var2 value prob1
## 1 1 2 0.125
## 2 1 3 0.125
## 3 1 4 0.125
```
Second, we can look up the probabilities of rolling the values in `Var2`:
```
rolls$prob2 <- prob[rolls$Var2]
head(rolls, 3)
## Var1 Var2 value prob1 prob2
## 1 1 2 0.125 0.125
## 2 1 3 0.125 0.125
## 3 1 4 0.125 0.125
```
Third, we can calculate the probability of rolling each combination by multiplying `prob1` by `prob2`:
```
rolls$prob <- rolls$prob1 * rolls$prob2
head(rolls, 3)
## Var1 Var2 value prob1 prob2 prob
## 1 1 2 0.125 0.125 0.015625
## 2 1 3 0.125 0.125 0.015625
## 3 1 4 0.125 0.125 0.015625
```
It is easy to calculate the expected value now that we have each outcome, the value of each outcome, and the probability of each outcome. The expected value will be the summation of the dice values multiplied by the dice probabilities:
```
sum(rolls$value * rolls$prob)
## 8.25
```
So the expected value of rolling two loaded dice is 8\.25\. If you rolled a pair of loaded dice an infinite number of times, the average sum would be 8\.25\. (If you are curious, the expected value of rolling a pair of fair dice is 7, which explains why 7 plays such a large role in dice games like craps.)
Now that you’ve warmed up, let’s use our method to calculate the expected value of the slot machine prize. We will follow the same steps we just took:
* We will list out every possible outcome of playing the machine. This will be a list of every combination of three slot symbols.
* We will calculate the probability of getting each combination when you play the machine.
* We will determine the prize that we would win for each combination.
When we are finished, we will have a data set that looks like this:
```
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
## ... and so on.
```
The expected value will then be the sum of the prizes multiplied by their probability of occuring:
\\\[
E(\\text{prize}) \= \\sum\_{i \= 1}^{n}\\left( \\text{prize}\_{i} \\cdot P(\\text{prize}\_{i}) \\right)
\\]
Ready to begin?
**Exercise 11\.1 (List the Combinations)** Use `expand.grid` to make a data frame that contains every possible combination of *three* symbols from the `wheel` vector:
```
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
```
Be sure to add the argument `stringsAsFactors = FALSE` to your `expand.grid` call; otherwise, `expand.grid` will save the combinations as factors, an unfortunate choice that will disrupt the `score` function.
*Solution.* To create a data frame of each combination of *three* symbols, you need to run `expand.grid` and give it *three* copies of `wheel`. The result will be a data frame with 343 rows, one for each unique combination of three slot symbols:
```
combos <- expand.grid(wheel, wheel, wheel, stringsAsFactors = FALSE)
combos
## Var1 Var2 Var3
## 1 DD DD DD
## 2 7 DD DD
## 3 BBB DD DD
## 4 BB DD DD
## 5 B DD DD
## 6 C DD DD
## ...
## 341 B 0 0
## 342 C 0 0
## 343 0 0 0
```
Now, let’s calculate the probability of getting each combination. You can use the probabilities contained in the `prob` argument of `get_symbols` to do this. These probabilities determine how frequently each symbol is chosen when your slot machine generates symbols. They were calculated after observing 345 plays of the Manitoba video lottery terminals. Zeroes have the largest chance of being selected (0\.52\) and cherries the least (0\.01\):
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52)
}
```
**Exercise 11\.2 (Make a Lookup Table)** Isolate the previous probabilities in a lookup table. What names will you use in your table?
*Solution.* Your names should match the input that you want to look up. In this case, the input will be the character strings that appear in `Var1`, `Var2`, and `Var3`. So your lookup table should look like this:
```
prob <- c("DD" = 0.03, "7" = 0.03, "BBB" = 0.06,
"BB" = 0.1, "B" = 0.25, "C" = 0.01, "0" = 0.52)
```
Now let’s look up our probabilities.
**Exercise 11\.3 (Lookup the Probabilities)** Look up the probabilities of getting the values in `Var1`. Then add them to `combos` as a column named `prob1`. Then do the same for `Var2` (`prob2`) and `Var3` (`prob3`).
*Solution.* Remember that you use R’s selection notation to look up values in a lookup table. The values that result will be keyed to the index that you use:
```
combos$prob1 <- prob[combos$Var1]
combos$prob2 <- prob[combos$Var2]
combos$prob3 <- prob[combos$Var3]
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3
## DD DD DD 0.03 0.03 0.03
## 7 DD DD 0.03 0.03 0.03
## BBB DD DD 0.06 0.03 0.03
```
Now how should we calculate the total probability of each combination? Our three slot symbols are all chosen independently, which means that the same rule that governed our dice probabilities governs our symbol probabilities:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
**Exercise 11\.4 (Calculate Probabilities for Each Combination)** Calculate the overall probabilities for each combination. Save them as a column named `prob` in `combos`, then check your work.
You can check that the math worked by summing the probabilities. The probabilities should add up to one, because one of the combinations *must* appear when you play the slot machine. In other words, a combination will appear, with probability of one.
You can calculate the probabilities of every possible combination in one fell swoop with some element\-wise execution:
```
combos$prob <- combos$prob1 * combos$prob2 * combos$prob3
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob
## DD DD DD 0.03 0.03 0.03 0.000027
## 7 DD DD 0.03 0.03 0.03 0.000027
## BBB DD DD 0.06 0.03 0.03 0.000054
```
The sum of the probabilities is one, which suggests that our math is correct:
```
sum(combos$prob)
## 1
```
You only need to do one more thing before you can calculate the expected value: you must determine the prize for each combination in `combos`. You can calculate the prize with `score`. For example, we can calculate the prize for the first row of `combos` like this:
```
symbols <- c(combos[1, 1], combos[1, 2], combos[1, 3])
## "DD" "DD" "DD"
score(symbols)
## 800
```
However there are 343 rows, which makes for tedious work if you plan to calculate the scores manually. It will be quicker to automate this task and have R do it for you, which you can do with a `for` loop.
11\.3 for Loops
---------------
A `for` loop repeats a chunk of code many times, once for each element in a set of input. `for` loops provide a way to tell R, “Do this for every value of that.” In R syntax, this looks like:
```
for (value in that) {
this
}
```
The `that` object should be a set of objects (often a vector of numbers or character strings). The for loop will run the code in that appears between the braces once for each member of `that`. For example, the for loop below runs `print("one run")` once for each element in a vector of character strings:
```
for (value in c("My", "first", "for", "loop")) {
print("one run")
}
## "one run"
## "one run"
## "one run"
## "one run"
```
The `value` symbol in a for loop acts like an argument in a function. The for loop will create an object named `value` and assign it a new value on each run of the loop. The code in your loop can access this value by calling the `value` object.
What values will the for loop assign to `value`? It will use the elements in the set that you run the loop on. `for` starts with the first element and then assigns a different element to `value` on each run of the for loop, until all of the elements have been assigned to `value`. For example, the for loop below will run `print(value)` four times and will print out one element of `c("My", "second", "for", "loop")` each time:
```
for (value in c("My", "second", "for", "loop")) {
print(value)
}
## "My"
## "second"
## "for"
## "loop"
```
On the first run, the for loop substituted `"My"` for `value` in `print(value)`. On the second run it substituted `"second"`, and so on until `for` had run `print(value)` once with every element in the set:
If you look at `value` after the loop runs, you will see that it still contains the value of the last element in the set:
```
value
## "loop"
```
I’ve been using the symbol `value` in my for loops, but there is nothing special about it. You can use any symbol you like in your loop to do the same thing as long as the symbol appears before `in` in the parentheses that follow `for`. For example, you could rewrite the previous loop with any of the following:
```
for (word in c("My", "second", "for", "loop")) {
print(word)
}
for (string in c("My", "second", "for", "loop")) {
print(string)
}
for (i in c("My", "second", "for", "loop")) {
print(i)
}
```
**Choose your symbols carefully**
R will run your loop in whichever environment you call it from. This is bad news if your loop uses object names that already exist in the environment. Your loop will overwrite the existing objects with the objects that it creates. This applies to the value symbol as well.
**For loops run on sets**
In many programming languages, `for` loops are designed to work with integers, not sets. You give the loop a starting value and an ending value, as well as an increment to advance the value by between loops. The `for` loop then runs until the loop value exceeds the ending value.
You can recreate this effect in R by having a `for` loop execute on a set of integers, but don’t lose track of the fact that R’s `for` loops execute on members of a set, not sequences of integers.
`for` loops are very useful in programming because they help you connect a piece of code with each element in a set. For example, we could use a `for` loop to run `score` once for each row in `combos`. However, R’s `for` loops have a shortcoming that you’ll want to know about before you start using them: `for` loops do not return output.
`for` loops are like Las Vegas: what happens in a `for` loop stays in a `for` loop. If you want to use the products of a `for` loop, you must write the `for` loop so that it saves its own output as it goes.
Our previous examples appeared to return output, but this was misleading. The examples worked because we called `print`, which always prints its arguments in the console (even if it is called from a function, a `for` loop, or anything else). Our `for` loops won’t return anything if you remove the `print` call:
```
for (value in c("My", "third", "for", "loop")) {
value
}
##
```
To save output from a `for` loop, you must write the loop so that it saves its own output as it runs. You can do this by creating an empty vector or list before you run the `for` loop. Then use the `for` loop to fill up the vector or list. When the `for` loop is finished, you’ll be able to access the vector or list, which will now have all of your results.
Let’s see this in action. The following code creates an empty vector of length 4:
```
chars <- vector(length = 4)
```
The next loop will fill it with strings:
```
words <- c("My", "fourth", "for", "loop")
for (i in 1:4) {
chars[i] <- words[i]
}
chars
## "My" "fourth" "for" "loop"
```
This approach will usually require you to change the sets that you execute your `for` loop on. Instead of executing on a set of objects, execute on a set of integers that you can use to index both your object and your storage vector. This approach is very common in R. You’ll find in practice that you use `for` loops not so much to run code, but to fill up vectors and lists with the results of code.
Let’s use a `for` loop to calculate the prize for each row in `combos`. To begin, create a new column in `combos` to store the results of the `for` loop:
```
combos$prize <- NA
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 NA
## 7 DD DD 0.03 0.03 0.03 0.000027 NA
## BBB DD DD 0.06 0.03 0.03 0.000054 NA
```
The code creates a new column named prize and fills it with `NA`s. R uses its recycling rules to populate every value of the column with `NA`.
**Exercise 11\.5 (Build a Loop)** Construct a `for` loop that will run `score` on all 343 rows of `combos`. The loop should run `score` on the first three entries of the \_i\_th row of `combos` and should store the results in the \_i\_th entry of `combos$prize`.
*Solution.* You can score the rows in `combos` with:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
After you run the for loop, `combos$prize` will contain the correct prize for each row. This exercise also tests the `score` function; `score` appears to work correctly for every possible slot combination:
```
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
```
We’re now ready to calculate the expected value of the prize. The expected value is the sum of `combos$prize` weighted by `combos$prob`. This is also the payout rate of the slot machine:
```
sum(combos$prize * combos$prob)
## 0.538014
```
Uh oh. The expected prize is about 0\.54, which means our slot machine only pays 54 cents on the dollar over the long run. Does this mean that the manufacturer of the Manitoba slot machines *was* lying?
No, because we ignored an important feature of the slot machine when we wrote `score`: a diamond is wild. You can treat a `DD` as any other symbol if it increases your prize, with one exception. You cannot make a `DD` a `C` unless you already have another `C` in your symbols (it’d be too easy if every `DD` automatically earned you $2\).
The best thing about `DD`s is that their effects are cumulative. For example, consider the combination `B`, `DD`, `B`. Not only does the `DD` count as a `B`, which would earn a prize of $10; the `DD` also doubles the prize to $20\.
Adding this behavior to our code is a little tougher than what we have done so far, but it involves all of the same principles. You can decide that your slot machine doesn’t use wilds and keep the code that we have. In that case, your slot machine will have a payout rate of about 54 percent. Or, you could rewrite your code to use wilds. If you do, you will find that your slot machine has a payout rate of 93 percent, one percent higher than the manufacturer’s claim. You can calculate this rate with the same method that we used in this section.
**Exercise 11\.6 (Challenge)** There are many ways to modify `score` that would count `DD`s as wild. If you would like to test your skill as an R programmer, try to write your own version of `score` that correctly handles diamonds.
If you would like a more modest challenge, study the following `score` code. It accounts for wild diamonds in a way that I find elegant and succinct. See if you can understand each step in the code and how it achieves its result.
*Solution.* Here is a version of score that handles wild diamonds:
```
score <- function(symbols) {
diamonds <- sum(symbols == "DD")
cherries <- sum(symbols == "C")
# identify case
# since diamonds are wild, only nondiamonds
# matter for three of a kind and all bars
slots <- symbols[symbols != "DD"]
same <- length(unique(slots)) == 1
bars <- slots %in% c("B", "BB", "BBB")
# assign prize
if (diamonds == 3) {
prize <- 100
} else if (same) {
payouts <- c("7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[slots[1]])
} else if (all(bars)) {
prize <- 5
} else if (cherries > 0) {
# diamonds count as cherries
# so long as there is one real cherry
prize <- c(0, 2, 5)[cherries + diamonds + 1]
} else {
prize <- 0
}
# double for each diamond
prize * 2^diamonds
}
```
**Exercise 11\.7 (Calculate the Expected Value)** Calculate the expected value of the slot machine when it uses the new `score` function. You can use the existing `combos` data frame, but you will need to build a `for` loop to recalculate `combos$prize`.
To update the expected value, just update `combos$prize`:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
Then recompute the expected value:
```
sum(combos$prize * combos$prob)
## 0.934356
```
This result vindicates the manufacturer’s claim. If anything, the slot machines seem more generous than the manufacturer stated.
11\.4 while Loops
-----------------
R has two companions to the `for` loop: the `while` loop and the `repeat` loop. A `while` loop reruns a chunk *while* a certain condition remains `TRUE`. To create a `while` loop, follow `while` by a condition and a chunk of code, like this:
```
while (condition) {
code
}
```
`while` will rerun `condition`, which should be a logical test, at the start of each loop. If `condition` evaluates to `TRUE`, `while` will run the code between its braces. If `condition` evaluates to `FALSE`, `while` will finish the loop.
Why might `condition` change from `TRUE` to `FALSE`? Presumably because the code inside your loop has changed whether the condition is still `TRUE`. If the code has no relationship to the condition, a `while` loop will run until you stop it. So be careful. You can stop a `while` loop by hitting Escape or by clicking on the stop\-sign icon at the top of the RStudio console pane. The icon will appear once the loop begins to run.
Like `for` loops, `while` loops do not return a result, so you must think about what you want the loop to return and save it to an object during the loop.
You can use `while` loops to do things that take a varying number of iterations, like calculating how long it takes to go broke playing slots (as follows). However, in practice, `while` loops are much less common than `for` loops in R:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
while (cash > 0) {
cash <- cash - 1 + play()
n <- n + 1
}
n
}
plays_till_broke(100)
## 260
```
11\.5 repeat Loops
------------------
`repeat` loops are even more basic than `while` loops. They will repeat a chunk of code until you tell them to stop (by hitting Escape) or until they encounter the command `break`, which will stop the loop.
You can use a `repeat` loop to recreate `plays_till_broke`, my function that simulates how long it takes to lose money while playing slots:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
repeat {
cash <- cash - 1 + play()
n <- n + 1
if (cash <= 0) {
break
}
}
n
}
plays_till_broke(100)
## 237
```
11\.6 Summary
-------------
You can repeat tasks in R with `for`, `while`, and `repeat` loops. To use `for`, give it a chunk of code to run and a set of objects to loop through. `for` will run the code chunk once for each object. If you wish to save the output of your loop, you can assign it to an object that exists outside of the loop.
Repetition plays an important role in data science. It is the basis for simulation, as well as for estimates of variance and probability. Loops are not the only way to create repetition in R (consider `replicate` for example), but they are one of the most popular ways.
Unfortunately, loops in R can sometimes be slower than loops in other languages. As a result, R’s loops get a bad rap. This reputation is not entirely deserved, but it does highlight an important issue. Speed is essential to data analysis. When your code runs fast, you can work with bigger data and do more to it before you run out of time or computational power. [Speed](speed.html#speed) will teach you how to write fast `for` loops and fast code in general with R. There, you will learn to write vectorized code, a style of lightning\-fast code that takes advantage of all of R’s strengths.
11\.1 Expected Values
---------------------
The expected value of a random event is a type of weighted average; it is the sum of each possible outcome of the event, weighted by the probability that each outcome occurs:
\\\[
E(x) \= \\sum\_{i \= 1}^{n}\\left( x\_{i} \\cdot P(x\_{i}) \\right)
\\]
You can think of the expected value as the average prize that you would observe if you played the slot machine an infinite number of times. Let’s use the formula to calculate some simple expected values. Then we will apply the formula to your slot machine.
Do you remember the `die` you created in [Project 1: Weighted Dice](project-1-weighted-dice.html#project-1-weighted-dice)?
```
die <- c(1, 2, 3, 4, 5, 6)
```
Each time you roll the die, it returns a value selected at random (one through six). You can find the expected value of rolling the die with the formula:
\\\[
E(\\text{die}) \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)
\\]
The \\(\\text{die}\_{i}\\)s are the possible outcomes of rolling the die: 1, 2, 3, 4, 5, and 6; and the \\(P(\\text{die}\_{i})\\)’s are the probabilities associated with each of the outcomes. If your die is fair, each outcome will occur with the same probability: 1/6\. So our equation simplifies to:
\\\[
\\begin{array}{rl}
E(\\text{die}) \& \= \\sum\_{i \= 1}^{n}\\left( \\text{die}\_{i} \\cdot P(\\text{die}\_{i}) \\right)\\\\
\& \= 1 \\cdot \\frac{1}{6} \+ 2 \\cdot \\frac{1}{6} \+ 3 \\cdot \\frac{1}{6} \+ 4 \\cdot \\frac{1}{6} \+ 5 \\cdot \\frac{1}{6} \+ 6 \\cdot \\frac{1}{6}\\\\
\& \= 3\.5\\\\
\\end{array}
\\]
Hence, the expected value of rolling a fair die is 3\.5\. You may notice that this is also the average value of the die. The expected value will equal the average if every outcome has the same chance of occurring.
But what if each outcome has a different chance of occurring? For example, we weighted our dice in [Packages and Help Pages](packages.html#packages) so that each die rolled 1, 2, 3, 4, and 5 with probability 1/8 and 6 with probability 3/8\. You can use the same formula to calculate the expected value in these conditions:
\\\[
\\begin{array}{rl}
E(die) \& \= 1 \\cdot \\frac{1}{8} \+ 2 \\cdot \\frac{1}{8} \+ 3 \\cdot \\frac{1}{8} \+ 4 \\cdot \\frac{1}{8} \+ 5 \\cdot \\frac{1}{8} \+ 6 \\cdot \\frac{3}{8}\\\\
\& \= 4\.125\\\\
\\end{array}
\\]
Hence, the expected value of a loaded die does not equal the average value of its outcomes. If you rolled a loaded die an infinite number of times, the average outcome would be 4\.125, which is higher than what you would expect from a fair die.
Notice that we did the same three things to calculate both of these expected values. We have:
* Listed out all of the possible outcomes
* Determined the *value* of each outcome (here just the value of the die)
* Calculated the probability that each outcome occurred
The expected value was then just the sum of the values in step 2 multiplied by the probabilities in step 3\.
You can use these steps to calculate more sophisticated expected values. For example, you could calculate the expected value of rolling a pair of weighted dice. Let’s do this step by step.
First, list out all of the possible outcomes. A total of 36 different outcomes can appear when you roll two dice. For example, you might roll (1, 1\), which notates one on the first die and one on the second die. Or, you may roll (1, 2\), one on the first die and two on the second. And so on. Listing out these combinations can be tedious, but R has a function that can help.
11\.2 expand.grid
-----------------
The `expand.grid` function in R provides a quick way to write out every combination of the elements in *n* vectors. For example, you can list every combination of two dice. To do so, run `expand.grid` on two copies of `die`:
```
rolls <- expand.grid(die, die)
```
`expand.grid` will return a data frame that contains every way to pair an element from the first `die` vector with an element from the second `die` vector. This will capture all 36 possible combinations of values:
```
rolls
## Var1 Var2
## 1 1 1
## 2 2 1
## 3 3 1
## ...
## 34 4 6
## 35 5 6
## 36 6 6
```
You can use `expand.grid` with more than two vectors if you like. For example, you could list every combination of rolling three dice with `expand.grid(die, die, die)` and every combination of rolling four dice with `expand.grid(die, die, die, die)`, and so on. `expand.grid` will always return a data frame that contains each possible combination of *n* elements from the *n* vectors. Each combination will contain exactly one element from each vector.
You can determine the value of each roll once you’ve made your list of outcomes. This will be the sum of the two dice, which you can calculate using R’s element\-wise execution:
```
rolls$value <- rolls$Var1 + rolls$Var2
head(rolls, 3)
## Var1 Var2 value
## 1 1 2
## 2 1 3
## 3 1 4
```
R will match up the elements in each vector before adding them together. As a result, each element of `value` will refer to the elements of `Var1` and `Var2` that appear in the same row.
Next, you must determine the probability that each combination appears. You can calculate this with a basic rule of probability:
*The probability that* n *independent, random events all occur is equal to the product of the probabilities that each random event occurs*.
Or more succinctly:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
So the probability that we roll a (1, 1\) will be equal to the probability that we roll a one on the first die, 1/8, times the probability that we roll a one on the second die, 1/8:
\\\[
\\begin{array}{rl}
P(1 \\\& 1\) \& \= P(1\) \\cdot P(1\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And the probability that we roll a (1, 2\) will be:
\\\[
\\begin{array}{rl}
P(1 \\\& 2\) \& \= P(1\) \\cdot P(2\) \\\\
\& \= \\frac{1}{8} \\cdot \\frac{1}{8}\\\\
\& \= \\frac{1}{64}
\\end{array}
\\]
And so on.
Let me suggest a three\-step process for calculating these probabilities in R. First, we can look up the probabilities of rolling the values in `Var1`. We’ll do this with the lookup table that follows:
```
prob <- c("1" = 1/8, "2" = 1/8, "3" = 1/8, "4" = 1/8, "5" = 1/8, "6" = 3/8)
prob
## 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375
```
If you subset this table by `rolls$Var1`, you will get a vector of probabilities perfectly keyed to the values of `Var1`:
```
rolls$Var1
## 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6
prob[rolls$Var1]
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
## 1 2 3 4 5 6 1 2 3 4 5 6
## 0.125 0.125 0.125 0.125 0.125 0.375 0.125 0.125 0.125 0.125 0.125 0.375
rolls$prob1 <- prob[rolls$Var1]
head(rolls, 3)
## Var1 Var2 value prob1
## 1 1 2 0.125
## 2 1 3 0.125
## 3 1 4 0.125
```
Second, we can look up the probabilities of rolling the values in `Var2`:
```
rolls$prob2 <- prob[rolls$Var2]
head(rolls, 3)
## Var1 Var2 value prob1 prob2
## 1 1 2 0.125 0.125
## 2 1 3 0.125 0.125
## 3 1 4 0.125 0.125
```
Third, we can calculate the probability of rolling each combination by multiplying `prob1` by `prob2`:
```
rolls$prob <- rolls$prob1 * rolls$prob2
head(rolls, 3)
## Var1 Var2 value prob1 prob2 prob
## 1 1 2 0.125 0.125 0.015625
## 2 1 3 0.125 0.125 0.015625
## 3 1 4 0.125 0.125 0.015625
```
It is easy to calculate the expected value now that we have each outcome, the value of each outcome, and the probability of each outcome. The expected value will be the summation of the dice values multiplied by the dice probabilities:
```
sum(rolls$value * rolls$prob)
## 8.25
```
So the expected value of rolling two loaded dice is 8\.25\. If you rolled a pair of loaded dice an infinite number of times, the average sum would be 8\.25\. (If you are curious, the expected value of rolling a pair of fair dice is 7, which explains why 7 plays such a large role in dice games like craps.)
Now that you’ve warmed up, let’s use our method to calculate the expected value of the slot machine prize. We will follow the same steps we just took:
* We will list out every possible outcome of playing the machine. This will be a list of every combination of three slot symbols.
* We will calculate the probability of getting each combination when you play the machine.
* We will determine the prize that we would win for each combination.
When we are finished, we will have a data set that looks like this:
```
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
## ... and so on.
```
The expected value will then be the sum of the prizes multiplied by their probability of occuring:
\\\[
E(\\text{prize}) \= \\sum\_{i \= 1}^{n}\\left( \\text{prize}\_{i} \\cdot P(\\text{prize}\_{i}) \\right)
\\]
Ready to begin?
**Exercise 11\.1 (List the Combinations)** Use `expand.grid` to make a data frame that contains every possible combination of *three* symbols from the `wheel` vector:
```
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
```
Be sure to add the argument `stringsAsFactors = FALSE` to your `expand.grid` call; otherwise, `expand.grid` will save the combinations as factors, an unfortunate choice that will disrupt the `score` function.
*Solution.* To create a data frame of each combination of *three* symbols, you need to run `expand.grid` and give it *three* copies of `wheel`. The result will be a data frame with 343 rows, one for each unique combination of three slot symbols:
```
combos <- expand.grid(wheel, wheel, wheel, stringsAsFactors = FALSE)
combos
## Var1 Var2 Var3
## 1 DD DD DD
## 2 7 DD DD
## 3 BBB DD DD
## 4 BB DD DD
## 5 B DD DD
## 6 C DD DD
## ...
## 341 B 0 0
## 342 C 0 0
## 343 0 0 0
```
Now, let’s calculate the probability of getting each combination. You can use the probabilities contained in the `prob` argument of `get_symbols` to do this. These probabilities determine how frequently each symbol is chosen when your slot machine generates symbols. They were calculated after observing 345 plays of the Manitoba video lottery terminals. Zeroes have the largest chance of being selected (0\.52\) and cherries the least (0\.01\):
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52)
}
```
**Exercise 11\.2 (Make a Lookup Table)** Isolate the previous probabilities in a lookup table. What names will you use in your table?
*Solution.* Your names should match the input that you want to look up. In this case, the input will be the character strings that appear in `Var1`, `Var2`, and `Var3`. So your lookup table should look like this:
```
prob <- c("DD" = 0.03, "7" = 0.03, "BBB" = 0.06,
"BB" = 0.1, "B" = 0.25, "C" = 0.01, "0" = 0.52)
```
Now let’s look up our probabilities.
**Exercise 11\.3 (Lookup the Probabilities)** Look up the probabilities of getting the values in `Var1`. Then add them to `combos` as a column named `prob1`. Then do the same for `Var2` (`prob2`) and `Var3` (`prob3`).
*Solution.* Remember that you use R’s selection notation to look up values in a lookup table. The values that result will be keyed to the index that you use:
```
combos$prob1 <- prob[combos$Var1]
combos$prob2 <- prob[combos$Var2]
combos$prob3 <- prob[combos$Var3]
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3
## DD DD DD 0.03 0.03 0.03
## 7 DD DD 0.03 0.03 0.03
## BBB DD DD 0.06 0.03 0.03
```
Now how should we calculate the total probability of each combination? Our three slot symbols are all chosen independently, which means that the same rule that governed our dice probabilities governs our symbol probabilities:
\\\[
P(A \\\& B \\\& C \\\& ...) \= P(A) \\cdot P(B) \\cdot P(C) \\cdot ...
\\]
**Exercise 11\.4 (Calculate Probabilities for Each Combination)** Calculate the overall probabilities for each combination. Save them as a column named `prob` in `combos`, then check your work.
You can check that the math worked by summing the probabilities. The probabilities should add up to one, because one of the combinations *must* appear when you play the slot machine. In other words, a combination will appear, with probability of one.
You can calculate the probabilities of every possible combination in one fell swoop with some element\-wise execution:
```
combos$prob <- combos$prob1 * combos$prob2 * combos$prob3
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob
## DD DD DD 0.03 0.03 0.03 0.000027
## 7 DD DD 0.03 0.03 0.03 0.000027
## BBB DD DD 0.06 0.03 0.03 0.000054
```
The sum of the probabilities is one, which suggests that our math is correct:
```
sum(combos$prob)
## 1
```
You only need to do one more thing before you can calculate the expected value: you must determine the prize for each combination in `combos`. You can calculate the prize with `score`. For example, we can calculate the prize for the first row of `combos` like this:
```
symbols <- c(combos[1, 1], combos[1, 2], combos[1, 3])
## "DD" "DD" "DD"
score(symbols)
## 800
```
However there are 343 rows, which makes for tedious work if you plan to calculate the scores manually. It will be quicker to automate this task and have R do it for you, which you can do with a `for` loop.
11\.3 for Loops
---------------
A `for` loop repeats a chunk of code many times, once for each element in a set of input. `for` loops provide a way to tell R, “Do this for every value of that.” In R syntax, this looks like:
```
for (value in that) {
this
}
```
The `that` object should be a set of objects (often a vector of numbers or character strings). The for loop will run the code that appears between the braces once for each member of `that`. For example, the for loop below runs `print("one run")` once for each element in a vector of character strings:
```
for (value in c("My", "first", "for", "loop")) {
print("one run")
}
## "one run"
## "one run"
## "one run"
## "one run"
```
The `value` symbol in a for loop acts like an argument in a function. The for loop will create an object named `value` and assign it a new value on each run of the loop. The code in your loop can access this value by calling the `value` object.
What values will the for loop assign to `value`? It will use the elements in the set that you run the loop on. `for` starts with the first element and then assigns a different element to `value` on each run of the for loop, until all of the elements have been assigned to `value`. For example, the for loop below will run `print(value)` four times and will print out one element of `c("My", "second", "for", "loop")` each time:
```
for (value in c("My", "second", "for", "loop")) {
print(value)
}
## "My"
## "second"
## "for"
## "loop"
```
On the first run, the for loop substituted `"My"` for `value` in `print(value)`. On the second run it substituted `"second"`, and so on until `for` had run `print(value)` once with every element in the set.
If you look at `value` after the loop runs, you will see that it still contains the value of the last element in the set:
```
value
## "loop"
```
I’ve been using the symbol `value` in my for loops, but there is nothing special about it. You can use any symbol you like in your loop to do the same thing as long as the symbol appears before `in` in the parentheses that follow `for`. For example, you could rewrite the previous loop with any of the following:
```
for (word in c("My", "second", "for", "loop")) {
print(word)
}
for (string in c("My", "second", "for", "loop")) {
print(string)
}
for (i in c("My", "second", "for", "loop")) {
print(i)
}
```
**Choose your symbols carefully**
R will run your loop in whichever environment you call it from. This is bad news if your loop uses object names that already exist in the environment. Your loop will overwrite the existing objects with the objects that it creates. This applies to the `value` symbol as well.
**For loops run on sets**
In many programming languages, `for` loops are designed to work with integers, not sets. You give the loop a starting value and an ending value, as well as an increment to advance the value by between loops. The `for` loop then runs until the loop value exceeds the ending value.
You can recreate this effect in R by having a `for` loop execute on a set of integers, but don’t lose track of the fact that R’s `for` loops execute on members of a set, not sequences of integers.
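For example, the loop below recreates the classic counting pattern by running on a set of integers built with `seq`:
```
# count from 2 to 10 in steps of 2
for (i in seq(from = 2, to = 10, by = 2)) {
  print(i)
}
## 2
## 4
## 6
## 8
## 10
```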
`for` loops are very useful in programming because they help you connect a piece of code with each element in a set. For example, we could use a `for` loop to run `score` once for each row in `combos`. However, R’s `for` loops have a shortcoming that you’ll want to know about before you start using them: `for` loops do not return output.
`for` loops are like Las Vegas: what happens in a `for` loop stays in a `for` loop. If you want to use the products of a `for` loop, you must write the `for` loop so that it saves its own output as it goes.
Our previous examples appeared to return output, but this was misleading. The examples worked because we called `print`, which always prints its arguments in the console (even if it is called from a function, a `for` loop, or anything else). Our `for` loops won’t return anything if you remove the `print` call:
```
for (value in c("My", "third", "for", "loop")) {
value
}
##
```
To save output from a `for` loop, you must write the loop so that it saves its own output as it runs. You can do this by creating an empty vector or list before you run the `for` loop. Then use the `for` loop to fill up the vector or list. When the `for` loop is finished, you’ll be able to access the vector or list, which will now have all of your results.
Let’s see this in action. The following code creates an empty vector of length 4:
```
chars <- vector(length = 4)
```
The next loop will fill it with strings:
```
words <- c("My", "fourth", "for", "loop")
for (i in 1:4) {
chars[i] <- words[i]
}
chars
## "My" "fourth" "for" "loop"
```
This approach will usually require you to change the sets that you execute your `for` loop on. Instead of executing on a set of objects, execute on a set of integers that you can use to index both your object and your storage vector. This approach is very common in R. You’ll find in practice that you use `for` loops not so much to run code, but to fill up vectors and lists with the results of code.
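A convenient way to build that set of integers is `seq_along`, which returns one index for each element of a vector (and, unlike `1:length(x)`, behaves sensibly when the vector is empty). Here is the previous loop rewritten with it:
```
words <- c("My", "fourth", "for", "loop")
chars <- vector(length = length(words))
for (i in seq_along(words)) {
  chars[i] <- words[i]
}
chars
## "My" "fourth" "for" "loop"
```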
Let’s use a `for` loop to calculate the prize for each row in `combos`. To begin, create a new column in `combos` to store the results of the `for` loop:
```
combos$prize <- NA
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 NA
## 7 DD DD 0.03 0.03 0.03 0.000027 NA
## BBB DD DD 0.06 0.03 0.03 0.000054 NA
```
The code creates a new column named `prize` and fills it with `NA`s. R uses its recycling rules to populate every value of the column with `NA`.
**Exercise 11\.5 (Build a Loop)** Construct a `for` loop that will run `score` on all 343 rows of `combos`. The loop should run `score` on the first three entries of the *i*th row of `combos` and should store the results in the *i*th entry of `combos$prize`.
*Solution.* You can score the rows in `combos` with:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
After you run the for loop, `combos$prize` will contain the correct prize for each row. This exercise also tests the `score` function; `score` appears to work correctly for every possible slot combination:
```
head(combos, 3)
## Var1 Var2 Var3 prob1 prob2 prob3 prob prize
## DD DD DD 0.03 0.03 0.03 0.000027 800
## 7 DD DD 0.03 0.03 0.03 0.000027 0
## BBB DD DD 0.06 0.03 0.03 0.000054 0
```
We’re now ready to calculate the expected value of the prize. The expected value is the sum of `combos$prize` weighted by `combos$prob`. This is also the payout rate of the slot machine:
```
sum(combos$prize * combos$prob)
## 0.538014
```
Uh oh. The expected prize is about 0\.54, which means our slot machine only pays 54 cents on the dollar over the long run. Does this mean that the manufacturer of the Manitoba slot machines *was* lying?
No, because we ignored an important feature of the slot machine when we wrote `score`: a diamond is wild. You can treat a `DD` as any other symbol if it increases your prize, with one exception. You cannot make a `DD` a `C` unless you already have another `C` in your symbols (it’d be too easy if every `DD` automatically earned you $2\).
The best thing about `DD`s is that their effects are cumulative. For example, consider the combination `B`, `DD`, `B`. Not only does the `DD` count as a `B`, which would earn a prize of $10; the `DD` also doubles the prize to $20\.
Adding this behavior to our code is a little tougher than what we have done so far, but it involves all of the same principles. You can decide that your slot machine doesn’t use wilds and keep the code that we have. In that case, your slot machine will have a payout rate of about 54 percent. Or, you could rewrite your code to use wilds. If you do, you will find that your slot machine has a payout rate of 93 percent, one percent higher than the manufacturer’s claim. You can calculate this rate with the same method that we used in this section.
**Exercise 11\.6 (Challenge)** There are many ways to modify `score` that would count `DD`s as wild. If you would like to test your skill as an R programmer, try to write your own version of `score` that correctly handles diamonds.
If you would like a more modest challenge, study the following `score` code. It accounts for wild diamonds in a way that I find elegant and succinct. See if you can understand each step in the code and how it achieves its result.
*Solution.* Here is a version of score that handles wild diamonds:
```
score <- function(symbols) {
diamonds <- sum(symbols == "DD")
cherries <- sum(symbols == "C")
# identify case
# since diamonds are wild, only nondiamonds
# matter for three of a kind and all bars
slots <- symbols[symbols != "DD"]
same <- length(unique(slots)) == 1
bars <- slots %in% c("B", "BB", "BBB")
# assign prize
if (diamonds == 3) {
prize <- 100
} else if (same) {
payouts <- c("7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[slots[1]])
} else if (all(bars)) {
prize <- 5
} else if (cherries > 0) {
# diamonds count as cherries
# so long as there is one real cherry
prize <- c(0, 2, 5)[cherries + diamonds + 1]
} else {
prize <- 0
}
# double for each diamond
prize * 2^diamonds
}
```
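As a quick check, this version reproduces the `B`, `DD`, `B` example described above, and it scores three diamonds as a prize of 100 doubled three times:
```
score(c("B", "DD", "B"))
## 20
score(c("DD", "DD", "DD"))
## 800
```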
**Exercise 11\.7 (Calculate the Expected Value)** Calculate the expected value of the slot machine when it uses the new `score` function. You can use the existing `combos` data frame, but you will need to build a `for` loop to recalculate `combos$prize`.
*Solution.* To update the expected value, just update `combos$prize`:
```
for (i in 1:nrow(combos)) {
symbols <- c(combos[i, 1], combos[i, 2], combos[i, 3])
combos$prize[i] <- score(symbols)
}
```
Then recompute the expected value:
```
sum(combos$prize * combos$prob)
## 0.934356
```
This result vindicates the manufacturer’s claim. If anything, the slot machines seem more generous than the manufacturer stated.
11\.4 while Loops
-----------------
R has two companions to the `for` loop: the `while` loop and the `repeat` loop. A `while` loop reruns a chunk *while* a certain condition remains `TRUE`. To create a `while` loop, follow `while` by a condition and a chunk of code, like this:
```
while (condition) {
code
}
```
`while` will rerun `condition`, which should be a logical test, at the start of each loop. If `condition` evaluates to `TRUE`, `while` will run the code between its braces. If `condition` evaluates to `FALSE`, `while` will finish the loop.
Why might `condition` change from `TRUE` to `FALSE`? Presumably because the code inside your loop has changed whether the condition is still `TRUE`. If the code has no relationship to the condition, a `while` loop will run until you stop it. So be careful. You can stop a `while` loop by hitting Escape or by clicking on the stop\-sign icon at the top of the RStudio console pane. The icon will appear once the loop begins to run.
Like `for` loops, `while` loops do not return a result, so you must think about what you want the loop to return and save it to an object during the loop.
You can use `while` loops to do things that take a varying number of iterations, like calculating how long it takes to go broke playing slots (as follows). However, in practice, `while` loops are much less common than `for` loops in R:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
while (cash > 0) {
cash <- cash - 1 + play()
n <- n + 1
}
n
}
plays_till_broke(100)
## 260
```
11\.5 repeat Loops
------------------
`repeat` loops are even more basic than `while` loops. They will repeat a chunk of code until you tell them to stop (by hitting Escape) or until they encounter the command `break`, which will stop the loop.
You can use a `repeat` loop to recreate `plays_till_broke`, my function that simulates how long it takes to lose money while playing slots:
```
plays_till_broke <- function(start_with) {
cash <- start_with
n <- 0
repeat {
cash <- cash - 1 + play()
n <- n + 1
if (cash <= 0) {
break
}
}
n
}
plays_till_broke(100)
## 237
```
11\.6 Summary
-------------
You can repeat tasks in R with `for`, `while`, and `repeat` loops. To use `for`, give it a chunk of code to run and a set of objects to loop through. `for` will run the code chunk once for each object. If you wish to save the output of your loop, you can assign it to an object that exists outside of the loop.
Repetition plays an important role in data science. It is the basis for simulation, as well as for estimates of variance and probability. Loops are not the only way to create repetition in R (consider `replicate` for example), but they are one of the most popular ways.
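For example, `replicate` reruns an expression a fixed number of times and collects the results in a vector (a sketch that assumes the `play` function from this project):
```
# rerun play() 100 times and collect the prizes
winnings <- replicate(100, play())
mean(winnings)
```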
Unfortunately, loops in R can sometimes be slower than loops in other languages. As a result, R’s loops get a bad rap. This reputation is not entirely deserved, but it does highlight an important issue. Speed is essential to data analysis. When your code runs fast, you can work with bigger data and do more to it before you run out of time or computational power. [Speed](speed.html#speed) will teach you how to write fast `for` loops and fast code in general with R. There, you will learn to write vectorized code, a style of lightning\-fast code that takes advantage of all of R’s strengths.
12 Speed
========
As a data scientist, you need speed. You can work with bigger data and do more ambitious tasks when your code runs fast. This chapter will show you a specific way to write fast code in R. You will then use the method to simulate 10 million plays of your slot machine.
12\.1 Vectorized Code
---------------------
You can write a piece of code in many different ways, but the fastest R code will usually take advantage of three things: logical tests, subsetting, and element\-wise execution. These are the things that R does best. Code that uses these things usually has a certain quality: it is *vectorized*; the code can take a vector of values as input and manipulate each value in the vector at the same time.
To see what vectorized code looks like, compare these two examples of an absolute value function. Each takes a vector of numbers and transforms it into a vector of absolute values (i.e., nonnegative numbers). The first example is not vectorized; `abs_loop` uses a `for` loop to manipulate each element of the vector one at a time:
```
abs_loop <- function(vec){
for (i in 1:length(vec)) {
if (vec[i] < 0) {
vec[i] <- -vec[i]
}
}
vec
}
```
The second example, `abs_set`, is a vectorized version of `abs_loop`. It uses logical subsetting to manipulate every negative number in the vector at the same time:
```
abs_set <- function(vec){
negs <- vec < 0
vec[negs] <- vec[negs] * -1
vec
}
```
`abs_set` is much faster than `abs_loop` because it relies on operations that R does quickly: logical tests, subsetting, and element\-wise execution.
You can use the `system.time` function to see just how fast `abs_set` is. `system.time` takes an R expression, runs it, and then displays how much time elapsed while the expression ran.
To compare `abs_loop` and `abs_set`, first make a long vector of positive and negative numbers. `long` will contain 10 million values:
```
long <- rep(c(-1, 1), 5000000)
```
`rep` repeats a value, or vector of values, many times. To use `rep`, give it a vector of values and then the number of times to repeat the vector. R will return the results as a new, longer vector.
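For example:
```
rep(c(-1, 1), 3)
## -1 1 -1 1 -1 1
```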
You can then use `system.time` to measure how much time it takes each function to evaluate `long`:
```
system.time(abs_loop(long))
## user system elapsed
## 15.982 0.032 16.018
system.time(abs_set(long))
## user system elapsed
## 0.529 0.063 0.592
```
Don’t confuse `system.time` with `Sys.time`, which returns the current time.
The first two columns of the output of `system.time` report how many seconds your computer spent executing the call on the user and system sides of your process, a dichotomy that will vary from OS to OS.
The last column displays how many seconds elapsed while R ran the expression. The results show that `abs_set` calculated the absolute value 30 times faster than `abs_loop` when applied to a vector of 10 million numbers. You can expect similar speed\-ups whenever you write vectorized code.
**Exercise 12\.1 (How fast is abs?)** Many preexisting R functions are already vectorized and have been optimized to perform quickly. You can make your code faster by relying on these functions whenever possible. For example, R comes with a built\-in absolute value function, `abs`.
Check to see how much faster `abs` computes the absolute value of `long` than `abs_loop` and `abs_set` do.
*Solution.* You can measure the speed of `abs` with `system.time`. It takes `abs` a lightning\-fast 0\.05 seconds to calculate the absolute value of 10 million numbers. This is 0\.592 / 0\.054 \= 10\.96 times faster than `abs_set` and nearly 300 times faster than `abs_loop`:
```
system.time(abs(long))
## user system elapsed
## 0.037 0.018 0.054
```
12\.2 How to Write Vectorized Code
----------------------------------
Vectorized code is easy to write in R because most R functions are already vectorized. Code based on these functions can easily be made vectorized and therefore fast. To create vectorized code:
1. Use vectorized functions to complete the sequential steps in your program.
2. Use logical subsetting to handle parallel cases. Try to manipulate every element in a case at once.
`abs_loop` and `abs_set` illustrate these rules. The functions both handle two cases and perform one sequential step, Figure [12\.1](speed.html#fig:abs). If a number is positive, the functions leave it alone. If a number is negative, the functions multiply it by negative one.
Figure 12\.1: abs\_loop uses a for loop to sift data into one of two cases: negative numbers and nonnegative numbers.
You can identify all of the elements of a vector that fall into a case with a logical test. R will execute the test in element\-wise fashion and return a `TRUE` for every element that belongs in the case. For example, `vec < 0` identifies every value of `vec` that belongs to the negative case. You can use the same logical test to extract the set of negative values with logical subsetting:
```
vec <- c(1, -2, 3, -4, 5, -6, 7, -8, 9, -10)
vec < 0
## FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE
vec[vec < 0]
## -2 -4 -6 -8 -10
```
The plan in Figure [12\.1](speed.html#fig:abs) now requires a sequential step: you must multiply each of the negative values by negative one. All of R’s arithmetic operators are vectorized, so you can use `*` to complete this step in vectorized fashion. `*` will multiply each number in `vec[vec < 0]` by negative one at the same time:
```
vec[vec < 0] * -1
## 2 4 6 8 10
```
Finally, you can use R’s assignment operator, which is also vectorized, to save the new set over the old set in the original `vec` object. Since `<-` is vectorized, the elements of the new set will be paired up with the elements of the old set, in order, and then element\-wise assignment will occur. As a result, each negative value will be replaced by its positive partner, as in Figure [12\.2](speed.html#fig:assignment).
Figure 12\.2: Use logical subsetting to modify groups of values in place. R’s arithmetic and assignment operators are vectorized, which lets you manipulate and update multiple values at once.
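Putting the pieces together, two short lines of vectorized code handle every negative value at once:
```
vec <- c(1, -2, 3, -4, 5, -6, 7, -8, 9, -10)
vec[vec < 0] <- vec[vec < 0] * -1
vec
## 1 2 3 4 5 6 7 8 9 10
```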
**Exercise 12\.2 (Vectorize a Function)** The following function converts a vector of slot symbols to a vector of new slot symbols. Can you vectorize it? How much faster does the vectorized version work?
```
change_symbols <- function(vec){
for (i in 1:length(vec)){
if (vec[i] == "DD") {
vec[i] <- "joker"
} else if (vec[i] == "C") {
vec[i] <- "ace"
} else if (vec[i] == "7") {
vec[i] <- "king"
  } else if (vec[i] == "B") {
vec[i] <- "queen"
} else if (vec[i] == "BB") {
vec[i] <- "jack"
} else if (vec[i] == "BBB") {
vec[i] <- "ten"
} else {
vec[i] <- "nine"
}
}
vec
}
vec <- c("DD", "C", "7", "B", "BB", "BBB", "0")
change_symbols(vec)
## "joker" "ace" "king" "queen" "jack" "ten" "nine"
many <- rep(vec, 1000000)
system.time(change_symbols(many))
## user system elapsed
## 30.057 0.031 30.079
```
*Solution.* `change_symbols` uses a `for` loop to sort values into seven different cases, as demonstrated in Figure [12\.3](speed.html#fig:change).
To vectorize `change_symbols`, create a logical test that can identify each case:
```
vec[vec == "DD"]
## "DD"
vec[vec == "C"]
## "C"
vec[vec == "7"]
## "7"
vec[vec == "B"]
## "B"
vec[vec == "BB"]
## "BB"
vec[vec == "BBB"]
## "BBB"
vec[vec == "0"]
## "0"
```
Figure 12\.3: change\_symbols does something different for each of seven cases.
Then write code that can change the symbols for each case:
```
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
```
When you combine this into a function, you have a vectorized version of `change_symbols` that runs about 14 times faster:
```
change_vec <- function (vec) {
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
vec
}
system.time(change_vec(many))
## user system elapsed
## 1.994 0.059 2.051
```
Or, even better, use a lookup table. Lookup tables are a vectorized method because they rely on R’s vectorized selection operations:
```
change_vec2 <- function(vec){
tb <- c("DD" = "joker", "C" = "ace", "7" = "king", "B" = "queen",
"BB" = "jack", "BBB" = "ten", "0" = "nine")
unname(tb[vec])
}
system.time(change_vec2(many))
## user system elapsed
## 0.687 0.059 0.746
```
Here, a lookup table is 40 times faster than the original function.
`abs_loop` and `change_symbols` illustrate a characteristic of vectorized code: programmers often write slower, nonvectorized code by relying on unnecessary `for` loops, like the one in `change_symbols`. I think this is the result of a general misunderstanding about R. `for` loops do not behave the same way in R as they do in other languages, which means you should write code differently in R than you would in other languages.
When you write in languages like C and Fortran, you must compile your code before your computer can run it. This compilation step optimizes how the `for` loops in the code use your computer’s memory, which makes the `for` loops very fast. As a result, many programmers use `for` loops frequently when they write in C and Fortran.
When you write in R, however, you do not compile your code. You skip this step, which makes programming in R a more user\-friendly experience. Unfortunately, this also means you do not give your loops the speed boost they would receive in C or Fortran. As a result, your loops will run slower than the other operations we have studied: logical tests, subsetting, and element\-wise execution. If you can write your code with the faster operations instead of a `for` loop, you should do so. No matter which language you write in, you should try to use the features of the language that run the fastest.
**if and for**
A good way to spot `for` loops that could be vectorized is to look for combinations of `if` and `for`. `if` can only be applied to one value at a time, which means it is often used in conjunction with a `for` loop. The `for` loop helps apply `if` to an entire vector of values. This combination can usually be replaced with logical subsetting, which will do the same thing but run much faster.
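For example, a loop that clips values at a ceiling collapses to a single subsetting step (a minimal sketch):
```
x <- c(0.2, 1.7, 0.9, 2.4)
# if + for version: one value at a time
for (i in seq_along(x)) {
  if (x[i] > 1) {
    x[i] <- 1
  }
}
# vectorized version: every value at once
x <- c(0.2, 1.7, 0.9, 2.4)
x[x > 1] <- 1
x
## 0.2 1.0 0.9 1.0
```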
This doesn’t mean that you should never use `for` loops in R. There are still many places in R where `for` loops make sense. `for` loops perform a basic task that you cannot always recreate with vectorized code. `for` loops are also easy to understand and run reasonably fast in R, so long as you take a few precautions.
12\.3 How to Write Fast for Loops in R
--------------------------------------
You can dramatically increase the speed of your `for` loops by doing two things to optimize each loop. First, do as much as you can outside of the `for` loop. Every line of code that you place inside of the `for` loop will be run many, many times. If a line of code only needs to be run once, place it outside of the loop to avoid repetition.
Second, make sure that any storage objects that you use with the loop are large enough to contain *all* of the results of the loop. For example, both loops below will need to store one million values. The first loop stores its values in an object named `output` that begins with a length of *one million*:
```
system.time({
output <- rep(NA, 1000000)
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1.709 0.015 1.724
```
The second loop stores its values in an object named `output` that begins with a length of *one*. R will expand the object to a length of one million as it runs the loop. The code in this loop is very similar to the code in the first loop, but the loop takes *37 minutes* longer to run than the first loop:
```
system.time({
output <- NA
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1689.537 560.951 2249.927
```
The two loops do the same thing, so what accounts for the difference? In the second loop, R has to increase the length of `output` by one for each run of the loop. To do this, R needs to find a new place in your computer’s memory that can contain the larger object. R must then copy the `output` vector over and erase the old version of `output` before moving on to the next run of the loop. By the end of the loop, R has rewritten `output` in your computer’s memory one million times.
In the first case, the size of `output` never changes; R can define one `output` object in memory and use it for each run of the `for` loop.
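The first tip pays off for the same reason: every invariant line you hoist out of a loop runs once instead of one million times. A small sketch:
```
output <- rep(NA, 1000000)
# computed once, outside the loop, rather than on every run
scale <- sqrt(2 * pi)
for (i in 1:1000000) {
  output[i] <- i / scale
}
```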
The authors of R use low\-level languages like C and Fortran to write basic R functions, many of which use `for` loops. These functions are compiled and optimized before they become a part of R, which makes them quite fast.
Whenever you see `.Primitive`, `.Internal`, or `.Call` written in a function’s definition, you can be confident the function is calling code from another language. You’ll get all of the speed advantages of that language by using the function.
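For example, print `sum` at the command line and you will see that it is a primitive:
```
sum
## function (..., na.rm = FALSE) .Primitive("sum")
```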
12\.4 Vectorized Code in Practice
---------------------------------
To see how vectorized code can help you as a data scientist, consider our slot machine project. In [Loops](loops.html#loops), you calculated the exact payout rate for your slot machine, but you could have estimated this payout rate with a simulation. If you played the slot machine many, many times, the average prize over all of the plays would be a good estimate of the true payout rate.
This method of estimation is based on the law of large numbers and is similar to many statistical simulations. To run this simulation, you could use a `for` loop:
```
winnings <- vector(length = 10000000)
for (i in 1:10000000) {
winnings[i] <- play()
}
mean(winnings)
## 0.9366984
```
The estimated payout rate after 10 million runs is 0\.937, which is very close to the true payout rate of 0\.934\. Note that I’m using the modified `score` function that treats diamonds as wilds.
If you run this simulation, you will notice that it takes a while to run. In fact, the simulation takes about 342 seconds, which is about 5\.7 minutes. This is not particularly impressive, and you can do better by using vectorized code:
```
system.time(for (i in 1:10000000) {
winnings[i] <- play()
})
## user system elapsed
## 342.041 0.355 342.308
```
The current `score` function is not vectorized. It takes a single slot combination and uses an `if` tree to assign a prize to it. This combination of an `if` tree with a `for` loop suggests that you could write a piece of vectorized code that takes *many* slot combinations and then uses logical subsetting to operate on them all at once.
For example, you could rewrite `get_symbols` to generate *n* slot combinations and return them as an *n* x 3 matrix, like the one that follows. Each row of the matrix will contain one slot combination to be scored:
```
get_many_symbols <- function(n) {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
vec <- sample(wheel, size = 3 * n, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
matrix(vec, ncol = 3)
}
get_many_symbols(5)
## [,1] [,2] [,3]
## [1,] "B" "0" "B"
## [2,] "0" "BB" "7"
## [3,] "0" "0" "BBB"
## [4,] "0" "0" "B"
## [5,] "BBB" "0" "0"
```
You could also rewrite `play` to take a parameter, `n`, and return `n` prizes, in a data frame:
```
play_many <- function(n) {
symb_mat <- get_many_symbols(n = n)
data.frame(w1 = symb_mat[,1], w2 = symb_mat[,2],
w3 = symb_mat[,3], prize = score_many(symb_mat))
}
```
This new function would make it easy to simulate a million, or even 10 million plays of the slot machine, which will be our goal. When we’re finished, you will be able to estimate the payout rate with:
```
# plays <- play_many(10000000)
# mean(plays$prize)
```
Now you just need to write `score_many`, a vectorized (matrix\-ized?) version of `score` that takes an *n* x 3 matrix and returns *n* prizes. It will be difficult to write this function because `score` is already quite complicated. I would not expect you to feel confident doing this on your own until you have more practice and experience than we’ve been able to develop here.
Should you like to test your skills and write a version of `score_many`, I recommend that you use the function `rowSums` within your code. It calculates the sum of each row of numbers (or logicals) in a matrix.
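For example, `rowSums` applied to a logical matrix counts the `TRUE`s in each row, which gives you an easy way to count symbols row by row:
```
m <- matrix(c("DD", "DD", "0",
              "C", "B", "B"), nrow = 2, byrow = TRUE)
rowSums(m == "DD")
## 2 0
```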
If you would like to test yourself in a more modest way, I recommend that you study the following model `score_many` function until you understand how each part works and how the parts work together to create a vectorized function. To do this, it will be helpful to create a concrete example, like this:
```
symbols <- matrix(
c("DD", "DD", "DD",
"C", "DD", "0",
"B", "B", "B",
"B", "BB", "BBB",
"C", "C", "0",
"7", "DD", "DD"), nrow = 6, byrow = TRUE)
symbols
## [,1] [,2] [,3]
## [1,] "DD" "DD" "DD"
## [2,] "C" "DD" "0"
## [3,] "B" "B" "B"
## [4,] "B" "BB" "BBB"
## [5,] "C" "C" "0"
## [6,] "7" "DD" "DD"
```
Then you can run each line of `score_many` against the example and examine the results as you go.
**Exercise 12\.3 (Test Your Understanding)** Study the model `score_many` function until you are satisfied that you understand how it works and could write a similar function yourself.
**Exercise 12\.4 (Advanced Challenge)** Instead of examining the model answer, write your own vectorized version of `score`. Assume that the data is stored in an *n* × 3 matrix where each row of the matrix contains one combination of slots to be scored.
You can use the version of `score` that treats diamonds as wild or the version of `score` that doesn’t. However, the model answer will use the version treating diamonds as wild.
*Solution.* `score_many` is a vectorized version of `score`. You can use it to run the simulation at the start of this section in a little over 20 seconds. This is roughly 15 times faster than using a `for` loop:
```
# symbols should be a matrix with a column for each slot machine window
score_many <- function(symbols) {
# Step 1: Assign base prize based on cherries and diamonds ---------
## Count the number of cherries and diamonds in each combination
cherries <- rowSums(symbols == "C")
diamonds <- rowSums(symbols == "DD")
## Wild diamonds count as cherries
prize <- c(0, 2, 5)[cherries + diamonds + 1]
## ...but not if there are zero real cherries
### (cherries is coerced to FALSE where cherries == 0)
prize[!cherries] <- 0
# Step 2: Change prize for combinations that contain three of a kind
same <- symbols[, 1] == symbols[, 2] &
symbols[, 2] == symbols[, 3]
payoffs <- c("DD" = 100, "7" = 80, "BBB" = 40,
"BB" = 25, "B" = 10, "C" = 10, "0" = 0)
prize[same] <- payoffs[symbols[same, 1]]
# Step 3: Change prize for combinations that contain all bars ------
bars <- symbols == "B" | symbols == "BB" | symbols == "BBB"
all_bars <- bars[, 1] & bars[, 2] & bars[, 3] & !same
prize[all_bars] <- 5
# Step 4: Handle wilds ---------------------------------------------
## combos with two diamonds
two_wilds <- diamonds == 2
### Identify the nonwild symbol
one <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 2] == symbols[, 3]
two <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 1] == symbols[, 3]
three <- two_wilds & symbols[, 1] == symbols[, 2] &
symbols[, 2] != symbols[, 3]
### Treat as three of a kind
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
## combos with one wild
one_wild <- diamonds == 1
### Treat as all bars (if appropriate)
wild_bars <- one_wild & (rowSums(bars) == 2)
prize[wild_bars] <- 5
### Treat as three of a kind (if appropriate)
one <- one_wild & symbols[, 1] == symbols[, 2]
two <- one_wild & symbols[, 2] == symbols[, 3]
three <- one_wild & symbols[, 3] == symbols[, 1]
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
# Step 5: Double prize for every diamond in combo ------------------
unname(prize * 2^diamonds)
}
system.time(play_many(10000000))
## user system elapsed
## 20.942 1.433 22.367
```
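You can check `score_many` against the concrete example built earlier. Each prize follows the wild\-diamond rules; for instance, the last row is a pair of wild diamonds completing three sevens, 80 doubled twice:
```
score_many(symbols)
## 800 10 10 5 5 320
```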
### 12\.4\.1 Loops Versus Vectorized Code
In many languages, `for` loops run very fast. As a result, programmers learn to use `for` loops whenever possible when they code. Often these programmers continue to rely on `for` loops when they begin to program in R, usually without taking the simple steps needed to optimize R’s `for` loops. These programmers may become disillusioned with R when their code does not work as fast as they would like. If you think that this may be happening to you, examine how often you are using `for` loops and what you are using them to do. If you find yourself using `for` loops for every task, there is a good chance that you are “speaking R with a C accent.” The cure is to learn to write and use vectorized code.
This doesn’t mean that `for` loops have no place in R. `for` loops are a very useful feature; they can do many things that vectorized code cannot do. You also should not become a slave to vectorized code. Sometimes it would take more time to rewrite code in vectorized format than to let a `for` loop run. For example, would it be faster to let the slot simulation run for 5\.7 minutes or to rewrite `score`?
12\.5 Summary
-------------
Fast code is an important component of data science because you can do more with fast code than you can do with slow code. You can work with larger data sets before computational constraints intervene, and you can do more computation before time constraints intervene. The fastest code in R will rely on the things that R does best: logical tests, subsetting, and element\-wise execution. I’ve called this type of code vectorized code because code written with these operations will take a vector of values as input and operate on each element of the vector at the same time. The majority of the code written in R is already vectorized.
If you use these operations, but your code does not appear vectorized, analyze the sequential steps and parallel cases in your program. Ensure that you’ve used vectorized functions to handle the steps and logical subsetting to handle the cases. Be aware, however, that some tasks cannot be vectorized.
12\.6 Project 3 Wrap\-up
------------------------
You have now written your first program in R, and it is a program that you should be proud of. `play` is not a simple `hello world` exercise, but a real program that does a real task in a complicated way.
Writing new programs in R will always be challenging because programming depends so much on your own creativity, problem\-solving ability, and experience writing similar types of programs. However, you can use the suggestions in this chapter to make even the most complicated program manageable: divide tasks into simple steps and cases, work with concrete examples, and describe possible solutions in English.
This project completes the education you began in [The Very Basics](basics.html#basics). You can now use R to handle data, which has augmented your ability to analyze data. You can:
* Load and store data in your computer—not on paper or in your mind
* Accurately recall and change individual values without relying on your memory
* Instruct your computer to do tedious, or complex, tasks on your behalf
These skills solve an important logistical problem faced by every data scientist: *how can you store and manipulate data without making errors?* However, this is not the only problem that you will face as a data scientist. The next problem will appear when you try to understand the information contained in your data. It is nearly impossible to spot insights or to discover patterns in raw data. A third problem will appear when you try to use your data set to reason about reality, which includes things not contained in your data set. What exactly does your data imply about things outside of the data set? How certain can you be?
I refer to these problems as the logistical, tactical, and strategic problems of data science, as shown in Figure [12\.4](speed.html#fig:venn). You’ll face them whenever you try to learn from data:
* **A logistical problem** \- How can you store and manipulate data without making errors?
* **A tactical problem** \- How can you discover the information contained in your data?
* **A strategic problem** \- How can you use the data to draw conclusions about the world at large?
Figure 12\.4: The three core skill sets of data science: computer programming, data comprehension, and scientific reasoning.
A well\-rounded data scientist will need to be able to solve each of these problems in many different situations. By learning to program in R, you have mastered the logistical problem, which is a prerequisite for solving the tactical and strategic problems.
If you would like to learn how to reason with data, or how to transform, visualize, and explore your data sets with R tools, I recommend the book [*R for Data Science*](http://r4ds.had.co.nz/), the companion volume to this book. *R for Data Science* teaches a simple workflow for transforming, visualizing, and modeling data in R, as well as how to report results with the R Markdown package.
12\.1 Vectorized Code
---------------------
You can write a piece of code in many different ways, but the fastest R code will usually take advantage of three things: logical tests, subsetting, and element\-wise execution. These are the things that R does best. Code that uses these things usually has a certain quality: it is *vectorized*; the code can take a vector of values as input and manipulate each value in the vector at the same time.
To see what vectorized code looks like, compare these two examples of an absolute value function. Each takes a vector of numbers and transforms it into a vector of absolute values (e.g., positive numbers). The first example is not vectorized; `abs_loop` uses a `for` loop to manipulate each element of the vector one at a time:
```
abs_loop <- function(vec){
for (i in 1:length(vec)) {
if (vec[i] < 0) {
vec[i] <- -vec[i]
}
}
vec
}
```
The second example, `abs_set`, is a vectorized version of `abs_loop`. It uses logical subsetting to manipulate every negative number in the vector at the same time:
```
abs_sets <- function(vec){
negs <- vec < 0
vec[negs] <- vec[negs] * -1
vec
}
```
`abs_set` is much faster than `abs_loop` because it relies on operations that R does quickly: logical tests, subsetting, and element\-wise execution.
You can use the `system.time` function to see just how fast `abs_set` is. `system.time` takes an R expression, runs it, and then displays how much time elapsed while the expression ran.
To compare `abs_loop` and `abs_set`, first make a long vector of positive and negative numbers. `long` will contain 10 million values:
```
long <- rep(c(-1, 1), 5000000)
```
`rep` repeats a value, or vector of values, many times. To use `rep`, give it a vector of values and then the number of times to repeat the vector. R will return the results as a new, longer vector.
You can then use `system.time` to measure how much time it takes each function to evaluate `long`:
```
system.time(abs_loop(long))
## user system elapsed
## 15.982 0.032 16.018
system.time(abs_sets(long))
## user system elapsed
## 0.529 0.063 0.592
```
Don’t confuse `system.time` with `Sys.time`, which returns the current time.
The first two columns of the output of `system.time` report how many seconds your computer spent executing the call on the user side and system sides of your process, a dichotomy that will vary from OS to OS.
The last column displays how many seconds elapsed while R ran the expression. The results show that `abs_set` calculated the absolute value 30 times faster than `abs_loop` when applied to a vector of 10 million numbers. You can expect similar speed\-ups whenever you write vectorized code.
**Exercise 12\.1 (How fast is abs?)** Many preexisting R functions are already vectorized and have been optimized to perform quickly. You can make your code faster by relying on these functions whenever possible. For example, R comes with a built\-in absolute value function, `abs`.
Check to see how much faster `abs` computes the absolute value of `long` than `abs_loop` and `abs_set` do.
*Solution.* You can measure the speed of `abs` with `system.time`. It takes `abs` a lightning\-fast 0\.05 seconds to calculate the absolute value of 10 million numbers. This is 0\.592 / 0\.054 \= 10\.96 times faster than `abs_set` and nearly 300 times faster than `abs_loop`:
```
system.time(abs(long))
## user system elapsed
## 0.037 0.018 0.054
```
12\.2 How to Write Vectorized Code
----------------------------------
Vectorized code is easy to write in R because most R functions are already vectorized. Code based on these functions can easily be made vectorized and therefore fast. To create vectorized code:
1. Use vectorized functions to complete the sequential steps in your program.
2. Use logical subsetting to handle parallel cases. Try to manipulate every element in a case at once.
`abs_loop` and `abs_set` illustrate these rules. The functions both handle two cases and perform one sequential step, Figure [12\.1](speed.html#fig:abs). If a number is positive, the functions leave it alone. If a number is negative, the functions multiply it by negative one.
Figure 12\.1: abs\_loop uses a for loop to sift data into one of two cases: negative numbers and nonnegative numbers.
You can identify all of the elements of a vector that fall into a case with a logical test. R will execute the test in element\-wise fashion and return a `TRUE` for every element that belongs in the case. For example, `vec < 0` identifies every value of `vec` that belongs to the negative case. You can use the same logical test to extract the set of negative values with logical subsetting:
```
vec <- c(1, -2, 3, -4, 5, -6, 7, -8, 9, -10)
vec < 0
## FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE
vec[vec < 0]
## -2 -4 -6 -8 -10
```
The plan in Figure [12\.1](speed.html#fig:abs) now requires a sequential step: you must multiply each of the negative values by negative one. All of R’s arithmetic operators are vectorized, so you can use `*` to complete this step in vectorized fashion. `*` will multiply each number in `vec[vec < 0]` by negative one at the same time:
```
vec[vec < 0] * -1
## 2 4 6 8 10
```
Finally, you can use R’s assignment operator, which is also vectorized, to save the new set over the old set in the original `vec` object. Since `<-` is vectorized, the elements of the new set will be paired up to the elements of the old set, in order, and then element\-wise assignment will occur. As a result, each negative value will be replaced by its positive partner, as in Figure [12\.2](speed.html#fig:assignment).
Figure 12\.2: Use logical subsetting to modify groups of values in place. R’s arithmetic and assignment operators are vectorized, which lets you manipulate and update multiple values at once.
**Exercise 12\.2 (Vectorize a Function)** The following function converts a vector of slot symbols to a vector of new slot symbols. Can you vectorize it? How much faster does the vectorized version work?
```
change_symbols <- function(vec){
for (i in 1:length(vec)){
if (vec[i] == "DD") {
vec[i] <- "joker"
} else if (vec[i] == "C") {
vec[i] <- "ace"
} else if (vec[i] == "7") {
vec[i] <- "king"
}else if (vec[i] == "B") {
vec[i] <- "queen"
} else if (vec[i] == "BB") {
vec[i] <- "jack"
} else if (vec[i] == "BBB") {
vec[i] <- "ten"
} else {
vec[i] <- "nine"
}
}
vec
}
vec <- c("DD", "C", "7", "B", "BB", "BBB", "0")
change_symbols(vec)
## "joker" "ace" "king" "queen" "jack" "ten" "nine"
many <- rep(vec, 1000000)
system.time(change_symbols(many))
## user system elapsed
## 30.057 0.031 30.079
```
*Solution.* `change_symbols` uses a `for` loop to sort values into seven different cases, as demonstrated in Figure [12\.3](speed.html#fig:change).
To vectorize `change_symbols`, create a logical test that can identify each case:
```
vec[vec == "DD"]
## "DD"
vec[vec == "C"]
## "C"
vec[vec == "7"]
## "7"
vec[vec == "B"]
## "B"
vec[vec == "BB"]
## "BB"
vec[vec == "BBB"]
## "BBB"
vec[vec == "0"]
## "0"
```
Figure 12\.3: change\_many does something different for each of seven cases.
Then write code that can change the symbols for each case:
```
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
```
When you combine this into a function, you have a vectorized version of `change_symbols` that runs about 14 times faster:
```
change_vec <- function (vec) {
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
vec
}
system.time(change_vec(many))
## user system elapsed
## 1.994 0.059 2.051
```
Or, even better, use a lookup table. Lookup tables are a vectorized method because they rely on R’s vectorized selection operations:
```
change_vec2 <- function(vec){
tb <- c("DD" = "joker", "C" = "ace", "7" = "king", "B" = "queen",
"BB" = "jack", "BBB" = "ten", "0" = "nine")
unname(tb[vec])
}
system.time(change_vec(many))
## user system elapsed
## 0.687 0.059 0.746
```
Here, a lookup table is 40 times faster than the original function.
`abs_loop` and `change_many` illustrate a characteristic of vectorized code: programmers often write slower, nonvectorized code by relying on unnecessary `for` loops, like the one in `change_many`. I think this is the result of a general misunderstanding about R. `for` loops do not behave the same way in R as they do in other languages, which means you should write code differently in R than you would in other languages.
When you write in languages like C and Fortran, you must compile your code before your computer can run it. This compilation step optimizes how the `for` loops in the code use your computer’s memory, which makes the `for` loops very fast. As a result, many programmers use `for` loops frequently when they write in C and Fortran.
When you write in R, however, you do not compile your code. You skip this step, which makes programming in R a more user\-friendly experience. Unfortunately, this also means you do not give your loops the speed boost they would receive in C or Fortran. As a result, your loops will run slower than the other operations we have studied: logical tests, subsetting, and element\-wise execution. If you can write your code with the faster operations instead of a `for` loop, you should do so. No matter which language you write in, you should try to use the features of the language that run the fastest.
**if and for**
A good way to spot `for` loops that could be vectorized is to look for combinations of `if` and `for`. `if` can only be applied to one value at a time, which means it is often used in conjunction with a `for` loop. The `for` loop helps apply `if` to an entire vector of values. This combination can usually be replaced with logical subsetting, which will do the same thing but run much faster.
This doesn’t mean that you should never use `for` loops in R. There are still many places in R where `for` loops make sense. `for` loops perform a basic task that you cannot always recreate with vectorized code. `for` loops are also easy to understand and run reasonably fast in R, so long as you take a few precautions.
12\.3 How to Write Fast for Loops in R
--------------------------------------
You can dramatically increase the speed of your `for` loops by doing two things to optimize each loop. First, do as much as you can outside of the `for` loop. Every line of code that you place inside of the `for` loop will be run many, many times. If a line of code only needs to be run once, place it outside of the loop to avoid repetition.
Second, make sure that any storage objects that you use with the loop are large enough to contain *all* of the results of the loop. For example, both loops below will need to store one million values. The first loop stores its values in an object named `output` that begins with a length of *one million*:
```
system.time({
output <- rep(NA, 1000000)
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1.709 0.015 1.724
```
The second loop stores its values in an object named `output` that begins with a length of *one*. R will expand the object to a length of one million as it runs the loop. The code in this loop is very similar to the code in the first loop, but the loop takes *37 minutes* longer to run than the first loop:
```
system.time({
output <- NA
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1689.537 560.951 2249.927
```
The two loops do the same thing, so what accounts for the difference? In the second loop, R has to increase the length of `output` by one for each run of the loop. To do this, R needs to find a new place in your computer’s memory that can contain the larger object. R must then copy the `output` vector over and erase the old version of `output` before moving on to the next run of the loop. By the end of the loop, R has rewritten `output` in your computer’s memory one million times.
In the first case, the size of `output` never changes; R can define one `output` object in memory and use it for each run of the `for` loop.
The authors of R use low\-level languages like C and Fortran to write basic R functions, many of which use `for` loops. These functions are compiled and optimized before they become a part of R, which makes them quite fast.
Whenever you see `.Primitive`, `.Internal`, or `.Call` written in a function’s definition, you can be confident the function is calling code from another language. You’ll get all of the speed advantages of that language by using the function.
12\.4 Vectorized Code in Practice
---------------------------------
To see how vectorized code can help you as a data scientist, consider our slot machine project. In [Loops](loops.html#loops), you calculated the exact payout rate for your slot machine, but you could have estimated this payout rate with a simulation. If you played the slot machine many, many times, the average prize over all of the plays would be a good estimate of the true payout rate.
This method of estimation is based on the law of large numbers and is similar to many statistical simulations. To run this simulation, you could use a `for` loop:
```
winnings <- vector(length = 1000000)
for (i in 1:1000000) {
winnings[i] <- play()
}
mean(winnings)
## 0.9366984
```
The estimated payout rate after 10 million runs is 0\.937, which is very close to the true payout rate of 0\.934\. Note that I’m using the modified `score` function that treats diamonds as wilds.
If you run this simulation, you will notice that it takes a while to run. In fact, the simulation takes 342,308 seconds to run, which is about 5\.7 minutes. This is not particularly impressive, and you can do better by using vectorized code:
```
system.time(for (i in 1:1000000) {
winnings[i] <- play()
})
## user system elapsed
## 342.041 0.355 342.308
```
The current `score` function is not vectorized. It takes a single slot combination and uses an `if` tree to assign a prize to it. This combination of an `if` tree with a `for` loop suggests that you could write a piece of vectorized code that takes *many* slot combinations and then uses logical subsetting to operate on them all at once.
For example, you could rewrite `get_symbols` to generate *n* slot combinations and return them as an *n* x 3 matrix, like the one that follows. Each row of the matrix will contain one slot combination to be scored:
```
get_many_symbols <- function(n) {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
vec <- sample(wheel, size = 3 * n, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
matrix(vec, ncol = 3)
}
get_many_symbols(5)
## [,1] [,2] [,3]
## [1,] "B" "0" "B"
## [2,] "0" "BB" "7"
## [3,] "0" "0" "BBB"
## [4,] "0" "0" "B"
## [5,] "BBB" "0" "0"
```
You could also rewrite `play` to take a parameter, `n`, and return `n` prizes, in a data frame:
```
play_many <- function(n) {
symb_mat <- get_many_symbols(n = n)
data.frame(w1 = symb_mat[,1], w2 = symb_mat[,2],
w3 = symb_mat[,3], prize = score_many(symb_mat))
}
```
This new function would make it easy to simulate a million, or even 10 million plays of the slot machine, which will be our goal. When we’re finished, you will be able to estimate the payout rate with:
```
# plays <- play_many(10000000))
# mean(plays$prize)
```
Now you just need to write `score_many`, a vectorized (matix\-ized?) version of `score` that takes an *n* x 3 matrix and returns *n* prizes. It will be difficult to write this function because `score` is already quite complicated. I would not expect you to feel confident doing this on your own until you have more practice and experience than we’ve been able to develop here.
Should you like to test your skills and write a version of `score_many`, I recommend that you use the function `rowSums` within your code. It calculates the sum of each row of numbers (or logicals) in a matrix.
If you would like to test yourself in a more modest way, I recommend that you study the following model `score_many` function until you understand how each part works and how the parts work together to create a vectorized function. To do this, it will be helpful to create a concrete example, like this:
```
symbols <- matrix(
c("DD", "DD", "DD",
"C", "DD", "0",
"B", "B", "B",
"B", "BB", "BBB",
"C", "C", "0",
"7", "DD", "DD"), nrow = 6, byrow = TRUE)
symbols
## [,1] [,2] [,3]
## [1,] "DD" "DD" "DD"
## [2,] "C" "DD" "0"
## [3,] "B" "B" "B"
## [4,] "B" "BB" "BBB"
## [5,] "C" "C" "0"
## [6,] "7" "DD" "DD"
```
Then you can run each line of `score_many` against the example and examine the results as you go.
**Exercise 12\.3 (Test Your Understanding)** Study the model `score_many` function until you are satisfied that you understand how it works and could write a similar function yourself.
**Exercise 12\.4 (Advanced Challenge)** Instead of examining the model answer, write your own vectorized version of `score`. Assume that the data is stored in an *n* × 3 matrix where each row of the matrix contains one combination of slots to be scored.
You can use the version of `score` that treats diamonds as wild or the version of `score` that doesn’t. However, the model answer will use the version treating diamonds as wild.
*Solution.* `score_many` is a vectorized version of `score`. You can use it to run the simulation at the start of this section in a little over 20 seconds. This is 17 times faster than using a `for` loop:
```
# symbols should be a matrix with a column for each slot machine window
score_many <- function(symbols) {
# Step 1: Assign base prize based on cherries and diamonds ---------
## Count the number of cherries and diamonds in each combination
cherries <- rowSums(symbols == "C")
diamonds <- rowSums(symbols == "DD")
## Wild diamonds count as cherries
prize <- c(0, 2, 5)[cherries + diamonds + 1]
## ...but not if there are zero real cherries
### (cherries is coerced to FALSE where cherries == 0)
prize[!cherries] <- 0
# Step 2: Change prize for combinations that contain three of a kind
same <- symbols[, 1] == symbols[, 2] &
symbols[, 2] == symbols[, 3]
payoffs <- c("DD" = 100, "7" = 80, "BBB" = 40,
"BB" = 25, "B" = 10, "C" = 10, "0" = 0)
prize[same] <- payoffs[symbols[same, 1]]
# Step 3: Change prize for combinations that contain all bars ------
bars <- symbols == "B" | symbols == "BB" | symbols == "BBB"
all_bars <- bars[, 1] & bars[, 2] & bars[, 3] & !same
prize[all_bars] <- 5
# Step 4: Handle wilds ---------------------------------------------
## combos with two diamonds
two_wilds <- diamonds == 2
### Identify the nonwild symbol
one <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 2] == symbols[, 3]
two <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 1] == symbols[, 3]
three <- two_wilds & symbols[, 1] == symbols[, 2] &
symbols[, 2] != symbols[, 3]
### Treat as three of a kind
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
## combos with one wild
one_wild <- diamonds == 1
### Treat as all bars (if appropriate)
wild_bars <- one_wild & (rowSums(bars) == 2)
prize[wild_bars] <- 5
### Treat as three of a kind (if appropriate)
one <- one_wild & symbols[, 1] == symbols[, 2]
two <- one_wild & symbols[, 2] == symbols[, 3]
three <- one_wild & symbols[, 3] == symbols[, 1]
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
# Step 5: Double prize for every diamond in combo ------------------
unname(prize * 2^diamonds)
}
system.time(play_many(10000000))
## user system elapsed
## 20.942 1.433 22.367
```
### 12\.4\.1 Loops Versus Vectorized Code
In many languages, `for` loops run very fast. As a result, programmers learn to use `for` loops whenever possible when they code. Often these programmers continue to rely on `for` loops when they begin to program in R, usually without taking the simple steps needed to optimize R’s `for` loops. These programmers may become disillusioned with R when their code does not work as fast as they would like. If you think that this may be happening to you, examine how often you are using `for` loops and what you are using them to do. If you find yourself using `for` loops for every task, there is a good chance that you are “speaking R with a C accent.” The cure is to learn to write and use vectorized code.
This doesn’t mean that `for` loops have no place in R. `for` loops are a very useful feature; they can do many things that vectorized code cannot do. You also should not become a slave to vectorized code. Sometimes it would take more time to rewrite code in vectorized format than to let a `for` loop run. For example, would it be faster to let the slot simulation run for 5\.7 minutes or to rewrite `score`?
12\.5 Summary
-------------
Fast code is an important component of data science because you can do more with fast code than you can do with slow code. You can work with larger data sets before computational constraints intervene, and you can do more computation before time constraints intervene. The fastest code in R will rely on the things that R does best: logical tests, subsetting, and element\-wise execution. I’ve called this type of code vectorized code because code written with these operations will take a vector of values as input and operate on each element of the vector at the same time. The majority of the code written in R is already vectorized.
If you use these operations, but your code does not appear vectorized, analyze the sequential steps and parallel cases in your program. Ensure that you’ve used vectorized functions to handle the steps and logical subsetting to handle the cases. Be aware, however, that some tasks cannot be vectorized.
12\.6 Project 3 Wrap\-up
------------------------
You have now written your first program in R, and it is a program that you should be proud of. `play` is not a simple `hello world` exercise, but a real program that does a real task in a complicated way.
Writing new programs in R will always be challenging because programming depends so much on your own creativity, problem\-solving ability, and experience writing similar types of programs. However, you can use the suggestions in this chapter to make even the most complicated program manageable: divide tasks into simple steps and cases, work with concrete examples, and describe possible solutions in English.
This project completes the education you began in [The Very Basics](basics.html#basics). You can now use R to handle data, which has augmented your ability to analyze data. You can:
* Load and store data in your computer—not on paper or in your mind
* Accurately recall and change individual values without relying on your memory
* Instruct your computer to do tedious, or complex, tasks on your behalf
These skills solve an important logistical problem faced by every data scientist: *how can you store and manipulate data without making errors?* However, this is not the only problem that you will face as a data scientist. The next problem will appear when you try to understand the information contained in your data. It is nearly impossible to spot insights or to discover patterns in raw data. A third problem will appear when you try to use your data set to reason about reality, which includes things not contained in your data set. What exactly does your data imply about things outside of the data set? How certain can you be?
I refer to these problems as the logistical, tactical, and strategic problems of data science, as shown in Figure [12\.4](speed.html#fig:venn). You’ll face them whenever you try to learn from data:
* **A logistical problem** \- How can you store and manipulate data without making errors?
* **A tactical problem** \- How can you discover the information contained in your data?
* **A strategic problem** \- How can you use the data to draw conclusions about the world at large?
Figure 12\.4: The three core skill sets of data science: computer programming, data comprehension, and scientific reasoning.
A well\-rounded data scientist will need to be able to solve each of these problems in many different situations. By learning to program in R, you have mastered the logistical problem, which is a prerequisite for solving the tactical and strategic problems.
If you would like to learn how to reason with data, or how to transform, visualize, and explore your data sets with R tools, I recommend the book [*R for Data Science*](http://r4ds.had.co.nz/), the companion volume to this book. *R for Data Science* teaches a simple workflow for transforming, visualizing, and modeling data in R, as well as how to report results with the R Markdown package.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/speed.html |
12 Speed
========
As a data scientist, you need speed. You can work with bigger data and do more ambitious tasks when your code runs fast. This chapter will show you a specific way to write fast code in R. You will then use the method to simulate 10 million plays of your slot machine.
12\.1 Vectorized Code
---------------------
You can write a piece of code in many different ways, but the fastest R code will usually take advantage of three things: logical tests, subsetting, and element\-wise execution. These are the things that R does best. Code that uses these things usually has a certain quality: it is *vectorized*; the code can take a vector of values as input and manipulate each value in the vector at the same time.
To see what vectorized code looks like, compare these two examples of an absolute value function. Each takes a vector of numbers and transforms it into a vector of absolute values (e.g., positive numbers). The first example is not vectorized; `abs_loop` uses a `for` loop to manipulate each element of the vector one at a time:
```
abs_loop <- function(vec){
for (i in 1:length(vec)) {
if (vec[i] < 0) {
vec[i] <- -vec[i]
}
}
vec
}
```
The second example, `abs_set`, is a vectorized version of `abs_loop`. It uses logical subsetting to manipulate every negative number in the vector at the same time:
```
abs_set <- function(vec){
negs <- vec < 0
vec[negs] <- vec[negs] * -1
vec
}
```
`abs_set` is much faster than `abs_loop` because it relies on operations that R does quickly: logical tests, subsetting, and element\-wise execution.
You can use the `system.time` function to see just how fast `abs_set` is. `system.time` takes an R expression, runs it, and then displays how much time elapsed while the expression ran.
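For example, timing a one\-second pause reports roughly one elapsed second (the exact numbers will differ slightly from run to run):
```
system.time(Sys.sleep(1))
## user system elapsed
## 0.001 0.000 1.004
```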
To compare `abs_loop` and `abs_set`, first make a long vector of positive and negative numbers. `long` will contain 10 million values:
```
long <- rep(c(-1, 1), 5000000)
```
`rep` repeats a value, or vector of values, many times. To use `rep`, give it a vector of values and then the number of times to repeat the vector. R will return the results as a new, longer vector.
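For instance, `rep(c(1, 2), 3)` repeats the two\-element vector three times:
```
rep(c(1, 2), 3)
## 1 2 1 2 1 2
```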
You can then use `system.time` to measure how much time it takes each function to evaluate `long`:
```
system.time(abs_loop(long))
## user system elapsed
## 15.982 0.032 16.018
system.time(abs_set(long))
## user system elapsed
## 0.529 0.063 0.592
```
Don’t confuse `system.time` with `Sys.time`, which returns the current time.
The first two columns of the output of `system.time` report how many seconds your computer spent executing the call on the user and system sides of your process, a dichotomy that will vary from OS to OS.
The last column displays how many seconds elapsed while R ran the expression. The results show that `abs_set` calculated the absolute value about 27 times faster than `abs_loop` (16\.018 / 0\.592 \= 27\.06) when applied to a vector of 10 million numbers. You can expect similar speed\-ups whenever you write vectorized code.
**Exercise 12\.1 (How fast is abs?)** Many preexisting R functions are already vectorized and have been optimized to perform quickly. You can make your code faster by relying on these functions whenever possible. For example, R comes with a built\-in absolute value function, `abs`.
Check to see how much faster `abs` computes the absolute value of `long` than `abs_loop` and `abs_set` do.
*Solution.* You can measure the speed of `abs` with `system.time`. It takes `abs` a lightning\-fast 0\.05 seconds to calculate the absolute value of 10 million numbers. This is 0\.592 / 0\.054 \= 10\.96 times faster than `abs_set` and nearly 300 times faster than `abs_loop`:
```
system.time(abs(long))
## user system elapsed
## 0.037 0.018 0.054
```
12\.2 How to Write Vectorized Code
----------------------------------
Vectorized code is easy to write in R because most R functions are already vectorized. Code based on these functions can easily be made vectorized and therefore fast. To create vectorized code:
1. Use vectorized functions to complete the sequential steps in your program.
2. Use logical subsetting to handle parallel cases. Try to manipulate every element in a case at once.
`abs_loop` and `abs_set` illustrate these rules. The functions both handle two cases and perform one sequential step, Figure [12\.1](speed.html#fig:abs). If a number is positive, the functions leave it alone. If a number is negative, the functions multiply it by negative one.
Figure 12\.1: abs\_loop uses a for loop to sift data into one of two cases: negative numbers and nonnegative numbers.
You can identify all of the elements of a vector that fall into a case with a logical test. R will execute the test in element\-wise fashion and return a `TRUE` for every element that belongs in the case. For example, `vec < 0` identifies every value of `vec` that belongs to the negative case. You can use the same logical test to extract the set of negative values with logical subsetting:
```
vec <- c(1, -2, 3, -4, 5, -6, 7, -8, 9, -10)
vec < 0
## FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE
vec[vec < 0]
## -2 -4 -6 -8 -10
```
The plan in Figure [12\.1](speed.html#fig:abs) now requires a sequential step: you must multiply each of the negative values by negative one. All of R’s arithmetic operators are vectorized, so you can use `*` to complete this step in vectorized fashion. `*` will multiply each number in `vec[vec < 0]` by negative one at the same time:
```
vec[vec < 0] * -1
## 2 4 6 8 10
```
Finally, you can use R’s assignment operator, which is also vectorized, to save the new set over the old set in the original `vec` object. Since `<-` is vectorized, the elements of the new set will be paired up to the elements of the old set, in order, and then element\-wise assignment will occur. As a result, each negative value will be replaced by its positive partner, as in Figure [12\.2](speed.html#fig:assignment).
Figure 12\.2: Use logical subsetting to modify groups of values in place. R’s arithmetic and assignment operators are vectorized, which lets you manipulate and update multiple values at once.
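Putting the logical test, the subsetting, and the assignment together, the whole transformation fits on a single vectorized line:
```
vec[vec < 0] <- vec[vec < 0] * -1
vec
## 1 2 3 4 5 6 7 8 9 10
```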
**Exercise 12\.2 (Vectorize a Function)** The following function converts a vector of slot symbols to a vector of new slot symbols. Can you vectorize it? How much faster does the vectorized version work?
```
change_symbols <- function(vec){
for (i in 1:length(vec)){
if (vec[i] == "DD") {
vec[i] <- "joker"
} else if (vec[i] == "C") {
vec[i] <- "ace"
} else if (vec[i] == "7") {
vec[i] <- "king"
}else if (vec[i] == "B") {
vec[i] <- "queen"
} else if (vec[i] == "BB") {
vec[i] <- "jack"
} else if (vec[i] == "BBB") {
vec[i] <- "ten"
} else {
vec[i] <- "nine"
}
}
vec
}
vec <- c("DD", "C", "7", "B", "BB", "BBB", "0")
change_symbols(vec)
## "joker" "ace" "king" "queen" "jack" "ten" "nine"
many <- rep(vec, 1000000)
system.time(change_symbols(many))
## user system elapsed
## 30.057 0.031 30.079
```
*Solution.* `change_symbols` uses a `for` loop to sort values into seven different cases, as demonstrated in Figure [12\.3](speed.html#fig:change).
To vectorize `change_symbols`, create a logical test that can identify each case:
```
vec[vec == "DD"]
## "DD"
vec[vec == "C"]
## "C"
vec[vec == "7"]
## "7"
vec[vec == "B"]
## "B"
vec[vec == "BB"]
## "BB"
vec[vec == "BBB"]
## "BBB"
vec[vec == "0"]
## "0"
```
Figure 12\.3: change\_symbols does something different for each of seven cases.
Then write code that can change the symbols for each case:
```
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
```
When you combine this into a function, you have a vectorized version of `change_symbols` that runs about 14 times faster:
```
change_vec <- function (vec) {
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
vec
}
system.time(change_vec(many))
## user system elapsed
## 1.994 0.059 2.051
```
Or, even better, use a lookup table. Lookup tables are a vectorized method because they rely on R’s vectorized selection operations:
```
change_vec2 <- function(vec){
tb <- c("DD" = "joker", "C" = "ace", "7" = "king", "B" = "queen",
"BB" = "jack", "BBB" = "ten", "0" = "nine")
unname(tb[vec])
}
system.time(change_vec2(many))
## user system elapsed
## 0.687 0.059 0.746
```
Here, a lookup table is 40 times faster than the original function.
`abs_loop` and `change_symbols` illustrate a characteristic of vectorized code: programmers often write slower, nonvectorized code by relying on unnecessary `for` loops, like the one in `change_symbols`. I think this is the result of a general misunderstanding about R. `for` loops do not behave the same way in R as they do in other languages, which means you should write code differently in R than you would in other languages.
When you write in languages like C and Fortran, you must compile your code before your computer can run it. This compilation step optimizes how the `for` loops in the code use your computer’s memory, which makes the `for` loops very fast. As a result, many programmers use `for` loops frequently when they write in C and Fortran.
When you write in R, however, you do not compile your code. You skip this step, which makes programming in R a more user\-friendly experience. Unfortunately, this also means you do not give your loops the speed boost they would receive in C or Fortran. As a result, your loops will run slower than the other operations we have studied: logical tests, subsetting, and element\-wise execution. If you can write your code with the faster operations instead of a `for` loop, you should do so. No matter which language you write in, you should try to use the features of the language that run the fastest.
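If a loop must stay, base R also ships a byte\-code compiler. Compiling a function with `compiler::cmpfun` can narrow the gap a little, though it will not make a loop as fast as vectorized code, and recent versions of R byte\-compile functions automatically, so the gain may be small. A minimal sketch:
```
library(compiler)
abs_loop_cmp <- cmpfun(abs_loop)  # byte-compiled copy of abs_loop
# system.time(abs_loop_cmp(long)) # usually faster than abs_loop, but
#                                 # still slower than abs_set
```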
**if and for**
A good way to spot `for` loops that could be vectorized is to look for combinations of `if` and `for`. `if` can only be applied to one value at a time, which means it is often used in conjunction with a `for` loop. The `for` loop helps apply `if` to an entire vector of values. This combination can usually be replaced with logical subsetting, which will do the same thing but run much faster.
This doesn’t mean that you should never use `for` loops in R. There are still many places in R where `for` loops make sense. `for` loops perform a basic task that you cannot always recreate with vectorized code. `for` loops are also easy to understand and run reasonably fast in R, so long as you take a few precautions.
12\.3 How to Write Fast for Loops in R
--------------------------------------
You can dramatically increase the speed of your `for` loops by doing two things to optimize each loop. First, do as much as you can outside of the `for` loop. Every line of code that you place inside of the `for` loop will be run many, many times. If a line of code only needs to be run once, place it outside of the loop to avoid repetition.
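As a minimal sketch of the first rule (an invented example, not part of the slot machine code), compare a loop that recomputes a constant on every pass with one that computes it once beforehand:
```
# Slower: sqrt(2) is recomputed on every one of the million passes
output <- rep(NA, 1000000)
for (i in 1:1000000) {
  output[i] <- i * sqrt(2)
}

# Faster: compute the constant once, outside of the loop
root2 <- sqrt(2)
output <- rep(NA, 1000000)
for (i in 1:1000000) {
  output[i] <- i * root2
}
```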
Second, make sure that any storage objects that you use with the loop are large enough to contain *all* of the results of the loop. For example, both loops below will need to store one million values. The first loop stores its values in an object named `output` that begins with a length of *one million*:
```
system.time({
output <- rep(NA, 1000000)
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1.709 0.015 1.724
```
The second loop stores its values in an object named `output` that begins with a length of *one*. R will expand the object to a length of one million as it runs the loop. The code in this loop is very similar to the code in the first loop, but the loop takes *37 minutes* longer to run than the first loop:
```
system.time({
output <- NA
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1689.537 560.951 2249.927
```
The two loops do the same thing, so what accounts for the difference? In the second loop, R has to increase the length of `output` by one for each run of the loop. To do this, R needs to find a new place in your computer’s memory that can contain the larger object. R must then copy the `output` vector over and erase the old version of `output` before moving on to the next run of the loop. By the end of the loop, R has rewritten `output` in your computer’s memory one million times.
In the first case, the size of `output` never changes; R can define one `output` object in memory and use it for each run of the `for` loop.
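You can watch this copying happen with base R’s `tracemem`, which reports each time an object is duplicated (it works on R builds with memory profiling enabled, which includes the standard CRAN binaries):
```
output <- NA
tracemem(output)    # returns output's current memory address
output[2] <- 2      # R prints a tracemem message as it copies output
untracemem(output)  # stop tracing the object
```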
The authors of R use low\-level languages like C and Fortran to write basic R functions, many of which use `for` loops. These functions are compiled and optimized before they become a part of R, which makes them quite fast.
Whenever you see `.Primitive`, `.Internal`, or `.Call` written in a function’s definition, you can be confident the function is calling code from another language. You’ll get all of the speed advantages of that language by using the function.
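For example, print a function at the command line to inspect its definition:
```
abs
## function (x) .Primitive("abs")
sum
## function (..., na.rm = FALSE) .Primitive("sum")
```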
12\.4 Vectorized Code in Practice
---------------------------------
To see how vectorized code can help you as a data scientist, consider our slot machine project. In [Loops](loops.html#loops), you calculated the exact payout rate for your slot machine, but you could have estimated this payout rate with a simulation. If you played the slot machine many, many times, the average prize over all of the plays would be a good estimate of the true payout rate.
This method of estimation is based on the law of large numbers and is similar to many statistical simulations. To run this simulation, you could use a `for` loop:
```
winnings <- vector(length = 1000000)
for (i in 1:1000000) {
winnings[i] <- play()
}
mean(winnings)
## 0.9366984
```
The estimated payout rate after 1 million runs is 0\.937, which is very close to the true payout rate of 0\.934\. Note that I’m using the modified `score` function that treats diamonds as wilds.
If you run this simulation, you will notice that it takes a while to run. In fact, the simulation takes 342\.308 seconds to run, which is about 5\.7 minutes. This is not particularly impressive, and you can do better by using vectorized code:
```
system.time(for (i in 1:1000000) {
winnings[i] <- play()
})
## user system elapsed
## 342.041 0.355 342.308
```
The current `score` function is not vectorized. It takes a single slot combination and uses an `if` tree to assign a prize to it. This combination of an `if` tree with a `for` loop suggests that you could write a piece of vectorized code that takes *many* slot combinations and then uses logical subsetting to operate on them all at once.
For example, you could rewrite `get_symbols` to generate *n* slot combinations and return them as an *n* x 3 matrix, like the one that follows. Each row of the matrix will contain one slot combination to be scored:
```
get_many_symbols <- function(n) {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
vec <- sample(wheel, size = 3 * n, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
matrix(vec, ncol = 3)
}
get_many_symbols(5)
## [,1] [,2] [,3]
## [1,] "B" "0" "B"
## [2,] "0" "BB" "7"
## [3,] "0" "0" "BBB"
## [4,] "0" "0" "B"
## [5,] "BBB" "0" "0"
```
You could also rewrite `play` to take a parameter, `n`, and return `n` prizes, in a data frame:
```
play_many <- function(n) {
symb_mat <- get_many_symbols(n = n)
data.frame(w1 = symb_mat[,1], w2 = symb_mat[,2],
w3 = symb_mat[,3], prize = score_many(symb_mat))
}
```
This new function would make it easy to simulate a million, or even 10 million plays of the slot machine, which will be our goal. When we’re finished, you will be able to estimate the payout rate with:
```
# plays <- play_many(10000000)
# mean(plays$prize)
```
Now you just need to write `score_many`, a vectorized (matrix\-ized?) version of `score` that takes an *n* x 3 matrix and returns *n* prizes. It will be difficult to write this function because `score` is already quite complicated. I would not expect you to feel confident doing this on your own until you have more practice and experience than we’ve been able to develop here.
Should you like to test your skills and write a version of `score_many`, I recommend that you use the function `rowSums` within your code. It calculates the sum of each row of numbers (or logicals) in a matrix.
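As a quick illustration (the matrix `m` is invented for this demo), `rowSums` counts the `TRUE`s in each row when you give it a logical matrix, which is exactly what comparisons like `m == "DD"` produce:
```
m <- matrix(c("DD", "DD", "C",
              "0", "DD", "B"), nrow = 2, byrow = TRUE)
rowSums(m == "DD")
## 2 1
```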
If you would like to test yourself in a more modest way, I recommend that you study the following model `score_many` function until you understand how each part works and how the parts work together to create a vectorized function. To do this, it will be helpful to create a concrete example, like this:
```
symbols <- matrix(
c("DD", "DD", "DD",
"C", "DD", "0",
"B", "B", "B",
"B", "BB", "BBB",
"C", "C", "0",
"7", "DD", "DD"), nrow = 6, byrow = TRUE)
symbols
## [,1] [,2] [,3]
## [1,] "DD" "DD" "DD"
## [2,] "C" "DD" "0"
## [3,] "B" "B" "B"
## [4,] "B" "BB" "BBB"
## [5,] "C" "C" "0"
## [6,] "7" "DD" "DD"
```
Then you can run each line of `score_many` against the example and examine the results as you go.
**Exercise 12\.3 (Test Your Understanding)** Study the model `score_many` function until you are satisfied that you understand how it works and could write a similar function yourself.
**Exercise 12\.4 (Advanced Challenge)** Instead of examining the model answer, write your own vectorized version of `score`. Assume that the data is stored in an *n* × 3 matrix where each row of the matrix contains one combination of slots to be scored.
You can use the version of `score` that treats diamonds as wild or the version of `score` that doesn’t. However, the model answer will use the version treating diamonds as wild.
*Solution.* `score_many` is a vectorized version of `score`. You can use it to run the simulation at the start of this section in a little over 20 seconds. This is roughly 15 times faster than the `for` loop, even though it simulates ten times as many plays:
```
# symbols should be a matrix with a column for each slot machine window
score_many <- function(symbols) {
# Step 1: Assign base prize based on cherries and diamonds ---------
## Count the number of cherries and diamonds in each combination
cherries <- rowSums(symbols == "C")
diamonds <- rowSums(symbols == "DD")
## Wild diamonds count as cherries
prize <- c(0, 2, 5)[cherries + diamonds + 1]
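### (cherries + diamonds == 3 indexes past the table, yielding NA;
### every such combination is re-scored in Step 2 or Step 4)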
## ...but not if there are zero real cherries
### (cherries is coerced to FALSE where cherries == 0)
prize[!cherries] <- 0
# Step 2: Change prize for combinations that contain three of a kind
same <- symbols[, 1] == symbols[, 2] &
symbols[, 2] == symbols[, 3]
payoffs <- c("DD" = 100, "7" = 80, "BBB" = 40,
"BB" = 25, "B" = 10, "C" = 10, "0" = 0)
prize[same] <- payoffs[symbols[same, 1]]
# Step 3: Change prize for combinations that contain all bars ------
bars <- symbols == "B" | symbols == "BB" | symbols == "BBB"
all_bars <- bars[, 1] & bars[, 2] & bars[, 3] & !same
prize[all_bars] <- 5
# Step 4: Handle wilds ---------------------------------------------
## combos with two diamonds
two_wilds <- diamonds == 2
### Identify the nonwild symbol
one <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 2] == symbols[, 3]
two <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 1] == symbols[, 3]
three <- two_wilds & symbols[, 1] == symbols[, 2] &
symbols[, 2] != symbols[, 3]
### Treat as three of a kind
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
## combos with one wild
one_wild <- diamonds == 1
### Treat as all bars (if appropriate)
wild_bars <- one_wild & (rowSums(bars) == 2)
prize[wild_bars] <- 5
### Treat as three of a kind (if appropriate)
one <- one_wild & symbols[, 1] == symbols[, 2]
two <- one_wild & symbols[, 2] == symbols[, 3]
three <- one_wild & symbols[, 3] == symbols[, 1]
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
# Step 5: Double prize for every diamond in combo ------------------
unname(prize * 2^diamonds)
}
system.time(play_many(10000000))
## user system elapsed
## 20.942 1.433 22.367
```
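As a sanity check (the expected prizes below are hand\-scored from the payout rules, not output copied from a saved run), applying `score_many` to the six\-row `symbols` example gives one prize per row:
```
score_many(symbols)
## 800 10 10 5 5 320
```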
### 12\.4\.1 Loops Versus Vectorized Code
In many languages, `for` loops run very fast. As a result, programmers learn to use `for` loops whenever possible when they code. Often these programmers continue to rely on `for` loops when they begin to program in R, usually without taking the simple steps needed to optimize R’s `for` loops. These programmers may become disillusioned with R when their code does not work as fast as they would like. If you think that this may be happening to you, examine how often you are using `for` loops and what you are using them to do. If you find yourself using `for` loops for every task, there is a good chance that you are “speaking R with a C accent.” The cure is to learn to write and use vectorized code.
This doesn’t mean that `for` loops have no place in R. `for` loops are a very useful feature; they can do many things that vectorized code cannot do. You also should not become a slave to vectorized code. Sometimes it would take more time to rewrite code in vectorized format than to let a `for` loop run. For example, would it be faster to let the slot simulation run for 5\.7 minutes or to rewrite `score`?
12\.5 Summary
-------------
Fast code is an important component of data science because you can do more with fast code than you can do with slow code. You can work with larger data sets before computational constraints intervene, and you can do more computation before time constraints intervene. The fastest code in R will rely on the things that R does best: logical tests, subsetting, and element\-wise execution. I’ve called this type of code vectorized code because code written with these operations will take a vector of values as input and operate on each element of the vector at the same time. The majority of the code written in R is already vectorized.
If you use these operations, but your code does not appear vectorized, analyze the sequential steps and parallel cases in your program. Ensure that you’ve used vectorized functions to handle the steps and logical subsetting to handle the cases. Be aware, however, that some tasks cannot be vectorized.
12\.6 Project 3 Wrap\-up
------------------------
You have now written your first program in R, and it is a program that you should be proud of. `play` is not a simple `hello world` exercise, but a real program that does a real task in a complicated way.
Writing new programs in R will always be challenging because programming depends so much on your own creativity, problem\-solving ability, and experience writing similar types of programs. However, you can use the suggestions in this chapter to make even the most complicated program manageable: divide tasks into simple steps and cases, work with concrete examples, and describe possible solutions in English.
This project completes the education you began in [The Very Basics](basics.html#basics). You can now use R to handle data, which has augmented your ability to analyze data. You can:
* Load and store data in your computer—not on paper or in your mind
* Accurately recall and change individual values without relying on your memory
* Instruct your computer to do tedious, or complex, tasks on your behalf
These skills solve an important logistical problem faced by every data scientist: *how can you store and manipulate data without making errors?* However, this is not the only problem that you will face as a data scientist. The next problem will appear when you try to understand the information contained in your data. It is nearly impossible to spot insights or to discover patterns in raw data. A third problem will appear when you try to use your data set to reason about reality, which includes things not contained in your data set. What exactly does your data imply about things outside of the data set? How certain can you be?
I refer to these problems as the logistical, tactical, and strategic problems of data science, as shown in Figure [12\.4](speed.html#fig:venn). You’ll face them whenever you try to learn from data:
* **A logistical problem** \- How can you store and manipulate data without making errors?
* **A tactical problem** \- How can you discover the information contained in your data?
* **A strategic problem** \- How can you use the data to draw conclusions about the world at large?
Figure 12\.4: The three core skill sets of data science: computer programming, data comprehension, and scientific reasoning.
A well\-rounded data scientist will need to be able to solve each of these problems in many different situations. By learning to program in R, you have mastered the logistical problem, which is a prerequisite for solving the tactical and strategic problems.
If you would like to learn how to reason with data, or how to transform, visualize, and explore your data sets with R tools, I recommend the book [*R for Data Science*](http://r4ds.had.co.nz/), the companion volume to this book. *R for Data Science* teaches a simple workflow for transforming, visualizing, and modeling data in R, as well as how to report results with the R Markdown package.
12\.1 Vectorized Code
---------------------
You can write a piece of code in many different ways, but the fastest R code will usually take advantage of three things: logical tests, subsetting, and element\-wise execution. These are the things that R does best. Code that uses these things usually has a certain quality: it is *vectorized*; the code can take a vector of values as input and manipulate each value in the vector at the same time.
To see what vectorized code looks like, compare these two examples of an absolute value function. Each takes a vector of numbers and transforms it into a vector of absolute values (e.g., positive numbers). The first example is not vectorized; `abs_loop` uses a `for` loop to manipulate each element of the vector one at a time:
```
abs_loop <- function(vec){
for (i in 1:length(vec)) {
if (vec[i] < 0) {
vec[i] <- -vec[i]
}
}
vec
}
```
The second example, `abs_set`, is a vectorized version of `abs_loop`. It uses logical subsetting to manipulate every negative number in the vector at the same time:
```
abs_sets <- function(vec){
negs <- vec < 0
vec[negs] <- vec[negs] * -1
vec
}
```
`abs_set` is much faster than `abs_loop` because it relies on operations that R does quickly: logical tests, subsetting, and element\-wise execution.
You can use the `system.time` function to see just how fast `abs_set` is. `system.time` takes an R expression, runs it, and then displays how much time elapsed while the expression ran.
To compare `abs_loop` and `abs_set`, first make a long vector of positive and negative numbers. `long` will contain 10 million values:
```
long <- rep(c(-1, 1), 5000000)
```
`rep` repeats a value, or vector of values, many times. To use `rep`, give it a vector of values and then the number of times to repeat the vector. R will return the results as a new, longer vector.
You can then use `system.time` to measure how much time it takes each function to evaluate `long`:
```
system.time(abs_loop(long))
## user system elapsed
## 15.982 0.032 16.018
system.time(abs_sets(long))
## user system elapsed
## 0.529 0.063 0.592
```
Don’t confuse `system.time` with `Sys.time`, which returns the current time.
The first two columns of the output of `system.time` report how many seconds your computer spent executing the call on the user side and system sides of your process, a dichotomy that will vary from OS to OS.
The last column displays how many seconds elapsed while R ran the expression. The results show that `abs_set` calculated the absolute value 30 times faster than `abs_loop` when applied to a vector of 10 million numbers. You can expect similar speed\-ups whenever you write vectorized code.
**Exercise 12\.1 (How fast is abs?)** Many preexisting R functions are already vectorized and have been optimized to perform quickly. You can make your code faster by relying on these functions whenever possible. For example, R comes with a built\-in absolute value function, `abs`.
Check to see how much faster `abs` computes the absolute value of `long` than `abs_loop` and `abs_set` do.
*Solution.* You can measure the speed of `abs` with `system.time`. It takes `abs` a lightning\-fast 0\.05 seconds to calculate the absolute value of 10 million numbers. This is 0\.592 / 0\.054 \= 10\.96 times faster than `abs_set` and nearly 300 times faster than `abs_loop`:
```
system.time(abs(long))
## user system elapsed
## 0.037 0.018 0.054
```
12\.2 How to Write Vectorized Code
----------------------------------
Vectorized code is easy to write in R because most R functions are already vectorized. Code based on these functions can easily be made vectorized and therefore fast. To create vectorized code:
1. Use vectorized functions to complete the sequential steps in your program.
2. Use logical subsetting to handle parallel cases. Try to manipulate every element in a case at once.
`abs_loop` and `abs_set` illustrate these rules. The functions both handle two cases and perform one sequential step, Figure [12\.1](speed.html#fig:abs). If a number is positive, the functions leave it alone. If a number is negative, the functions multiply it by negative one.
Figure 12\.1: abs\_loop uses a for loop to sift data into one of two cases: negative numbers and nonnegative numbers.
You can identify all of the elements of a vector that fall into a case with a logical test. R will execute the test in element\-wise fashion and return a `TRUE` for every element that belongs in the case. For example, `vec < 0` identifies every value of `vec` that belongs to the negative case. You can use the same logical test to extract the set of negative values with logical subsetting:
```
vec <- c(1, -2, 3, -4, 5, -6, 7, -8, 9, -10)
vec < 0
## FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE
vec[vec < 0]
## -2 -4 -6 -8 -10
```
The plan in Figure [12\.1](speed.html#fig:abs) now requires a sequential step: you must multiply each of the negative values by negative one. All of R’s arithmetic operators are vectorized, so you can use `*` to complete this step in vectorized fashion. `*` will multiply each number in `vec[vec < 0]` by negative one at the same time:
```
vec[vec < 0] * -1
## 2 4 6 8 10
```
Finally, you can use R’s assignment operator, which is also vectorized, to save the new set over the old set in the original `vec` object. Since `<-` is vectorized, the elements of the new set will be paired up to the elements of the old set, in order, and then element\-wise assignment will occur. As a result, each negative value will be replaced by its positive partner, as in Figure [12\.2](speed.html#fig:assignment).
Figure 12\.2: Use logical subsetting to modify groups of values in place. R’s arithmetic and assignment operators are vectorized, which lets you manipulate and update multiple values at once.
**Exercise 12\.2 (Vectorize a Function)** The following function converts a vector of slot symbols to a vector of new slot symbols. Can you vectorize it? How much faster does the vectorized version work?
```
change_symbols <- function(vec){
for (i in 1:length(vec)){
if (vec[i] == "DD") {
vec[i] <- "joker"
} else if (vec[i] == "C") {
vec[i] <- "ace"
} else if (vec[i] == "7") {
vec[i] <- "king"
}else if (vec[i] == "B") {
vec[i] <- "queen"
} else if (vec[i] == "BB") {
vec[i] <- "jack"
} else if (vec[i] == "BBB") {
vec[i] <- "ten"
} else {
vec[i] <- "nine"
}
}
vec
}
vec <- c("DD", "C", "7", "B", "BB", "BBB", "0")
change_symbols(vec)
## "joker" "ace" "king" "queen" "jack" "ten" "nine"
many <- rep(vec, 1000000)
system.time(change_symbols(many))
## user system elapsed
## 30.057 0.031 30.079
```
*Solution.* `change_symbols` uses a `for` loop to sort values into seven different cases, as demonstrated in Figure [12\.3](speed.html#fig:change).
To vectorize `change_symbols`, create a logical test that can identify each case:
```
vec[vec == "DD"]
## "DD"
vec[vec == "C"]
## "C"
vec[vec == "7"]
## "7"
vec[vec == "B"]
## "B"
vec[vec == "BB"]
## "BB"
vec[vec == "BBB"]
## "BBB"
vec[vec == "0"]
## "0"
```
Figure 12\.3: change\_many does something different for each of seven cases.
Then write code that can change the symbols for each case:
```
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
```
When you combine this into a function, you have a vectorized version of `change_symbols` that runs about 14 times faster:
```
change_vec <- function (vec) {
vec[vec == "DD"] <- "joker"
vec[vec == "C"] <- "ace"
vec[vec == "7"] <- "king"
vec[vec == "B"] <- "queen"
vec[vec == "BB"] <- "jack"
vec[vec == "BBB"] <- "ten"
vec[vec == "0"] <- "nine"
vec
}
system.time(change_vec(many))
## user system elapsed
## 1.994 0.059 2.051
```
Or, even better, use a lookup table. Lookup tables are a vectorized method because they rely on R’s vectorized selection operations:
```
change_vec2 <- function(vec){
tb <- c("DD" = "joker", "C" = "ace", "7" = "king", "B" = "queen",
"BB" = "jack", "BBB" = "ten", "0" = "nine")
unname(tb[vec])
}
system.time(change_vec(many))
## user system elapsed
## 0.687 0.059 0.746
```
Here, a lookup table is 40 times faster than the original function.
`abs_loop` and `change_many` illustrate a characteristic of vectorized code: programmers often write slower, nonvectorized code by relying on unnecessary `for` loops, like the one in `change_many`. I think this is the result of a general misunderstanding about R. `for` loops do not behave the same way in R as they do in other languages, which means you should write code differently in R than you would in other languages.
When you write in languages like C and Fortran, you must compile your code before your computer can run it. This compilation step optimizes how the `for` loops in the code use your computer’s memory, which makes the `for` loops very fast. As a result, many programmers use `for` loops frequently when they write in C and Fortran.
When you write in R, however, you do not compile your code. You skip this step, which makes programming in R a more user\-friendly experience. Unfortunately, this also means you do not give your loops the speed boost they would receive in C or Fortran. As a result, your loops will run slower than the other operations we have studied: logical tests, subsetting, and element\-wise execution. If you can write your code with the faster operations instead of a `for` loop, you should do so. No matter which language you write in, you should try to use the features of the language that run the fastest.
**if and for**
A good way to spot `for` loops that could be vectorized is to look for combinations of `if` and `for`. `if` can only be applied to one value at a time, which means it is often used in conjunction with a `for` loop. The `for` loop helps apply `if` to an entire vector of values. This combination can usually be replaced with logical subsetting, which will do the same thing but run much faster.
This doesn’t mean that you should never use `for` loops in R. There are still many places in R where `for` loops make sense. `for` loops perform a basic task that you cannot always recreate with vectorized code. `for` loops are also easy to understand and run reasonably fast in R, so long as you take a few precautions.
12\.3 How to Write Fast for Loops in R
--------------------------------------
You can dramatically increase the speed of your `for` loops by doing two things to optimize each loop. First, do as much as you can outside of the `for` loop. Every line of code that you place inside of the `for` loop will be run many, many times. If a line of code only needs to be run once, place it outside of the loop to avoid repetition.
Second, make sure that any storage objects that you use with the loop are large enough to contain *all* of the results of the loop. For example, both loops below will need to store one million values. The first loop stores its values in an object named `output` that begins with a length of *one million*:
```
system.time({
output <- rep(NA, 1000000)
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1.709 0.015 1.724
```
The second loop stores its values in an object named `output` that begins with a length of *one*. R will expand the object to a length of one million as it runs the loop. The code in this loop is very similar to the code in the first loop, but the loop takes *37 minutes* longer to run than the first loop:
```
system.time({
output <- NA
for (i in 1:1000000) {
output[i] <- i + 1
}
})
## user system elapsed
## 1689.537 560.951 2249.927
```
The two loops do the same thing, so what accounts for the difference? In the second loop, R has to increase the length of `output` by one for each run of the loop. To do this, R needs to find a new place in your computer’s memory that can contain the larger object. R must then copy the `output` vector over and erase the old version of `output` before moving on to the next run of the loop. By the end of the loop, R has rewritten `output` in your computer’s memory one million times.
In the first case, the size of `output` never changes; R can define one `output` object in memory and use it for each run of the `for` loop.
The authors of R use low\-level languages like C and Fortran to write basic R functions, many of which use `for` loops. These functions are compiled and optimized before they become a part of R, which makes them quite fast.
Whenever you see `.Primitive`, `.Internal`, or `.Call` written in a function’s definition, you can be confident the function is calling code from another language. You’ll get all of the speed advantages of that language by using the function.
12\.4 Vectorized Code in Practice
---------------------------------
To see how vectorized code can help you as a data scientist, consider our slot machine project. In [Loops](loops.html#loops), you calculated the exact payout rate for your slot machine, but you could have estimated this payout rate with a simulation. If you played the slot machine many, many times, the average prize over all of the plays would be a good estimate of the true payout rate.
This method of estimation is based on the law of large numbers and is similar to many statistical simulations. To run this simulation, you could use a `for` loop:
```
winnings <- vector(length = 1000000)
for (i in 1:1000000) {
winnings[i] <- play()
}
mean(winnings)
## 0.9366984
```
The estimated payout rate after 10 million runs is 0\.937, which is very close to the true payout rate of 0\.934\. Note that I’m using the modified `score` function that treats diamonds as wilds.
If you run this simulation, you will notice that it takes a while to run. In fact, the simulation takes 342,308 seconds to run, which is about 5\.7 minutes. This is not particularly impressive, and you can do better by using vectorized code:
```
system.time(for (i in 1:1000000) {
winnings[i] <- play()
})
## user system elapsed
## 342.041 0.355 342.308
```
The current `score` function is not vectorized. It takes a single slot combination and uses an `if` tree to assign a prize to it. This combination of an `if` tree with a `for` loop suggests that you could write a piece of vectorized code that takes *many* slot combinations and then uses logical subsetting to operate on them all at once.
For example, you could rewrite `get_symbols` to generate *n* slot combinations and return them as an *n* x 3 matrix, like the one that follows. Each row of the matrix will contain one slot combination to be scored:
```
get_many_symbols <- function(n) {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
vec <- sample(wheel, size = 3 * n, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
matrix(vec, ncol = 3)
}
get_many_symbols(5)
## [,1] [,2] [,3]
## [1,] "B" "0" "B"
## [2,] "0" "BB" "7"
## [3,] "0" "0" "BBB"
## [4,] "0" "0" "B"
## [5,] "BBB" "0" "0"
```
You could also rewrite `play` to take a parameter, `n`, and return `n` prizes, in a data frame:
```
play_many <- function(n) {
symb_mat <- get_many_symbols(n = n)
data.frame(w1 = symb_mat[,1], w2 = symb_mat[,2],
w3 = symb_mat[,3], prize = score_many(symb_mat))
}
```
This new function would make it easy to simulate a million, or even 10 million plays of the slot machine, which will be our goal. When we’re finished, you will be able to estimate the payout rate with:
```
# plays <- play_many(10000000))
# mean(plays$prize)
```
Now you just need to write `score_many`, a vectorized (matix\-ized?) version of `score` that takes an *n* x 3 matrix and returns *n* prizes. It will be difficult to write this function because `score` is already quite complicated. I would not expect you to feel confident doing this on your own until you have more practice and experience than we’ve been able to develop here.
Should you like to test your skills and write a version of `score_many`, I recommend that you use the function `rowSums` within your code. It calculates the sum of each row of numbers (or logicals) in a matrix.
If you would like to test yourself in a more modest way, I recommend that you study the following model `score_many` function until you understand how each part works and how the parts work together to create a vectorized function. To do this, it will be helpful to create a concrete example, like this:
```
symbols <- matrix(
c("DD", "DD", "DD",
"C", "DD", "0",
"B", "B", "B",
"B", "BB", "BBB",
"C", "C", "0",
"7", "DD", "DD"), nrow = 6, byrow = TRUE)
symbols
## [,1] [,2] [,3]
## [1,] "DD" "DD" "DD"
## [2,] "C" "DD" "0"
## [3,] "B" "B" "B"
## [4,] "B" "BB" "BBB"
## [5,] "C" "C" "0"
## [6,] "7" "DD" "DD"
```
Then you can run each line of `score_many` against the example and examine the results as you go.
**Exercise 12\.3 (Test Your Understanding)** Study the model `score_many` function until you are satisfied that you understand how it works and could write a similar function yourself.
**Exercise 12\.4 (Advanced Challenge)** Instead of examining the model answer, write your own vectorized version of `score`. Assume that the data is stored in an *n* × 3 matrix where each row of the matrix contains one combination of slots to be scored.
You can use the version of `score` that treats diamonds as wild or the version of `score` that doesn’t. However, the model answer will use the version treating diamonds as wild.
*Solution.* `score_many` is a vectorized version of `score`. You can use it to run the simulation at the start of this section in a little over 20 seconds. This is 17 times faster than using a `for` loop:
```
# symbols should be a matrix with a column for each slot machine window
score_many <- function(symbols) {
# Step 1: Assign base prize based on cherries and diamonds ---------
## Count the number of cherries and diamonds in each combination
cherries <- rowSums(symbols == "C")
diamonds <- rowSums(symbols == "DD")
## Wild diamonds count as cherries
prize <- c(0, 2, 5)[cherries + diamonds + 1]
## ...but not if there are zero real cherries
### (cherries is coerced to FALSE where cherries == 0)
prize[!cherries] <- 0
# Step 2: Change prize for combinations that contain three of a kind
same <- symbols[, 1] == symbols[, 2] &
symbols[, 2] == symbols[, 3]
payoffs <- c("DD" = 100, "7" = 80, "BBB" = 40,
"BB" = 25, "B" = 10, "C" = 10, "0" = 0)
prize[same] <- payoffs[symbols[same, 1]]
# Step 3: Change prize for combinations that contain all bars ------
bars <- symbols == "B" | symbols == "BB" | symbols == "BBB"
all_bars <- bars[, 1] & bars[, 2] & bars[, 3] & !same
prize[all_bars] <- 5
# Step 4: Handle wilds ---------------------------------------------
## combos with two diamonds
two_wilds <- diamonds == 2
### Identify the nonwild symbol
one <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 2] == symbols[, 3]
two <- two_wilds & symbols[, 1] != symbols[, 2] &
symbols[, 1] == symbols[, 3]
three <- two_wilds & symbols[, 1] == symbols[, 2] &
symbols[, 2] != symbols[, 3]
### Treat as three of a kind
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
## combos with one wild
one_wild <- diamonds == 1
### Treat as all bars (if appropriate)
wild_bars <- one_wild & (rowSums(bars) == 2)
prize[wild_bars] <- 5
### Treat as three of a kind (if appropriate)
one <- one_wild & symbols[, 1] == symbols[, 2]
two <- one_wild & symbols[, 2] == symbols[, 3]
three <- one_wild & symbols[, 3] == symbols[, 1]
prize[one] <- payoffs[symbols[one, 1]]
prize[two] <- payoffs[symbols[two, 2]]
prize[three] <- payoffs[symbols[three, 3]]
# Step 5: Double prize for every diamond in combo ------------------
unname(prize * 2^diamonds)
}
system.time(play_many(10000000))
## user system elapsed
## 20.942 1.433 22.367
```
### 12\.4\.1 Loops Versus Vectorized Code
In many languages, `for` loops run very fast. As a result, programmers learn to use `for` loops whenever possible when they code. Often these programmers continue to rely on `for` loops when they begin to program in R, usually without taking the simple steps needed to optimize R’s `for` loops. These programmers may become disillusioned with R when their code does not work as fast as they would like. If you think that this may be happening to you, examine how often you are using `for` loops and what you are using them to do. If you find yourself using `for` loops for every task, there is a good chance that you are “speaking R with a C accent.” The cure is to learn to write and use vectorized code.
This doesn’t mean that `for` loops have no place in R. `for` loops are a very useful feature; they can do many things that vectorized code cannot do. You also should not become a slave to vectorized code. Sometimes it would take more time to rewrite code in vectorized format than to let a `for` loop run. For example, would it be faster to let the slot simulation run for 5\.7 minutes or to rewrite `score`?
### 12\.4\.1 Loops Versus Vectorized Code
In many languages, `for` loops run very fast. As a result, programmers learn to use `for` loops whenever possible when they code. Often these programmers continue to rely on `for` loops when they begin to program in R, usually without taking the simple steps needed to optimize R’s `for` loops. These programmers may become disillusioned with R when their code does not work as fast as they would like. If you think that this may be happening to you, examine how often you are using `for` loops and what you are using them to do. If you find yourself using `for` loops for every task, there is a good chance that you are “speaking R with a C accent.” The cure is to learn to write and use vectorized code.
This doesn’t mean that `for` loops have no place in R. `for` loops are a very useful feature; they can do many things that vectorized code cannot do. You also should not become a slave to vectorized code. Sometimes it would take more time to rewrite code in vectorized format than to let a `for` loop run. For example, would it be faster to let the slot simulation run for 5\.7 minutes or to rewrite `score`?
12\.5 Summary
-------------
Fast code is an important component of data science because you can do more with fast code than you can do with slow code. You can work with larger data sets before computational constraints intervene, and you can do more computation before time constraints intervene. The fastest code in R will rely on the things that R does best: logical tests, subsetting, and element\-wise execution. I’ve called this type of code vectorized code because code written with these operations will take a vector of values as input and operate on each element of the vector at the same time. The majority of the code written in R is already vectorized.
If you use these operations, but your code does not appear vectorized, analyze the sequential steps and parallel cases in your program. Ensure that you’ve used vectorized functions to handle the steps and logical subsetting to handle the cases. Be aware, however, that some tasks cannot be vectorized.
12\.6 Project 3 Wrap\-up
------------------------
You have now written your first program in R, and it is a program that you should be proud of. `play` is not a simple `hello world` exercise, but a real program that does a real task in a complicated way.
Writing new programs in R will always be challenging because programming depends so much on your own creativity, problem\-solving ability, and experience writing similar types of programs. However, you can use the suggestions in this chapter to make even the most complicated program manageable: divide tasks into simple steps and cases, work with concrete examples, and describe possible solutions in English.
This project completes the education you began in [The Very Basics](basics.html#basics). You can now use R to handle data, which has augmented your ability to analyze data. You can:
* Load and store data in your computer—not on paper or in your mind
* Accurately recall and change individual values without relying on your memory
* Instruct your computer to do tedious, or complex, tasks on your behalf
These skills solve an important logistical problem faced by every data scientist: *how can you store and manipulate data without making errors?* However, this is not the only problem that you will face as a data scientist. The next problem will appear when you try to understand the information contained in your data. It is nearly impossible to spot insights or to discover patterns in raw data. A third problem will appear when you try to use your data set to reason about reality, which includes things not contained in your data set. What exactly does your data imply about things outside of the data set? How certain can you be?
I refer to these problems as the logistical, tactical, and strategic problems of data science, as shown in Figure [12\.4](speed.html#fig:venn). You’ll face them whenever you try to learn from data:
* **A logistical problem** \- How can you store and manipulate data without making errors?
* **A tactical problem** \- How can you discover the information contained in your data?
* **A strategic problem** \- How can you use the data to draw conclusions about the world at large?
Figure 12\.4: The three core skill sets of data science: computer programming, data comprehension, and scientific reasoning.
A well\-rounded data scientist will need to be able to solve each of these problems in many different situations. By learning to program in R, you have mastered the logistical problem, which is a prerequisite for solving the tactical and strategic problems.
If you would like to learn how to reason with data, or how to transform, visualize, and explore your data sets with R tools, I recommend the book [*R for Data Science*](http://r4ds.had.co.nz/), the companion volume to this book. *R for Data Science* teaches a simple workflow for transforming, visualizing, and modeling data in R, as well as how to report results with the R Markdown package.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/starting.html |
A Installing R and RStudio
==========================
To get started with R, you need to acquire your own copy. This appendix will show you how to download R as well as RStudio, a software application that makes R easier to use. You’ll go from downloading R to opening your first R session.
Both R and RStudio are free and easy to download.
A.1 How to Download and Install R
---------------------------------
R is maintained by an international team of developers who make the language available through the web page of [The Comprehensive R Archive Network](http://cran.r-project.org). The top of the web page provides three links for downloading R. Follow the link that describes your operating system: Windows, Mac, or Linux.
### A.1\.1 Windows
To install R on Windows, click the “Download R for Windows” link. Then click the “base” link. Next, click the first link at the top of the new page. This link should say something like “Download R 3\.0\.3 for Windows,” except the 3\.0\.3 will be replaced by the most current version of R. The link downloads an installer program, which installs the most up\-to\-date version of R for Windows. Run this program and step through the installation wizard that appears. The wizard will install R into your program files folders and place a shortcut in your Start menu. Note that you’ll need to have all of the appropriate administration privileges to install new software on your machine.
### A.1\.2 Mac
To install R on a Mac, click the “Download R for Mac” link. Next, click on the `R-3.0.3` package link (or the package link for the most current release of R). An installer will download to guide you through the installation process, which is very easy. The installer lets you customize your installation, but the defaults will be suitable for most users. I’ve never found a reason to change them. If your computer requires a password before installing new programs, you’ll need it here.
**Binaries Versus Source**
R can be installed from precompiled binaries or built from source on any operating system. For Windows and Mac machines, installing R from binaries is extremely easy. The binary comes preloaded in its own installer. Although you can build R from source on these platforms, the process is much more complicated and won’t provide much benefit for most users. For Linux systems, the opposite is true. Precompiled binaries can be found for some systems, but it is much more common to build R from source files when installing on Linux. The download pages on [CRAN’s website](http://cran.r-project.org) provide information about building R from source for the Windows, Mac, and Linux platforms.
### A.1\.3 Linux
R comes preinstalled on many Linux systems, but you’ll want the newest version of R if yours is out of date. [The CRAN website](http://cran.r-project.org) provides files to build R from source on Debian, Redhat, SUSE, and Ubuntu systems under the link “Download R for Linux.” Click the link and then follow the directory trail to the version of Linux you wish to install on. The exact installation procedure will vary depending on the Linux system you use. CRAN guides the process by grouping each set of source files with documentation or README files that explain how to install on your system.
**32\-bit Versus 64\-bit**
R comes in both 32\-bit and 64\-bit versions. Which should you use? In most cases, it won’t matter. Both versions use 32\-bit integers, which means they compute numbers to the same numerical precision. The difference occurs in the way each version manages memory. 64\-bit R uses 64\-bit memory pointers, and 32\-bit R uses 32\-bit memory pointers. This means 64\-bit R has a larger memory space to use (and search through).
As a rule of thumb, 32\-bit builds of R are faster than 64\-bit builds, though not always. On the other hand, 64\-bit builds can handle larger files and data sets with fewer memory management problems. In either version, the maximum allowable vector size tops out at around 2 billion elements. If your operating system doesn’t support 64\-bit programs, or your RAM is less than 4 GB, 32\-bit R is for you. The Windows and Mac installers will automatically install both versions if your system supports 64\-bit R.
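The 2 billion figure comes from R’s use of 32\-bit integers for indexing. If you are curious, you can inspect the limit directly in any R session:
```
# the largest integer R can represent in 32 bits
.Machine$integer.max
#> [1] 2147483647
```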
A.2 Using R
-----------
R isn’t a program that you can open and start using, like Microsoft Word or Internet Explorer. Instead, R is a computer language, like C, C\+\+, or UNIX. You use R by writing commands in the R language and asking your computer to interpret them. In the old days, people ran R code in a UNIX terminal window—as if they were hackers in a movie from the 1980s. Now almost everyone uses R with an application called RStudio, and I recommend that you do, too.
**R and UNIX**
You can still run R in a UNIX or BASH window by typing the command:
```
R
```
which opens an R interpreter. You can then do your work and close the interpreter by running *`q()`* when you are finished.
A.3 RStudio
-----------
RStudio *is* an application like Microsoft Word—except that instead of helping you write in English, RStudio helps you write in R. I use RStudio throughout the book because it makes using R much easier. Also, the RStudio interface looks the same for Windows, Mac OS, and Linux. That will help me match the book to your personal experience.
You can [download RStudio](http://www.rstudio.com/ide) for free. Just click the “Download RStudio” button and follow the simple instructions that follow. Once you’ve installed RStudio, you can open it like any other program on your computer—usually by clicking an icon on your desktop.
**The R GUIs**
Windows and Mac users usually do not program from a terminal window, so the Windows and Mac downloads for R come with a simple program that opens a terminal\-like window for you to run R code in. This is what opens when you click the R icon on your Windows or Mac computer. These programs do a little more than the basic terminal window, but not much. You may hear people refer to them as the Windows or Mac R GUIs.
When you open RStudio, a window appears with three panes in it, as in Figure [A.1](starting.html#fig:layout). The largest pane is a console window. This is where you’ll run your R code and see results. The console window is exactly what you’d see if you ran R from a UNIX console or the Windows or Mac GUIs. Everything else you see is unique to RStudio. Hidden in the other panes are a text editor, a graphics window, a debugger, a file manager, and much more. You’ll learn about these panes as they become useful throughout the course of this book.
Figure A.1: The RStudio IDE for R.
**Do I still need to download R?**
Even if you use RStudio, you’ll still need to download R to your computer. RStudio helps you use the version of R that lives on your computer, but it doesn’t come with a version of R on its own.
A.4 Opening R
-------------
Now that you have both R and RStudio on your computer, you can begin using R by opening the RStudio program. Open RStudio just as you would any program, by clicking on its icon or by typing “RStudio” at the Windows Run prompt.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/packages2.html |
B R Packages
============
Many of R’s most useful functions do not come preloaded when you start R, but reside in *packages* that can be installed on top of R. R packages are similar to libraries in C, C\+\+, and JavaScript, packages in Python, and gems in Ruby. An R package bundles together useful functions, help files, and data sets. You can use these functions within your own R code once you load the package they live in. Usually the contents of an R package are all related to a single type of task, which the package helps solve. R packages will let you take advantage of R’s most useful features: its large community of package writers (many of whom are active data scientists) and its prewritten routines for handling many common (and exotic) data\-science tasks.
**Base R**
You may hear R users (or me) refer to “base R.” What is base R? It is just the collection of R functions that gets loaded every time you start R. These functions provide the basics of the language, and you don’t have to load a package before you can use them.
B.1 Installing Packages
-----------------------
To use an R package, you must first install it on your computer and then load it in your current R session. The easiest way to install an R package is with the `install.packages` R function. Open R and type the following into the command line:
```
install.packages("<package name>")
```
This will search for the specified package in the collection of packages hosted on the CRAN site. When R finds the package, it will download it into a libraries folder on your computer. R can access the package here in future R sessions without reinstalling it. Anyone can write an R package and disseminate it as they like; however, almost all R packages are published through the CRAN website. CRAN tests each R package before publishing it. This doesn’t eliminate every bug inside a package, but it does mean that you can trust a package on CRAN to run in the current version of R on your OS.
You can install multiple packages at once by linking their names with R’s concatenate function, `c`. For example, to install the ggplot2, reshape2, and dplyr packages, run:
```
install.packages(c("ggplot2", "reshape2", "dplyr"))
```
If this is your first time installing a package, R will prompt you to choose an online mirror to install from. Mirrors are listed by location. Your downloads should be quickest if you select a mirror that is close to you. If you want to download a new package, try the Austria mirror first. This is the main CRAN repository, and new packages can sometimes take a couple of days to make it around to all of the other mirrors.
B.2 Loading Packages
--------------------
Installing a package doesn’t immediately place its functions at your fingertips. It just places them on your computer. To use an R package, you next have to load it in your R session with the command:
```
library(<package name>)
```
Notice that the quotation marks have disappeared. You can use them if you like, but quotation marks are optional for the `library` command. (This is not true for the `install.packages` command).
`library` will make all of the package’s functions, data sets, and help files available to you until you close your current R session. The next time you begin an R session, you’ll have to reload the package with `library` if you want to use it, but you won’t have to reinstall it. You only have to install each package once. After that, a copy of the package will live in your R library. To see which packages you currently have in your R library, run:
```
library()
```
`library()` also shows the path to your actual R library, which is the folder that contains your R packages. You may notice many packages that you don’t remember installing. This is because R automatically downloads a set of useful packages when you first install R.
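Being installed is not the same as being loaded. If you want to see which packages are attached in your current R session, base R also provides the `search()` function:
```
# lists the packages attached to the current session's search path
search()
```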
**Install packages from (almost) anywhere**
The `devtools` R package makes it easy to install packages from locations other than the CRAN website. devtools provides functions like `install_github`, `install_gitorious`, `install_bitbucket`, and `install_url`. These work similarly to `install.packages`, but they search new locations for R packages. `install_github` is especially useful because many R developers provide development versions of their packages on GitHub. The development version of a package will contain a sneak peek of new functions and patches, but it may not be as stable or as bug\-free as the CRAN version.
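For example, here is a sketch of installing a development version from GitHub (the repository name below is a placeholder, not a real package):
```
# install.packages("devtools") # run once, if you don't have it yet
library(devtools)
install_github("username/packagename") # placeholder "user/repo" name
```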
Why does R make you bother with installing and loading packages? You can imagine an R where every package came preloaded, but this would be a very large and slow program. As of May 6, 2014, the CRAN website hosts 5,511 packages. It is simpler to only install and load the packages that you want to use when you want to use them. This keeps your copy of R fast because it has fewer functions and help pages to search through at any one time. The arrangement has other benefits as well. For example, it is possible to update your copy of an R package without updating your entire copy of R.
**What’s the best way to learn about R packages?**
It is difficult to use an R package if you don’t know that it exists. You could go to the CRAN website and click the Packages link to see a list of available packages, but you’ll have to wade through thousands of them. Moreover, many R packages do the same things.
How do you know which package does them best? The R\-packages [mailing list](http://stat.ethz.ch/mailman/listinfo/r-packages) is a place to start. It sends out announcements of new packages and maintains an archive of old announcements. Blogs that aggregate posts about R can also provide valuable leads. I recommend [R\-bloggers](http://www.r-bloggers.com). RStudio maintains a list of some of the most useful R packages in the Getting Started section of <http://support.rstudio.com>. Finally, CRAN groups together some of the most useful—and most respected—packages by [subject area](http://cran.r-project.org/web/views). This is an excellent place to learn about the packages designed for your area of work.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/updating.html |
C Updating R and Its Packages
=============================
The R Core Development Team continuously hones the R language by catching bugs, improving performance, and updating R to work with new technologies. As a result, new versions of R are released several times a year. The easiest way to stay current with R is to periodically check [the CRAN website](http://cran.r-project.org). The website is updated for each new release and makes the release available for download. You’ll have to install the new release. The process is the same as when you first installed R.
Don’t worry if you’re not interested in staying up\-to\-date on R Core’s doings. R changes only slightly between releases, and you’re not likely to notice the differences. However, updating to the current version of R is a good place to start if you ever encounter a bug that you can’t explain.
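To check which version of R you are currently running, ask R itself:
```
# prints something like "R version 3.1.0 (2014-04-10)"
R.version.string
```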
RStudio also constantly improves its product. You can acquire the newest updates just by downloading them from [RStudio](http://www.rstudio.com/ide).
C.1 R Packages
--------------
Package authors occasionally release new versions of their packages to add functions, fix bugs, or improve performance. The `update.packages` command checks whether you have the most current version of a package and installs the most current version if you do not. Unlike `install.packages`, `update.packages` does not take package names as its first argument (that slot is a library path); pass the names to the `oldPkgs` argument instead. If you already have ggplot2, reshape2, and dplyr on your computer, it’d be a good idea to check for updates before you use them:
```
update.packages(oldPkgs = c("ggplot2", "reshape2", "dplyr"))
```
You should start a new R session after updating packages. If you have a package loaded when you update it, you’ll have to close your R session and open a new one to begin using the updated version of the package.
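If you’d like to see what is out of date before installing anything, `old.packages()` lists each installed package that has a newer version available on CRAN:
```
# which installed packages have newer versions on CRAN?
old.packages()
```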
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/dataio.html |
D Loading and Saving Data in R
==============================
This appendix will show you how to load and save data into R from plain\-text files, R files, and Excel spreadsheets. It will also show you the R packages that you can use to load data from databases and other common programs, like SAS and MATLAB.
D.1 Data Sets in Base R
-----------------------
R comes with many data sets preloaded in the `datasets` package, which comes with base R. These data sets are not very interesting, but they give you a chance to test code or make a point without having to load a data set from outside R. You can see a list of R’s data sets as well as a short description of each by running:
```
help(package = "datasets")
```
To use a data set, just type its name. Each data set is already presaved as an R object. For example:
```
iris
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5.0 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
```
However, R’s data sets are no substitute for your own data, which you can load into R from a wide variety of file formats. But before you load any data files into R, you’ll need to determine where your *working directory* is.
D.2 Working Directory
---------------------
Each time you open R, it links itself to a directory on your computer, which R calls the working directory. This is where R will look for files when you attempt to load them, and it is where R will save files when you save them. The location of your working directory will vary on different computers. To determine which directory R is using as your working directory, run:
```
getwd()
## "/Users/garrettgrolemund"
```
You can place data files straight into the folder that is your working directory, or you can move your working directory to where your data files are. You can move your working directory to any folder on your computer with the function `setwd`. Just give `setwd` the file path to your new working directory. I prefer to set my working directory to a folder dedicated to whichever project I am currently working on. That way I can keep all of my data, scripts, graphs, and reports in the same place. For example:
```
setwd("~/Users/garrettgrolemund/Documents/Book_Project")
```
If the file path does not begin with your root directory, R will assume that it begins at your current working directory.
You can also change your working directory by clicking on Session \> Set Working Directory \> Choose Directory in the RStudio menu bar. The Windows and Mac GUIs have similar options. If you start R from a UNIX command line (as on Linux machines), the working directory will be whichever directory you were in when you called R.
You can see what files are in your working directory with `list.files()`. If you see the file that you would like to open in your working directory, then you are ready to proceed. How you open files in your working directory will depend on which type of file you would like to open.
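For example (the `pattern` argument takes a regular expression):
```
list.files()                     # everything in the working directory
list.files(pattern = "\\.csv$")  # only the files that end in .csv
```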
D.3 Plain\-text Files
---------------------
Plain\-text files are one of the most common ways to save data. They are very simple and can be read by many different computer programs—even the most basic text editors. For this reason, public data often comes as plain\-text files. For example, the Census Bureau, the Social Security Administration, and the Bureau of Labor Statistics all make their data available as plain\-text files.
Here’s how the royal flush data set from [R Objects](r-objects.html#r-objects) would appear as a plain\-text file (I’ve added a value column):
```
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
```
A plain\-text file stores a table of data in a text document. Each row of the table is saved on its own line, and a simple convention is used to separate the cells within a row. Often cells are separated by a comma, but they can also be separated by a tab, a pipe delimiter (i.e., `|` ), or any other character. Each file only uses one method of separating cells, which minimizes confusion. Within each cell, data appears as you’d expect to see it, as words and numbers.
All plain\-text files can be saved with the extension *.txt* (for text), but sometimes a file will receive a special extension that advertises how it separates data\-cell entries. Since entries in the data set mentioned earlier are separated with a comma, this file would be a *comma\-separated\-values* file and would usually be saved with the extension *.csv*.
### D.3\.1 read.table
To load a plain\-text file, use `read.table`. The first argument of `read.table` should be the name of your file (if it is in your working directory), or the file path to your file (if it is not in your working directory). If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory. You can give `read.table` other arguments as well. The two most important are `sep` and `header`.
If the royal flush data set was saved as a file named *poker.csv* in your working directory, you could load it with:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely: R wants to convert every character string into a factor. This was R’s default behavior prior to R 4.0.0, but I think it is a mistake. Sometimes factors are useful. At other times, they’re clearly the wrong data type for the job. Factors also cause weird behavior, especially when you want to display data, and this behavior can be surprising if you didn’t realize that R converted your data to factors. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, it is easy to do this. (Since R 4.0.0, `stringsAsFactors` defaults to `FALSE`; setting it explicitly still does no harm and makes your intent clear.)
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session or change the global default back by running:
```
options(stringsAsFactors = TRUE)
```
### D.3\.2 The read Family
R also comes with some prepackaged short cuts for `read.table`, shown in Table [D.1](dataio.html#tab:shortcuts).
Table D.1: R’s read functions. You can overwrite any of the default arguments as necessary.
| Function | Defaults | Use |
| --- | --- | --- |
| `read.table` | sep \= " ", header \= FALSE | General\-purpose read function |
| `read.csv` | sep \= ",", header \= TRUE | Comma\-separated values (CSV) files |
| `read.delim` | sep \= "\t", header \= TRUE | Tab\-delimited files |
| `read.csv2` | sep \= ";", header \= TRUE, dec \= "," | CSV files with European decimal format |
| `read.delim2` | sep \= "\t", header \= TRUE, dec \= "," | Tab\-delimited files with European decimal format |
The first shortcut, `read.csv`, behaves just like `read.table` but automatically sets `sep = ","` and `header = TRUE`, which can save you some typing:
```
poker <- read.csv("poker.csv")
```
`read.delim` automatically sets `sep` to the tab character, which is very handy for reading tab delimited files. These are files where each cell is separated by a tab. `read.delim` also sets `header = TRUE` by default.
`read.delim2` and `read.csv2` exist for European R users. These functions tell R that the data uses a comma instead of a period to denote decimal places. (If you’re wondering how this works with CSV files, CSV2 files usually separate cells with a semicolon, not a comma.)
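As a quick sketch, these two calls read the same European\-style file (the file name here is illustrative):
```
poker <- read.csv2("poker_eu.csv")
# the same call spelled out with read.table:
poker <- read.table("poker_eu.csv", sep = ";", dec = ",", header = TRUE)
```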
**Import Dataset**
You can also load plain text files with RStudio’s Import Dataset button, as described in [Loading Data](r-objects.html#loading). Import Dataset provides a GUI version of `read.table`.
### D.3\.3 read.fwf
One type of plain\-text file defies the pattern by using its layout to separate data cells. Each row is placed in its own line (as with other plain\-text files), and then each column begins at a specific number of characters from the lefthand side of the document. To achieve this, an arbitrary number of character spaces is added to the end of each entry to correctly position the next entry. These documents are known as *fixed\-width files* and usually end with the extension *.fwf*.
Here’s one way the royal flush data set could look as a fixed\-width file. In each row, the suit entry begins exactly 10 characters from the start of the line. It doesn’t matter how many characters appeared in the first cell of each row:
```
card suit value
ace spades 14
king spades 13
queen spades 12
jack spades 11
10 spades 10
```
Fixed\-width files look nice to human eyes (but no better than a tab\-delimited file); however, they can be difficult to work with. Perhaps because of this, R comes with a function for reading fixed\-width files, but no function for saving them. Unfortunately, US government agencies seem to like fixed\-width files, and you’ll likely encounter one or more during your career.
You can read fixed\-width files into R with the function `read.fwf`. The function takes the same arguments as `read.table` but requires an additional argument, `widths`, which should be a vector of numbers. The *i*th entry of the `widths` vector should state the width (in characters) of the *i*th column of the data set.
If the aforementioned fixed\-width royal flush data was saved as *poker.fwf* in your working directory, you could read it with:
```
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), header = TRUE)
```
### D.3\.4 HTML Links
Many data files are made available on the Internet at their own web address. If you are connected to the Internet, you can open these files straight into R with `read.table`, `read.csv`, etc. You can pass a web address into the file name argument for any of R’s data\-reading functions. As a result, you could read in the poker data set from a web address like *http://…/poker.csv* with:
```
poker <- read.csv("http://.../poker.csv")
```
That’s obviously not a real address, but here’s something that would work—if you can manage to type it!
```
deck <- read.csv("https://gist.githubusercontent.com/garrettgman/9629323/raw/ee5dfc039fd581cb467cc69c226ea2524913c3d8/deck.csv")
```
Just make sure that the web address links directly to the file and not to a web page that links to the file. Usually, when you visit a data file’s web address, the file will begin to download or the raw data will appear in your browser window.
Note that websites that begin with *https://* are secure websites, which means older versions of R may not be able to access the data provided at these links directly.
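If a direct read fails on your setup, one workaround is to download the file first and then read the local copy (the address below is the same placeholder as before):
```
# download the file, then read it from your working directory
download.file("https://.../poker.csv", destfile = "poker.csv")
poker <- read.csv("poker.csv")
```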
### D.3\.5 Saving Plain\-Text Files
Once your data is in R, you can save it to any file format that R supports. If you’d like to save it as a plain\-text file, you can use the `write` family of functions. The three basic write functions appear in Table [D.2](dataio.html#tab:write). Use `write.csv` to save your data as a *.csv* file and `write.table` to save your data as a tab\-delimited document or a document with more exotic separators.
Table D.2: R saves data sets to plain\-text files with the write family of functions
| File format | Function and syntax |
| --- | --- |
| **.csv** | `write.csv(r_object, file = filepath, row.names = FALSE)` |
| **.csv** (with European decimal notation) | `write.csv2(r_object, file = filepath, row.names = FALSE)` |
| tab delimited | `write.table(r_object, file = filepath, sep = "\t", row.names=FALSE)` |
The first argument of each function is the R object that contains your data set. The `file` argument is the file name (including extension) that you wish to give the saved data. By default, each function will save your data into your working directory. However, you can supply a file path to the file argument. R will oblige by saving the file at the end of the file path. If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.
For example, you can save the (hypothetical) poker data frame to a subdirectory named *data* within your working directory with the command:
```
write.csv(poker, "data/poker.csv", row.names = FALSE)
```
Keep in mind that `write.csv` and `write.table` cannot create new directories on your computer. Each folder in the file path must exist before you try to save a file with it.
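A small sketch that guards against a missing folder (`dir.exists` requires R 3.2.0 or later):
```
# create the folder first if it doesn't already exist
if (!dir.exists("data")) dir.create("data")
write.csv(poker, "data/poker.csv", row.names = FALSE)
```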
The `row.names` argument prevents R from saving the data frame’s row names as a column in the plain\-text file. You might have noticed that R automatically names each row in a data frame with a number. For example, each row in our poker data frame appears with a number next to it:
```
poker
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 10 spades 10
```
These row numbers are helpful, but can quickly accumulate if you start saving them. R will add a new set of numbers by default each time you read the file back in. Avoid this by always setting `row.names = FALSE` when you use a function in the `write` family.
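Here is the round trip that causes the trouble, as a sketch:
```
write.csv(poker, "poker.csv")    # row names are quietly saved as a column
poker2 <- read.csv("poker.csv")  # poker2 gains an extra column (often "X")
```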
### D.3\.6 Compressing Files
To compress a plain\-text file, surround the file name or file path with the function `bzfile`, `gzfile`, or `xzfile`. For example:
```
write.csv(poker, file = bzfile("data/poker.csv.bz2"), row.names = FALSE)
```
Each of these functions will compress the output with a different type of compression format, shown in Table [D.3](dataio.html#tab:compression).
Table D.3: R comes with three helper functions for compressing files
| Function | Compression type |
| --- | --- |
| `bzfile` | bzip2 |
| `gzfile` | gnu zip (gzip) |
| `xzfile` | xz compression |
It is a good idea to adjust your file’s extension to reflect the compression. R’s `read` functions will open plain\-text files compressed in any of these formats. For example, you could read a compressed file named *poker.csv.bz2* with:
```
read.csv("poker.csv.bz2")
```
or:
```
read.csv("data/poker.csv.bz2")
```
depending on where the file is saved.
D.4 R Files
-----------
R provides two file formats of its own for storing data, *.RDS* and *.RData*. RDS files can store a single R object, and RData files can store multiple R objects.
You can open an RDS file with `readRDS`. For example, if the royal flush data was saved as *poker.RDS*, you could open it with:
```
poker <- readRDS("poker.RDS")
```
Opening RData files is even easier. Simply run the function `load` with the file:
```
load("file.RData")
```
There’s no need to assign the output to an object. The R objects in your RData file will be loaded into your R session with their original names. RData files can contain multiple R objects, so loading one may read in multiple objects. `load` doesn’t tell you how many objects it is reading in, nor what their names are, so it pays to know a little about the RData file before you load it.
If worse comes to worst, you can keep an eye on the environment pane in RStudio as you load an RData file. It displays all of the objects that you have created or loaded during your R session. Another useful trick is to put parentheses around your load command like so, `(load("poker.RData"))`. This will cause R to print out the names of each object it loads from the file.
Both `readRDS` and `load` take a file path as their first argument, just like R’s other read and write functions. If your file is in your working directory, the file path will be the file name.
### D.4\.1 Saving R Files
You can save an R object like a data frame as either an RData file or an RDS file. RData files can store multiple R objects at once, but RDS files are the better choice because they foster reproducible code.
To save data as an RData object, use the `save` function. To save data as an RDS object, use the `saveRDS` function. In each case, the first argument should be the name of the R object you wish to save. You should then include a `file` argument that has the file name or file path you want to save the data set to.
For example, if you have three R objects, `a`, `b`, and `c`, you could save them all in the same RData file and then reload them in another R session:
```
a <- 1
b <- 2
c <- 3
save(a, b, c, file = "stuff.RData")
load("stuff.RData")
```
However, if you forget the names of your objects or give your file to someone else to use, it will be difficult to determine what was in the file—even after you (or they) load it. The user interface for RDS files is much clearer. You can save only one object per file, and whoever loads it can decide what they want to call their new data. As a bonus, you don’t have to worry about `load` overwriting any R objects that happened to have the same name as the objects you are loading:
```
saveRDS(a, file = "stuff.RDS")
a <- readRDS("stuff.RDS")
```
Saving your data as an R file offers some advantages over saving your data as a plain\-text file. R automatically compresses the file and will also save any R\-related metadata associated with your object. This can be handy if your data contains factors, dates and times, or class attributes. You won’t have to reparse this information into R the way you would if you converted everything to a text file.
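A minimal sketch of the difference (the file names are illustrative):
```
d <- data.frame(day = as.Date("2014-05-06"), n = 1)

saveRDS(d, "d.RDS")
class(readRDS("d.RDS")$day)   # "Date" -- the class survives

write.csv(d, "d.csv", row.names = FALSE)
class(read.csv("d.csv")$day)  # a string (or factor) that must be reparsed
```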
On the other hand, R files cannot be read by many other programs, which makes them inefficient for sharing. They may also create a problem for long\-term storage if you don’t think you’ll have a copy of R when you reopen the files.
D.5 Excel Spreadsheets
----------------------
Microsoft Excel is a popular spreadsheet program that has become almost industry standard in the business world. There is a good chance that you will need to work with an Excel spreadsheet in R at least once in your career. You can read spreadsheets into R and also save R data as a spreadsheet in a variety of ways.
### D.5\.1 Export from Excel
The best method for moving data from Excel to R is to export the spreadsheet from Excel as a *.csv* or *.txt* file. Not only will R be able to read the text file, so will any other data analysis software. Text files are the lingua franca of data storage.
Exporting the data solves another difficulty as well. Excel uses proprietary formats and metadata that will not easily transfer into R. For example, a single Excel file can include multiple spreadsheets, each with their own columns and macros. When Excel exports the file as a *.csv* or *.txt*, it makes sure this format is transferred into a plain\-text file in the most appropriate way. R may not be able to manage the conversion as efficiently.
To export data from Excel, open the Excel spreadsheet and then go to Save As in the Microsoft Office Button menu. Then choose CSV in the Save as type box that appears and save the files. You can then read the file into R with the `read.csv` function.
### D.5\.2 Copy and Paste
You can also copy portions of an Excel spreadsheet and paste them into R. To do this, open the spreadsheet and select the cells you wish to read into R. Then select Edit \> Copy in the menu bar—or use a keyboard shortcut—to copy the cells to your clipboard.
On most operating systems, you can read the data stored in your clipboard into R with:
```
read.table("clipboard")
```
On Macs you will need to use:
```
read.table(pipe("pbpaste"))
```
If the cells contain values with spaces in them, this will disrupt `read.table`. You can try another `read` function (or just formally export the data from Excel) before reading it into R.
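Because Excel places copied cells on the clipboard as tab\-separated text, telling `read.table` to split on tabs usually avoids the problem:
```
read.table("clipboard", sep = "\t", header = TRUE)      # most systems
read.table(pipe("pbpaste"), sep = "\t", header = TRUE)  # Macs
```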
### D.5\.3 XLConnect
Many packages have been written to help you read Excel files directly into R. Unfortunately, many of these packages do not work on all operating systems. Others have been made out of date by the *.xlsx* file format. One package that does work on all operating systems (and gets good reviews) is the XLConnect package. To use it, you’ll need to install and load the package:
```
install.packages("XLConnect")
library(XLConnect)
```
XLConnect relies on Java to be platform independent. So when you first open XLConnect, RStudio may ask to download a Java Runtime Environment if you do not already have one.
### D.5\.4 Reading Spreadsheets
You can use XLConnect to read in an Excel spreadsheet with either a one\- or a two\-step process. I’ll start with the two\-step process. First, load an Excel workbook with `loadWorkbook`. `loadWorkbook` can load both *.xls* and *.xlsx* files. It takes one argument: the file path to your Excel workbook (this will be the name of the workbook if it is saved in your working directory):
```
wb <- loadWorkbook("file.xlsx")
```
Next, read a spreadsheet from the workbook with `readWorksheet`, which takes several arguments. The first argument should be a workbook object created with `loadWorkbook`. The next argument, `sheet`, should be the name of the spreadsheet in the workbook that you would like to read into R. This will be the name that appears on the bottom tab of the spreadsheet. You can also give `sheet` a number, which specifies the sheet that you want to read in (one for the first sheet, two for the second, and so on).
`readWorksheet` then takes four arguments that specify a bounding box of cells to read in: `startRow`, `startCol`, `endRow`, and `endCol`. Use `startRow` and `startCol` to describe the cell in the top\-left corner of the bounding box of cells that you wish to read in. Use `endRow` and `endCol` to specify the cell in the bottom\-right corner of the bounding box. Each of these arguments takes a number. If you do not supply bounding arguments, `readWorksheet` will read in the rectangular region of cells in the spreadsheet that appears to contain data. `readWorksheet` will assume that this region contains a header row, but you can tell it otherwise with `header = FALSE`.
So to read in the first worksheet from `wb`, you could use:
```
sheet1 <- readWorksheet(wb, sheet = 1, startRow = 0, startCol = 0,
endRow = 100, endCol = 3)
```
R will save the output as a data frame. All of the arguments in `readWorksheet` except the first are vectorized, so you can use it to read in multiple sheets from the same workbook at once (or multiple cell regions from a single worksheet). In this case, `readWorksheet` will return a list of data frames.
You can combine these two steps with `readWorksheetFromFile`. It takes the file argument from `loadWorkbook` and combines it with the arguments from `readWorksheet`. You can use it to read one or more sheets straight from an Excel file:
```
sheet1 <- readWorksheetFromFile("file.xlsx", sheet = 1, startRow = 0,
startCol = 0, endRow = 100, endCol = 3)
```
### D.5\.5 Writing Spreadsheets
Writing to an Excel spreadsheet is a four\-step process. First, you need to set up a workbook object with `loadWorkbook`. This works just as before, except if you are not using an existing Excel file, you should add the argument `create = TRUE`. XLConnect will create a blank workbook. When you save it, XLConnect will write it to the file location that you specified here with `loadWorkbook`:
```
wb <- loadWorkbook("file.xlsx", create = TRUE)
```
Next, you need to create a worksheet inside your workbook object with `createSheet`. Tell `createSheet` which workbook to place the sheet in and which name to use for the sheet.
```
createSheet(wb, "Sheet 1")
```
Then you can save your data frame or matrix to the sheet with `writeWorksheet`. The first argument of `writeWorksheet`, `object`, is the workbook to write the data to. The second argument, `data`, is the data to write. The third argument, `sheet`, is the name of the sheet to write it to. The next two arguments, `startRow` and `startCol`, tell R where in the spreadsheet to place the upper\-left cell of the new data. These arguments each default to 1\. Finally, you can use `header` to tell R whether your column names should be written with the data:
```
writeWorksheet(wb, data = poker, sheet = "Sheet 1")
```
Once you have finished adding sheets and data to your workbook, you can save it by running `saveWorkbook` on the workbook object. R will save the workbook to the file name or path you provided in `loadWorkbook`. If this leads to an existing Excel file, R will overwrite it. If it leads to a new file, R will create it.
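For example, to save `wb` to the location you gave `loadWorkbook`:
```
saveWorkbook(wb)
```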
You can also collapse these steps into a single call with `writeWorksheetToFile`, like this:
```
writeWorksheetToFile("file.xlsx", data = poker, sheet = "Sheet 1",
startRow = 1, startCol = 1)
```
The XLConnect package also lets you do more advanced things with Excel spreadsheets, such as writing to a named region in a spreadsheet, working with formulas, and assigning styles to cells. You can read about these features in XLConnect’s vignette, which is accessible by loading XLConnect and then running:
```
vignette("XLConnect")
```
D.6 Loading Files from Other Programs
-------------------------------------
You should follow the same advice I gave you for Excel files whenever you wish to work with file formats native to other programs: open the file in the original program and export the data as a plain\-text file, usually a CSV. This will ensure the most faithful transcription of the data in the file, and it will usually give you the most options for customizing how the data is transcribed.
Sometimes, however, you may acquire a file but not the program it came from. As a result, you won’t be able to open the file in its native program and export it as a text file. In this case, you can use one of the functions in Table [D.4](dataio.html#tab:others) to open the file. These functions mostly come in R’s `foreign` package. Each attempts to read in a different file format with as few hiccups as possible.
Table D.4: A number of functions will attempt to read the file types of other data\-analysis programs
| File format | Function | Library |
| --- | --- | --- |
| ESRI ArcGIS | `read.shapefile` | shapefiles |
| MATLAB | `readMat` | R.matlab |
| Minitab | `read.mtp` | foreign |
| SAS (permanent data set) | `read.ssd` | foreign |
| SAS (XPORT format) | `read.xport` | foreign |
| SPSS | `read.spss` | foreign |
| Stata | `read.dta` | foreign |
| Systat | `read.systat` | foreign |
### D.6\.1 Connecting to Databases
You can also use R to connect to a database and read in data.
Use the RODBC package to connect to databases through an ODBC connection.
Use the DBI package to connect to databases through individual drivers. The DBI package provides a common syntax for working with different databases. You will have to download a database\-specific package to use in conjunction with DBI. These packages provide the API for the native drivers of different database programs. For MySQL use RMySQL, for SQLite use RSQLite, for Oracle use ROracle, for PostgreSQL use RPostgreSQL, and for databases that use drivers based on the Java Database Connectivity (JDBC) API use RJDBC. Once you have loaded the appropriate driver package, you can use the commands provided by DBI to access your database.
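As a sketch with DBI and RSQLite (the database file name and table name below are illustrative):
```
library(DBI)

con <- dbConnect(RSQLite::SQLite(), "my_database.sqlite")
dbListTables(con)                               # which tables exist?
poker <- dbGetQuery(con, "SELECT * FROM poker") # run a query, get a data frame
dbDisconnect(con)                               # close the connection
```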
D.1 Data Sets in Base R
-----------------------
R comes with many data sets preloaded in the `datasets` package, which comes with base R. These data sets are not very interesting, but they give you a chance to test code or make a point without having to load a data set from outside R. You can see a list of R’s data sets as well as a short description of each by running:
```
help(package = "datasets")
```
To use a data set, just type its name. Each data set is already presaved as an R object. For example:
```
iris
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5.0 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
```
However, R’s data sets are no substitute for your own data, which you can load into R from a wide variety of file formats. But before you load any data files into R, you’ll need to determine where your *working directory* is.
D.2 Working Directory
---------------------
Each time you open R, it links itself to a directory on your computer, which R calls the working directory. This is where R will look for files when you attempt to load them, and it is where R will save files when you save them. The location of your working directory will vary on different computers. To determine which directory R is using as your working directory, run:
```
getwd()
## "/Users/garrettgrolemund"
```
You can place data files straight into the folder that is your working directory, or you can move your working directory to where your data files are. You can move your working directory to any folder on your computer with the function `setwd`. Just give `setwd` the file path to your new working directory. I prefer to set my working directory to a folder dedicated to whichever project I am currently working on. That way I can keep all of my data, scripts, graphs, and reports in the same place. For example:
```
setwd("~/Users/garrettgrolemund/Documents/Book_Project")
```
If the file path does not begin with your root directory, R will assume that it begins at your current working directory.
You can also change your working directory by clicking on Session \> Set Working Directory \> Choose Directory in the RStudio menu bar. The Windows and Mac GUIs have similar options. If you start R from a UNIX command line (as on Linux machines), the working directory will be whichever directory you were in when you called R.
You can see what files are in your working directory with `list.files()`. If you see the file that you would like to open in your working directory, then you are ready to proceed. How you open files in your working directory will depend on which type of file you would like to open.
D.3 Plain\-text Files
---------------------
Plain\-text files are one of the most common ways to save data. They are very simple and can be read by many different computer programs—even the most basic text editors. For this reason, public data often comes as plain\-text files. For example, the Census Bureau, the Social Security Administration, and the Bureau of Labor Statistics all make their data available as plain\-text files.
Here’s how the royal flush data set from [R Objects](r-objects.html#r-objects) would appear as a plain\-text file (I’ve added a value column):
```
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
```
A plain\-text file stores a table of data in a text document. Each row of the table is saved on its own line, and a simple convention is used to separate the cells within a row. Often cells are separated by a comma, but they can also be separated by a tab, a pipe delimiter (i.e., `|` ), or any other character. Each file only uses one method of separating cells, which minimizes confusion. Within each cell, data appears as you’d expect to see it, as words and numbers.
All plain\-text files can be saved with the extension *.txt* (for text), but sometimes a file will receive a special extension that advertises how it separates data\-cell entries. Since entries in the data set mentioned earlier are separated with a comma, this file would be a *comma\-separated\-values* file and would usually be saved with the extension *.csv*.
### D.3\.1 read.table
To load a plain\-text file, use `read.table`. The first argument of `read.table` should be the name of your file (if it is in your working directory), or the file path to your file (if it is not in your working directory). If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.You can give `read.table` other arguments as well. The two most important are `sep` and `header`.
If the royal flush data set was saved as a file named *poker.csv* in your working directory, you could load it with:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely. R wants to convert every character string into a factor. This is R’s default behavior, but I think it is a mistake. Sometimes factors are useful. At other times, they’re clearly the wrong data type for the job. Also factors cause weird behavior, especially when you want to display data. This behavior can be surprising if you didn’t realize that R converted your data to factors. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, it is easy to do this.
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session, or rechange the global default by running:
```
options(stringsAsFactors = TRUE)
```
### D.3\.2 The read Family
R also comes with some prepackaged short cuts for `read.table`, shown in Table [D.1](dataio.html#tab:shortcuts).
Table D.1: R’s read functions. You can overwrite any of the default arguments as necessary.
| Function | Defaults | Use |
| --- | --- | --- |
| `read.table` | sep \= " ", header \= FALSE | General\-purpose read function |
| `read.csv` | sep \= “,”, header \= TRUE | Comma\-separated\-variable (CSV) files |
| `read.delim` | sep \= “”, header \= TRUE | Tab\-delimited files |
| `read.csv2` | sep \= “;”, header \= TRUE, dec \= “,” | CSV files with European decimal format |
| `read.delim2` | sep \= “”, header \= TRUE, dec \= “,” | Tab\-delimited files with European decimal format |
The first shortcut, `read.csv`, behaves just like `read.table` but automatically sets `sep = ","` and `header = TRUE`, which can save you some typing:
```
poker <- read.csv("poker.csv")
```
`read.delim` automatically sets `sep` to the tab character, which is very handy for reading tab delimited files. These are files where each cell is separated by a tab. `read.delim` also sets `header = TRUE` by default.
`read.delim2` and `read.csv2` exist for European R users. These functions tell R that the data uses a comma instead of a period to denote decimal places. (If you’re wondering how this works with CSV files, CSV2 files usually separate cells with a semicolon, not a comma.)
**Import Dataset**
You can also load plain text files with RStudio’s Import Dataset button, as described in [Loading Data](r-objects.html#loading). Import Dataset provides a GUI version of `read.table`.
### D.3\.3 read.fwf
One type of plain\-text file defies the pattern by using its layout to separate data cells. Each row is placed in its own line (as with other plain\-text files), and then each column begins at a specific number of characters from the lefthand side of the document. To achieve this, an arbitrary number of character spaces is added to the end of each entry to correctly position the next entry. These documents are known as *fixed\-width files* and usually end with the extension *.fwf*.
Here’s one way the royal flush data set could look as a fixed\-width file. In each row, the suit entry begins exactly 10 characters from the start of the line. It doesn’t matter how many characters appeared in the first cell of each row:
```
card suit value
ace spades 14
king spades 13
queen spades 12
jack spades 11
10 spades 10
```
Fixed\-width files look nice to human eyes (but no better than a tab\-delimited file); however, they can be difficult to work with. Perhaps because of this, R comes with a function for reading fixed\-width files, but no function for saving them. Unfortunately, US government agencies seem to like fixed\-width files, and you’ll likely encounter one or more during your career.
You can read fixed\-width files into R with the function `read.fwf`. The function takes the same arguments as `read.table` but requires an additional argument, `widths`, which should be a vector of numbers. Each \_i\_th entry of the `widths` vector should state the width (in characters) of the \_i\_th column of the data set.
If the aforementioned fixed\-width royal flush data was saved as *poker.fwf* in your working directory, you could read it with:
```
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), header = TRUE)
```
### D.3\.4 HTML Links
Many data files are made available on the Internet at their own web address. If you are connected to the Internet, you can open these files straight into R with `read.table`, `read.csv`, etc. You can pass a web address into the file name argument for any of R’s data\-reading functions. As a result, you could read in the poker data set from a web address like *<http://>…/poker.csv* with:
```
poker <- read.csv("http://.../poker.csv")
```
That’s obviously not a real address, but here’s something that would work—if you can manage to type it!
```
deck <- read.csv("https://gist.githubusercontent.com/garrettgman/9629323/raw/ee5dfc039fd581cb467cc69c226ea2524913c3d8/deck.csv")
```
Just make sure that the web address links directly to the file and not to a web page that links to the file. Usually, when you visit a data file’s web address, the file will begin to download or the raw data will appear in your browser window.
Note that websites that begin with \_<https://_> are secure websites, which means R may not be able to access the data provided at these links.
### D.3\.5 Saving Plain\-Text Files
Once your data is in R, you can save it to any file format that R supports. If you’d like to save it as a plain\-text file, you can use the \+write\+ family of functions. The three basic write functions appear in Table [D.2](dataio.html#tab:write). Use `write.csv` to save your data as a *.csv* file and `write.table` to save your data as a tab delimited document or a document with more exotic separators.
Table D.2: R saves data sets to plain\-text files with the write family of functions
| File format | Function and syntax |
| --- | --- |
| **.csv** | `write.csv(r_object, file = filepath, row.names = FALSE)` |
| **.csv** (with European decimal notation) | `write.csv2(r_object, file = filepath, row.names = FALSE)` |
| tab delimited | `write.table(r_object, file = filepath, sep = "\t", row.names=FALSE)` |
The first argument of each function is the R object that contains your data set. The `file` argument is the file name (including extension) that you wish to give the saved data. By default, each function will save your data into your working directory. However, you can supply a file path to the file argument. R will oblige by saving the file at the end of the file path. If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.
For example, you can save the (hypothetical) poker data frame to a subdirectory named *data* within your working directory with the command:
```
write.csv(poker, "data/poker.csv", row.names = FALSE)
```
Keep in mind that `write.csv` and `write.table` cannot create new directories on your computer. Each folder in the file path must exist before you try to save a file with it.
The `row.names` argument prevents R from saving the data frame’s row names as a column in the plain\-text file. You might have noticed that R automatically names each row in a data frame with a number. For example, each row in our poker data frame appears with a number next to it:
```
poker
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 10 spades 10
```
These row numbers are helpful, but can quickly accumulate if you start saving them. R will add a new set of numbers by default each time you read the file back in. Avoid this by always setting `row.names = FALSE` when you use a function in the `write` family.
### D.3\.6 Compressing Files
To compress a plain\-text file, surround the file name or file path with the function `bzfile`, `gzfile`, or `xzfile`. For example:
```
write.csv(poker, file = bzfile("data/poker.csv.bz2"), row.names = FALSE)
```
Each of these functions will compress the output with a different type of compression format, shown in Table [D.3](dataio.html#tab:compression).
Table D.3: R comes with three helper functions for compressing files
| Function | Compression type |
| --- | --- |
| `bzfile` | bzip2 |
| `gzfile` | gnu zip (gzip) |
| `xzfile` | xz compression |
It is a good idea to adjust your file’s extension to reflect the compression. R’s `read` functions will open plain\-text files compressed in any of these formats. For example, you could read a compressed file named *poker.csv.bz2* with:
```
read.csv("poker.csv.bz2")
```
or:
```
read.csv("data/poker.csv.bz2")
```
depending on where the file is saved.
### D.3\.1 read.table
To load a plain\-text file, use `read.table`. The first argument of `read.table` should be the name of your file (if it is in your working directory), or the file path to your file (if it is not in your working directory). If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.You can give `read.table` other arguments as well. The two most important are `sep` and `header`.
If the royal flush data set was saved as a file named *poker.csv* in your working directory, you could load it with:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely. R wants to convert every character string into a factor. This was R’s default behavior for many years, but I think it is a mistake. Sometimes factors are useful; at other times, they’re clearly the wrong data type for the job. Factors also cause weird behavior, especially when you want to display data, and that behavior can be surprising if you didn’t realize that R converted your data. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, this is easy to do. (And if you use R 4.0.0 or later, you can relax: `stringsAsFactors = FALSE` became the default in that release.)
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session, or change the global default back by running:
```
options(stringsAsFactors = TRUE)
```
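If a column has already been read in as a factor, you don’t need to reload the file; you can convert it in place. For example, assuming the `suit` column of `poker` became a factor:

```
# Convert a factor column back to plain character strings
poker$suit <- as.character(poker$suit)
```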
### D.3\.2 The read Family
R also comes with some prepackaged short cuts for `read.table`, shown in Table [D.1](dataio.html#tab:shortcuts).
Table D.1: R’s read functions. You can overwrite any of the default arguments as necessary.
| Function | Defaults | Use |
| --- | --- | --- |
| `read.table` | sep \= "", header \= FALSE | General\-purpose read function |
| `read.csv` | sep \= ",", header \= TRUE | Comma\-separated\-values (CSV) files |
| `read.delim` | sep \= "\t", header \= TRUE | Tab\-delimited files |
| `read.csv2` | sep \= ";", header \= TRUE, dec \= "," | CSV files with European decimal format |
| `read.delim2` | sep \= "\t", header \= TRUE, dec \= "," | Tab\-delimited files with European decimal format |
The first shortcut, `read.csv`, behaves just like `read.table` but automatically sets `sep = ","` and `header = TRUE`, which can save you some typing:
```
poker <- read.csv("poker.csv")
```
`read.delim` automatically sets `sep` to the tab character, which is very handy for reading tab delimited files. These are files where each cell is separated by a tab. `read.delim` also sets `header = TRUE` by default.
`read.delim2` and `read.csv2` exist for European R users. These functions tell R that the data uses a comma instead of a period to denote decimal places. (If you’re wondering how this works with CSV files, CSV2 files usually separate cells with a semicolon, not a comma.)
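For example, a European\-style file that writes one and a half as `1,5` and separates cells with semicolons could be read with `read.csv2`. This is a sketch; the file name is hypothetical:

```
# read.csv2 assumes sep = ";" and dec = ","
sales <- read.csv2("sales_eu.csv")
```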
**Import Dataset**
You can also load plain text files with RStudio’s Import Dataset button, as described in [Loading Data](r-objects.html#loading). Import Dataset provides a GUI version of `read.table`.
### D.3\.3 read.fwf
One type of plain\-text file defies the pattern by using its layout to separate data cells. Each row is placed in its own line (as with other plain\-text files), and then each column begins at a specific number of characters from the lefthand side of the document. To achieve this, an arbitrary number of character spaces is added to the end of each entry to correctly position the next entry. These documents are known as *fixed\-width files* and usually end with the extension *.fwf*.
Here’s one way the royal flush data set could look as a fixed\-width file. In each row, the suit entry begins exactly 10 characters from the start of the line. It doesn’t matter how many characters appeared in the first cell of each row:
```
card suit value
ace spades 14
king spades 13
queen spades 12
jack spades 11
10 spades 10
```
Fixed\-width files look nice to human eyes (but no better than a tab\-delimited file); however, they can be difficult to work with. Perhaps because of this, R comes with a function for reading fixed\-width files, but no function for saving them. Unfortunately, US government agencies seem to like fixed\-width files, and you’ll likely encounter one or more during your career.
You can read fixed\-width files into R with the function `read.fwf`. The function takes the same arguments as `read.table` but requires an additional argument, `widths`, which should be a vector of numbers. The *i*th entry of the `widths` vector should state the width (in characters) of the *i*th column of the data set.
If the aforementioned fixed\-width royal flush data was saved as *poker.fwf* in your working directory, you could read it with:
```
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), header = TRUE)
```
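`read.fwf` passes its extra arguments along to `read.table`, so if the header line of a fixed\-width file gives you trouble, one workaround is to skip it and supply the column names yourself:

```
# Skip the header line and name the columns manually
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), skip = 1,
  col.names = c("card", "suit", "value"))
```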
### D.3\.4 HTML Links
Many data files are made available on the Internet at their own web address. If you are connected to the Internet, you can open these files straight into R with `read.table`, `read.csv`, etc. You can pass a web address into the file name argument for any of R’s data\-reading functions. As a result, you could read in the poker data set from a web address like *http://…/poker.csv* with:
```
poker <- read.csv("http://.../poker.csv")
```
That’s obviously not a real address, but here’s something that would work—if you can manage to type it!
```
deck <- read.csv("https://gist.githubusercontent.com/garrettgman/9629323/raw/ee5dfc039fd581cb467cc69c226ea2524913c3d8/deck.csv")
```
Just make sure that the web address links directly to the file and not to a web page that links to the file. Usually, when you visit a data file’s web address, the file will begin to download or the raw data will appear in your browser window.
Note that web addresses that begin with *https://* are secure websites. Older versions of R could not read data from these links, but modern versions (R 3.2.0 and later) handle them without extra work.
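If you plan to reuse web data, it is often better to download a local copy once and read from that; your analysis will then keep working if the link changes or you go offline. A sketch, with a placeholder address as above:

```
# Save a local copy first, then read it like any other file
download.file("http://.../poker.csv", destfile = "poker.csv", mode = "wb")
poker <- read.csv("poker.csv")
```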
### D.3\.5 Saving Plain\-Text Files
Once your data is in R, you can save it to any file format that R supports. If you’d like to save it as a plain\-text file, you can use the `write` family of functions. The three basic write functions appear in Table [D.2](dataio.html#tab:write). Use `write.csv` to save your data as a *.csv* file and `write.table` to save your data as a tab\-delimited document or a document with more exotic separators.
Table D.2: R saves data sets to plain\-text files with the write family of functions
| File format | Function and syntax |
| --- | --- |
| **.csv** | `write.csv(r_object, file = filepath, row.names = FALSE)` |
| **.csv** (with European decimal notation) | `write.csv2(r_object, file = filepath, row.names = FALSE)` |
| tab delimited | `write.table(r_object, file = filepath, sep = "\t", row.names=FALSE)` |
The first argument of each function is the R object that contains your data set. The `file` argument is the file name (including extension) that you wish to give the saved data. By default, each function will save your data into your working directory. However, you can supply a file path to the file argument. R will oblige by saving the file at the end of the file path. If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.
For example, you can save the (hypothetical) poker data frame to a subdirectory named *data* within your working directory with the command:
```
write.csv(poker, "data/poker.csv", row.names = FALSE)
```
Keep in mind that `write.csv` and `write.table` cannot create new directories on your computer. Each folder in the file path must already exist before you try to save a file to it.
The `row.names` argument prevents R from saving the data frame’s row names as a column in the plain\-text file. You might have noticed that R automatically names each row in a data frame with a number. For example, each row in our poker data frame appears with a number next to it:
```
poker
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 10 spades 10
```
These row numbers are helpful, but can quickly accumulate if you start saving them. R will add a new set of numbers by default each time you read the file back in. Avoid this by always setting `row.names = FALSE` when you use a function in the `write` family.
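A quick round trip is a good way to check that nothing was lost in translation. A sketch, using the poker data frame from above:

```
# Write the data out, read it back in, and compare dimensions
write.csv(poker, "data/poker.csv", row.names = FALSE)
poker2 <- read.csv("data/poker.csv")
dim(poker) == dim(poker2)
## [1] TRUE TRUE
```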
### D.3\.6 Compressing Files
To compress a plain\-text file, surround the file name or file path with the function `bzfile`, `gzfile`, or `xzfile`. For example:
```
write.csv(poker, file = bzfile("data/poker.csv.bz2"), row.names = FALSE)
```
Each of these functions will compress the output with a different type of compression format, shown in Table [D.3](dataio.html#tab:compression).
Table D.3: R comes with three helper functions for compressing files
| Function | Compression type |
| --- | --- |
| `bzfile` | bzip2 |
| `gzfile` | gnu zip (gzip) |
| `xzfile` | xz compression |
It is a good idea to adjust your file’s extension to reflect the compression. R’s `read` functions will open plain\-text files compressed in any of these formats. For example, you could read a compressed file named *poker.csv.bz2* with:
```
read.csv("poker.csv.bz2")
```
or:
```
read.csv("data/poker.csv.bz2")
```
depending on where the file is saved.
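The same pattern works for the other two formats. For example, gzip tends to be the most widely supported choice:

```
# Compress with gzip instead of bzip2
write.csv(poker, file = gzfile("data/poker.csv.gz"), row.names = FALSE)
```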
D.4 R Files
-----------
R provides two file formats of its own for storing data, *.RDS* and *.RData*. RDS files can store a single R object, and RData files can store multiple R objects.
You can open an RDS file with `readRDS`. For example, if the royal flush data was saved as *poker.RDS*, you could open it with:
```
poker <- readRDS("poker.RDS")
```
Opening RData files is even easier. Simply run the function `load` with the file:
```
load("file.RData")
```
There’s no need to assign the output to an object. The R objects in your RData file will be loaded into your R session with their original names. RData files can contain multiple R objects, so loading one may read in multiple objects. `load` doesn’t tell you how many objects it is reading in, nor what their names are, so it pays to know a little about the RData file before you load it.
If worse comes to worst, you can keep an eye on the environment pane in RStudio as you load an RData file. It displays all of the objects that you have created or loaded during your R session. Another useful trick is to put parentheses around your load command like so, `(load("poker.RData"))`. This will cause R to print out the names of each object it loads from the file.
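In code, the trick looks like this (assuming *poker.RData* stores a single object named `poker`):

```
# The outer parentheses make R print the names that load() restores
(load("poker.RData"))
## [1] "poker"
```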
Both `readRDS` and `load` take a file path as their first argument, just like R’s other read and write functions. If your file is in your working directory, the file path will be the file name.
### D.4\.1 Saving R Files
You can save an R object like a data frame as either an RData file or an RDS file. RData files can store multiple R objects at once, but RDS files are the better choice because they foster reproducible code.
To save data as an RData object, use the `save` function. To save data as an RDS object, use the `saveRDS` function. In each case, the first argument should be the name of the R object you wish to save. You should then include a `file` argument giving the file name or file path you want to save the data set to.
For example, if you have three R objects, `a`, `b`, and `c`, you could save them all in the same RData file and then reload them in another R session:
```
a <- 1
b <- 2
c <- 3
save(a, b, c, file = "stuff.RData")
load("stuff.RData")
```
However, if you forget the names of your objects or give your file to someone else to use, it will be difficult to determine what was in the file—even after you (or they) load it. The user interface for RDS files is much clearer. You can save only one object per file, and whoever loads it can decide what they want to call their new data. As a bonus, you don’t have to worry about `load` overwriting any R objects that happened to have the same name as the objects you are loading:
```
saveRDS(a, file = "stuff.RDS")
a <- readRDS("stuff.RDS")
```
Saving your data as an R file offers some advantages over saving your data as a plain\-text file. R automatically compresses the file and will also save any R\-related metadata associated with your object. This can be handy if your data contains factors, dates and times, or class attributes. You won’t have to reparse this information into R the way you would if you converted everything to a text file.
On the other hand, R files cannot be read by many other programs, which makes them inefficient for sharing. They may also create a problem for long\-term storage if you don’t think you’ll have a copy of R when you reopen the files.
D.5 Excel Spreadsheets
----------------------
Microsoft Excel is a popular spreadsheet program that has become almost industry standard in the business world. There is a good chance that you will need to work with an Excel spreadsheet in R at least once in your career. You can read spreadsheets into R and also save R data as a spreadsheet in a variety of ways.
### D.5\.1 Export from Excel
The best method for moving data from Excel to R is to export the spreadsheet from Excel as a *.csv* or *.txt* file. Not only will R be able to read the text file, so will any other data analysis software. Text files are the lingua franca of data storage.
Exporting the data solves another difficulty as well. Excel uses proprietary formats and metadata that will not easily transfer into R. For example, a single Excel file can include multiple spreadsheets, each with its own columns and macros. When Excel exports the file as a *.csv* or *.txt*, it makes sure this format is transferred into a plain\-text file in the most appropriate way. R may not be able to manage the conversion as efficiently.
To export data from Excel, open the Excel spreadsheet and then go to Save As in the Microsoft Office Button menu. Then choose CSV in the Save as type box that appears and save the file. You can then read the file into R with the `read.csv` function.
### D.5\.2 Copy and Paste
You can also copy portions of an Excel spreadsheet and paste them into R. To do this, open the spreadsheet and select the cells you wish to read into R. Then select Edit \> Copy in the menu bar—or use a keyboard shortcut—to copy the cells to your clipboard.
On most operating systems, you can read the data stored in your clipboard into R with:
```
read.table("clipboard")
```
On Macs you will need to use:
```
read.table(pipe("pbpaste"))
```
If the cells contain values with spaces in them, this will disrupt `read.table`. You can try another `read` function (or just formally export the data from Excel) before reading it into R.
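If you move between computers often, you can wrap the two commands in a small helper. This is a sketch, not a battle\-tested function; `read_clipboard` is a name I made up, and on Linux you would need a different tool (such as `xclip`):

```
# Read clipboard data on Windows or macOS; "..." passes options to read.table
read_clipboard <- function(...) {
  if (.Platform$OS.type == "windows") {
    read.table("clipboard", ...)
  } else {
    read.table(pipe("pbpaste"), ...)
  }
}
read_clipboard(header = TRUE)
```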
### D.5\.3 XLConnect
Many packages have been written to help you read Excel files directly into R. Unfortunately, many of these packages do not work on all operating systems. Others have been made out of date by the *.xlsx* file format. One package that does work on all operating systems (and gets good reviews) is the XLConnect package. To use it, you’ll need to install and load the package:
```
install.packages("XLConnect")
library(XLConnect)
```
XLConnect relies on Java to be platform independent. So when you first open XLConnect, RStudio may ask to download a Java Runtime Environment if you do not already have one.
### D.5\.4 Reading Spreadsheets
You can use XLConnect to read in an Excel spreadsheet with either a one\- or a two\-step process. I’ll start with the two\-step process. First, load an Excel workbook with `loadWorkbook`. `loadWorkbook` can load both *.xls* and *.xlsx* files. It takes one argument: the file path to your Excel workbook (this will be the name of the workbook if it is saved in your working directory):
```
wb <- loadWorkbook("file.xlsx")
```
Next, read a spreadsheet from the workbook with `readWorksheet`, which takes several arguments. The first argument should be a workbook object created with `loadWorkbook`. The next argument, `sheet`, should be the name of the spreadsheet in the workbook that you would like to read into R. This will be the name that appears on the bottom tab of the spreadsheet. You can also give `sheet` a number, which specifies the sheet that you want to read in (one for the first sheet, two for the second, and so on).
`readWorksheet` then takes four arguments that specify a bounding box of cells to read in: `startRow`, `startCol`, `endRow`, and `endCol`. Use `startRow` and `startCol` to describe the cell in the top\-left corner of the bounding box of cells that you wish to read in. Use `endRow` and `endCol` to specify the cell in the bottom\-right corner of the bounding box. Each of these arguments takes a number. If you do not supply bounding arguments, `readWorksheet` will read in the rectangular region of cells in the spreadsheet that appears to contain data. `readWorksheet` will assume that this region contains a header row, but you can tell it otherwise with `header = FALSE`.
So to read in the first worksheet from `wb`, you could use:
```
sheet1 <- readWorksheet(wb, sheet = 1, startRow = 0, startCol = 0,
endRow = 100, endCol = 3)
```
R will save the output as a data frame. All of the arguments in `readWorksheet` except the first are vectorized, so you can use it to read in multiple sheets from the same workbook at once (or multiple cell regions from a single worksheet). In this case, `readWorksheet` will return a list of data frames.
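For example, since `sheet` is vectorized, a single call can read several sheets at once (the sheet names here are hypothetical):

```
# Returns a list of two data frames, one per sheet
sheets <- readWorksheet(wb, sheet = c("Sheet 1", "Sheet 2"))
```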
You can combine these two steps with `readWorksheetFromFile`. It takes the file argument from `loadWorkbook` and combines it with the arguments from `readWorksheet`. You can use it to read one or more sheets straight from an Excel file:
```
sheet1 <- readWorksheetFromFile("file.xlsx", sheet = 1, startRow = 0,
startCol = 0, endRow = 100, endCol = 3)
```
### D.5\.5 Writing Spreadsheets
Writing to an Excel spreadsheet is a four\-step process. First, you need to set up a workbook object with `loadWorkbook`. This works just as before, except if you are not using an existing Excel file, you should add the argument `create = TRUE`. XLConnect will create a blank workbook. When you save it, XLConnect will write it to the file location that you specified here with `loadWorkbook`:
```
wb <- loadWorkbook("file.xlsx", create = TRUE)
```
Next, you need to create a worksheet inside your workbook object with `createSheet`. Tell `createSheet` which workbook to place the sheet in and which name to use for the sheet.
```
createSheet(wb, "Sheet 1")
```
Then you can save your data frame or matrix to the sheet with `writeWorksheet`. The first argument of `writeWorksheet`, `object`, is the workbook to write the data to. The second argument, `data`, is the data to write. The third argument, `sheet`, is the name of the sheet to write it to. The next two arguments, `startRow` and `startCol`, tell R where in the spreadsheet to place the upper\-left cell of the new data. These arguments each default to 1\. Finally, you can use `header` to tell R whether your column names should be written with the data:
```
writeWorksheet(wb, data = poker, sheet = "Sheet 1")
```
Once you have finished adding sheets and data to your workbook, you can save it by running `saveWorkbook` on the workbook object. R will save the workbook to the file name or path you provided in `loadWorkbook`. If this leads to an existing Excel file, R will overwrite it. If it leads to a new file, R will create it.
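In code, the final step is a single call:

```
# Write the workbook (and all of its sheets) to the file given in loadWorkbook
saveWorkbook(wb)
```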
You can also collapse these steps into a single call with `writeWorksheetToFile`, like this:
```
writeWorksheetToFile("file.xlsx", data = poker, sheet = "Sheet 1",
startRow = 1, startCol = 1)
```
The XLConnect package also lets you do more advanced things with Excel spreadsheets, such as writing to a named region in a spreadsheet, working with formulas, and assigning styles to cells. You can read about these features in XLConnect’s vignette, which is accessible by loading XLConnect and then running:
```
vignette("XLConnect")
```
D.6 Loading Files from Other Programs
-------------------------------------
You should follow the same advice I gave you for Excel files whenever you wish to work with file formats native to other programs: open the file in the original program and export the data as a plain\-text file, usually a CSV. This will ensure the most faithful transcription of the data in the file, and it will usually give you the most options for customizing how the data is transcribed.
Sometimes, however, you may acquire a file but not the program it came from. As a result, you won’t be able to open the file in its native program and export it as a text file. In this case, you can use one of the functions in Table [D.4](dataio.html#tab:others) to open the file. These functions mostly come in R’s `foreign` package. Each attempts to read in a different file format with as few hiccups as possible.
Table D.4: A number of functions will attempt to read the file types of other data\-analysis programs
| File format | Function | Library |
| --- | --- | --- |
| ESRI ArcGIS | `read.shapefile` | shapefiles |
| Matlab | `readMat` | R.matlab |
| minitab | `read.mtp` | foreign |
| SAS (permanent data set) | `read.ssd` | foreign |
| SAS (XPORT format) | `read.xport` | foreign |
| SPSS | `read.spss` | foreign |
| Stata | `read.dta` | foreign |
| Systat | `read.systat` | foreign |
### D.6\.1 Connecting to Databases
You can also use R to connect to a database and read in data.
Use the RODBC package to connect to databases through an ODBC connection.
Use the DBI package to connect to databases through individual drivers. The DBI package provides a common syntax for working with different databases. You will have to download a database\-specific package to use in conjunction with DBI. These packages provide the API for the native drivers of different database programs. For MySQL use RMySQL, for SQLite use RSQLite, for Oracle use ROracle, for PostgreSQL use RPostgreSQL, and for databases that use drivers based on the Java Database Connectivity (JDBC) API use RJDBC. Once you have loaded the appropriate driver package, you can use the commands provided by DBI to access your database.
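As a taste of the DBI workflow, here is a minimal sketch using RSQLite (the database file *poker.db* and its table are hypothetical):

```
library(DBI)
library(RSQLite)

# Open a connection, list the tables, read one into a data frame, clean up
con <- dbConnect(RSQLite::SQLite(), "poker.db")
dbListTables(con)
poker <- dbReadTable(con, "poker")
dbDisconnect(con)
```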
### D.3\.1 read.table
To load a plain\-text file, use `read.table`. The first argument of `read.table` should be the name of your file (if it is in your working directory), or the file path to your file (if it is not in your working directory). If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.You can give `read.table` other arguments as well. The two most important are `sep` and `header`.
If the royal flush data set was saved as a file named *poker.csv* in your working directory, you could load it with:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely. R wants to convert every character string into a factor. This is R’s default behavior, but I think it is a mistake. Sometimes factors are useful. At other times, they’re clearly the wrong data type for the job. Also factors cause weird behavior, especially when you want to display data. This behavior can be surprising if you didn’t realize that R converted your data to factors. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, it is easy to do this.
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session, or rechange the global default by running:
```
options(stringsAsFactors = TRUE)
```
### D.3\.2 The read Family
R also comes with some prepackaged short cuts for `read.table`, shown in Table [D.1](dataio.html#tab:shortcuts).
Table D.1: R’s read functions. You can overwrite any of the default arguments as necessary.
| Function | Defaults | Use |
| --- | --- | --- |
| `read.table` | sep \= " ", header \= FALSE | General\-purpose read function |
| `read.csv` | sep \= “,”, header \= TRUE | Comma\-separated\-variable (CSV) files |
| `read.delim` | sep \= “”, header \= TRUE | Tab\-delimited files |
| `read.csv2` | sep \= “;”, header \= TRUE, dec \= “,” | CSV files with European decimal format |
| `read.delim2` | sep \= “”, header \= TRUE, dec \= “,” | Tab\-delimited files with European decimal format |
The first shortcut, `read.csv`, behaves just like `read.table` but automatically sets `sep = ","` and `header = TRUE`, which can save you some typing:
```
poker <- read.csv("poker.csv")
```
`read.delim` automatically sets `sep` to the tab character, which is very handy for reading tab delimited files. These are files where each cell is separated by a tab. `read.delim` also sets `header = TRUE` by default.
`read.delim2` and `read.csv2` exist for European R users. These functions tell R that the data uses a comma instead of a period to denote decimal places. (If you’re wondering how this works with CSV files, CSV2 files usually separate cells with a semicolon, not a comma.)
**Import Dataset**
You can also load plain text files with RStudio’s Import Dataset button, as described in [Loading Data](r-objects.html#loading). Import Dataset provides a GUI version of `read.table`.
### D.3\.3 read.fwf
One type of plain\-text file defies the pattern by using its layout to separate data cells. Each row is placed in its own line (as with other plain\-text files), and then each column begins at a specific number of characters from the lefthand side of the document. To achieve this, an arbitrary number of character spaces is added to the end of each entry to correctly position the next entry. These documents are known as *fixed\-width files* and usually end with the extension *.fwf*.
Here’s one way the royal flush data set could look as a fixed\-width file. In each row, the suit entry begins exactly 10 characters from the start of the line. It doesn’t matter how many characters appeared in the first cell of each row:
```
card suit value
ace spades 14
king spades 13
queen spades 12
jack spades 11
10 spades 10
```
Fixed\-width files look nice to human eyes (but no better than a tab\-delimited file); however, they can be difficult to work with. Perhaps because of this, R comes with a function for reading fixed\-width files, but no function for saving them. Unfortunately, US government agencies seem to like fixed\-width files, and you’ll likely encounter one or more during your career.
You can read fixed\-width files into R with the function `read.fwf`. The function takes the same arguments as `read.table` but requires an additional argument, `widths`, which should be a vector of numbers. Each \_i\_th entry of the `widths` vector should state the width (in characters) of the \_i\_th column of the data set.
If the aforementioned fixed\-width royal flush data was saved as *poker.fwf* in your working directory, you could read it with:
```
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), header = TRUE)
```
### D.3\.4 HTML Links
Many data files are made available on the Internet at their own web address. If you are connected to the Internet, you can open these files straight into R with `read.table`, `read.csv`, etc. You can pass a web address into the file name argument for any of R’s data\-reading functions. As a result, you could read in the poker data set from a web address like *<http://>…/poker.csv* with:
```
poker <- read.csv("http://.../poker.csv")
```
That’s obviously not a real address, but here’s something that would work—if you can manage to type it!
```
deck <- read.csv("https://gist.githubusercontent.com/garrettgman/9629323/raw/ee5dfc039fd581cb467cc69c226ea2524913c3d8/deck.csv")
```
Just make sure that the web address links directly to the file and not to a web page that links to the file. Usually, when you visit a data file’s web address, the file will begin to download or the raw data will appear in your browser window.
Note that websites that begin with \_<https://_> are secure websites, which means R may not be able to access the data provided at these links.
### D.3\.5 Saving Plain\-Text Files
Once your data is in R, you can save it to any file format that R supports. If you’d like to save it as a plain\-text file, you can use the \+write\+ family of functions. The three basic write functions appear in Table [D.2](dataio.html#tab:write). Use `write.csv` to save your data as a *.csv* file and `write.table` to save your data as a tab delimited document or a document with more exotic separators.
Table D.2: R saves data sets to plain\-text files with the write family of functions
| File format | Function and syntax |
| --- | --- |
| **.csv** | `write.csv(r_object, file = filepath, row.names = FALSE)` |
| **.csv** (with European decimal notation) | `write.csv2(r_object, file = filepath, row.names = FALSE)` |
| tab delimited | `write.table(r_object, file = filepath, sep = "\t", row.names=FALSE)` |
The first argument of each function is the R object that contains your data set. The `file` argument is the file name (including extension) that you wish to give the saved data. By default, each function will save your data into your working directory. However, you can supply a file path to the file argument. R will oblige by saving the file at the end of the file path. If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.
For example, you can save the (hypothetical) poker data frame to a subdirectory named *data* within your working directory with the command:
```
write.csv(poker, "data/poker.csv", row.names = FALSE)
```
Keep in mind that `write.csv` and `write.table` cannot create new directories on your computer. Each folder in the file path must exist before you try to save a file with it.
The `row.names` argument prevents R from saving the data frame’s row names as a column in the plain\-text file. You might have noticed that R automatically names each row in a data frame with a number. For example, each row in our poker data frame appears with a number next to it:
```
poker
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 10 spades 10
```
These row numbers are helpful, but can quickly accumulate if you start saving them. R will add a new set of numbers by default each time you read the file back in. Avoid this by always setting `row.names = FALSE` when you use a function in the `write` family.
### D.3\.6 Compressing Files
To compress a plain\-text file, surround the file name or file path with the function `bzfile`, `gzfile`, or `xzfile`. For example:
```
write.csv(poker, file = bzfile("data/poker.csv.bz2"), row.names = FALSE)
```
Each of these functions will compress the output with a different type of compression format, shown in Table [D.3](dataio.html#tab:compression).
Table D.3: R comes with three helper functions for compressing files
| Function | Compression type |
| --- | --- |
| `bzfile` | bzip2 |
| `gzfile` | gnu zip (gzip) |
| `xzfile` | xz compression |
It is a good idea to adjust your file’s extension to reflect the compression. R’s `read` functions will open plain\-text files compressed in any of these formats. For example, you could read a compressed file named *poker.csv.bz2* with:
```
read.csv("poker.csv.bz2")
```
or:
```
read.csv("data/poker.csv.bz2")
```
depending on where the file is saved.
D.4 R Files
-----------
R provides two file formats of its own for storing data, *.RDS* and *.RData*. RDS files can store a single R object, and RData files can store multiple R objects.
You can open a RDS file with `readRDS`. For example, if the royal flush data was saved as *poker.RDS*, you could open it with:
```
poker <- readRDS("poker.RDS")
```
Opening RData files is even easier. Simply run the function `load` with the file:
```
load("file.RData")
```
There’s no need to assign the output to an object. The R objects in your RData file will be loaded into your R session with their original names. RData files can contain multiple R objects, so loading one may read in multiple objects. `load` doesn’t tell you how many objects it is reading in, nor what their names are, so it pays to know a little about the RData file before you load it.
If worse comes to worst, you can keep an eye on the environment pane in RStudio as you load an RData file. It displays all of the objects that you have created or loaded during your R session. Another useful trick is to put parentheses around your load command like so, `(load("poker.RData"))`. This will cause R to print out the names of each object it loads from the file.
Both `readRDS` and `load` take a file path as their first argument, just like R’s other read and write functions. If your file is in your working directory, the file path will be the file name.
### D.4\.1 Saving R Files
You can save an R object like a data frame as either an RData file or an RDS file. RData files can store multiple R objects at once, but RDS files are the better choice because they foster reproducible code.
To save data as an RData object, use the `save` function. To save data as a RDS object, use the `saveRDS` function. In each case, the first argument should be the name of the R object you wish to save. You should then include a file argument that has the file name or file path you want to save the data set to.
For example, if you have three R objects, `a`, `b`, and `c`, you could save them all in the same RData file and then reload them in another R session:
```
a <- 1
b <- 2
c <- 3
save(a, b, c, file = "stuff.RData")
load("stuff.RData")
```
However, if you forget the names of your objects or give your file to someone else to use, it will be difficult to determine what was in the file—even after you (or they) load it. The user interface for RDS files is much more clear. You can save only one object per file, and whoever loads it can decide what they want to call their new data. As a bonus, you don’t have to worry about `load` overwriting any R objects that happened to have the same name as the objects you are loading:
```
saveRDS(a, file = "stuff.RDS")
a <- readRDS("stuff.RDS")
```
Saving your data as an R file offers some advantages over saving your data as a plain\-text file. R automatically compresses the file and will also save any R\-related metadata associated with your object. This can be handy if your data contains factors, dates and times, or class attributes. You won’t have to reparse this information into R the way you would if you converted everything to a text file.
On the other hand, R files cannot be read by many other programs, which makes them inefficient for sharing. They may also create a problem for long\-term storage if you don’t think you’ll have a copy of R when you reopen the files.
D.5 Excel Spreadsheets
----------------------
Microsoft Excel is a popular spreadsheet program that has become almost an industry standard in the business world. There is a good chance that you will need to work with an Excel spreadsheet in R at least once in your career. You can read spreadsheets into R and also save R data as a spreadsheet in a variety of ways.
### D.5\.1 Export from Excel
The best method for moving data from Excel to R is to export the spreadsheet from Excel as a *.csv* or *.txt* file. Not only will R be able to read the text file; so will nearly any other data\-analysis program. Text files are the lingua franca of data storage.
Exporting the data solves another difficulty as well. Excel uses proprietary formats and metadata that will not easily transfer into R. For example, a single Excel file can include multiple spreadsheets, each with their own columns and macros. When Excel exports the file as a *.csv* or *.txt*, it makes sure this format is transferred into a plain\-text file in the most appropriate way. R may not be able to manage the conversion as efficiently.
To export data from Excel, open the Excel spreadsheet and then go to Save As in the Microsoft Office Button menu. Then choose CSV in the Save as type box that appears and save the file. You can then read the file into R with the `read.csv` function.
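Once exported, the file reads in like any other CSV. For example, with a hypothetical export named *sales.csv* saved in your working directory:
```
sales <- read.csv("sales.csv")
```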
### D.5\.2 Copy and Paste
You can also copy portions of an Excel spreadsheet and paste them into R. To do this, open the spreadsheet and select the cells you wish to read into R. Then select Edit \> Copy in the menu bar—or use a keyboard shortcut—to copy the cells to your clipboard.
On most operating systems, you can read the data stored in your clipboard into R with:
```
read.table("clipboard")
```
On Macs you will need to use:
```
read.table(pipe("pbpaste"))
```
If the cells contain values with spaces in them, this will trip up `read.table`. You can try another `read` function, or just formally export the data from Excel before reading it into R.
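Excel typically places copied cells on the clipboard as tab\-separated text, so explicitly setting the separator usually sidesteps the problem with embedded spaces. A sketch (the `header` argument depends on whether you copied a header row):
```
# Windows and Linux
cells <- read.table("clipboard", sep = "\t", header = TRUE)

# macOS
cells <- read.table(pipe("pbpaste"), sep = "\t", header = TRUE)
```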
### D.5\.3 XLConnect
Many packages have been written to help you read Excel files directly into R. Unfortunately, many of these packages do not work on all operating systems. Others have been made out of date by the *.xlsx* file format. One package that does work on all operating systems (and gets good reviews) is the XLConnect package. To use it, you’ll need to install and load the package:
```
install.packages("XLConnect")
library(XLConnect)
```
XLConnect relies on Java to remain platform independent, so when you first load XLConnect, RStudio may ask to download a Java Runtime Environment if you do not already have one.
### D.5\.4 Reading Spreadsheets
You can use XLConnect to read in an Excel spreadsheet with either a one\- or a two\-step process. I’ll start with the two\-step process. First, load an Excel workbook with `loadWorkbook`. `loadWorkbook` can load both *.xls* and *.xlsx* files. It takes one argument: the file path to your Excel workbook (this will be the name of the workbook if it is saved in your working directory):
```
wb <- loadWorkbook("file.xlsx")
```
Next, read a spreadsheet from the workbook with `readWorksheet`, which takes several arguments. The first argument should be a workbook object created with `loadWorkbook`. The next argument, `sheet`, should be the name of the spreadsheet in the workbook that you would like to read into R. This will be the name that appears on the bottom tab of the spreadsheet. You can also give `sheet` a number, which specifies the sheet that you want to read in (one for the first sheet, two for the second, and so on).
`readWorksheet` then takes four arguments that specify a bounding box of cells to read in: `startRow`, `startCol`, `endRow`, and `endCol`. Use `startRow` and `startCol` to describe the cell in the top\-left corner of the bounding box of cells that you wish to read in. Use `endRow` and `endCol` to specify the cell in the bottom\-right corner of the bounding box. Each of these arguments takes a number. If you do not supply bounding arguments, `readWorksheet` will read in the rectangular region of cells in the spreadsheet that appears to contain data. `readWorksheet` will assume that this region contains a header row, but you can tell it otherwise with `header = FALSE`.
So to read in the first worksheet from `wb`, you could use:
```
sheet1 <- readWorksheet(wb, sheet = 1, startRow = 0, startCol = 0,
endRow = 100, endCol = 3)
```
R will save the output as a data frame. All of the arguments in `readWorksheet` except the first are vectorized, so you can use it to read in multiple sheets from the same workbook at once (or multiple cell regions from a single worksheet). In this case, `readWorksheet` will return a list of data frames.
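For instance, you could pull two sheets out of the same workbook in a single call. A sketch with hypothetical sheet names:
```
# Returns a named list of two data frames, one per sheet
sheets <- readWorksheet(wb, sheet = c("prices", "volumes"))
```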
You can combine these two steps with `readWorksheetFromFile`. It takes the file argument from `loadWorkbook` and combines it with the arguments from `readWorksheet`. You can use it to read one or more sheets straight from an Excel file:
```
sheet1 <- readWorksheetFromFile("file.xlsx", sheet = 1, startRow = 0,
startCol = 0, endRow = 100, endCol = 3)
```
### D.5\.5 Writing Spreadsheets
Writing to an Excel spreadsheet is a four\-step process. First, you need to set up a workbook object with `loadWorkbook`. This works just as before, except if you are not using an existing Excel file, you should add the argument `create = TRUE`. XLConnect will create a blank workbook. When you save it, XLConnect will write it to the file location that you specified here with `loadWorkbook`:
```
wb <- loadWorkbook("file.xlsx", create = TRUE)
```
Next, you need to create a worksheet inside your workbook object with `createSheet`. Tell `createSheet` which workbook to place the sheet in and which name to use for the sheet:
```
createSheet(wb, "Sheet 1")
```
Then you can save your data frame or matrix to the sheet with `writeWorksheet`. The first argument of `writeWorksheet`, `object`, is the workbook to write the data to. The second argument, `data`, is the data to write. The third argument, `sheet`, is the name of the sheet to write it to. The next two arguments, `startRow` and `startCol`, tell R where in the spreadsheet to place the upper\-left cell of the new data. These arguments each default to 1\. Finally, you can use `header` to tell R whether your column names should be written with the data:
```
writeWorksheet(wb, data = poker, sheet = "Sheet 1")
```
Once you have finished adding sheets and data to your workbook, you can save it by running `saveWorkbook` on the workbook object. R will save the workbook to the file name or path you provided in `loadWorkbook`. If this leads to an existing Excel file, R will overwrite it. If it leads to a new file, R will create it.
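Putting the four steps together (assuming the `poker` data frame and a hypothetical output file named *results.xlsx*):
```
wb <- loadWorkbook("results.xlsx", create = TRUE)
createSheet(wb, "poker")
writeWorksheet(wb, data = poker, sheet = "poker")
saveWorkbook(wb) # writes results.xlsx to disk
```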
You can also collapse these steps into a single call with `writeWorksheetToFile`, like this:
```
writeWorksheetToFile("file.xlsx", data = poker, sheet = "Sheet 1",
startRow = 1, startCol = 1)
```
The XLConnect package also lets you do more advanced things with Excel spreadsheets, such as writing to a named region in a spreadsheet, working with formulas, and assigning styles to cells. You can read about these features in XLConnect’s vignette, which is accessible by loading XLConnect and then running:
```
vignette("XLConnect")
```
D.6 Loading Files from Other Programs
-------------------------------------
You should follow the same advice I gave you for Excel files whenever you wish to work with file formats native to other programs: open the file in the original program and export the data as a plain\-text file, usually a CSV. This will ensure the most faithful transcription of the data in the file, and it will usually give you the most options for customizing how the data is transcribed.
Sometimes, however, you may acquire a file but not the program it came from. As a result, you won’t be able to open the file in its native program and export it as a text file. In this case, you can use one of the functions in Table [D.4](dataio.html#tab:others) to open the file. These functions mostly come in R’s `foreign` package. Each attempts to read in a different file format with as few hiccups as possible.
Table D.4: A number of functions will attempt to read the file types of other data\-analysis programs
| File format | Function | Library |
| --- | --- | --- |
| ESRI ArcGIS | `read.shapefile` | shapefiles |
| MATLAB | `readMat` | R.matlab |
| Minitab | `read.mtp` | foreign |
| SAS (permanent data set) | `read.ssd` | foreign |
| SAS (XPORT format) | `read.xport` | foreign |
| SPSS | `read.spss` | foreign |
| Stata | `read.dta` | foreign |
| Systat | `read.systat` | foreign |
### D.6\.1 Connecting to Databases
You can also use R to connect to a database and read in data.
Use the RODBC package to connect to databases through an ODBC connection.
Use the DBI package to connect to databases through individual drivers. The DBI package provides a common syntax for working with different databases. You will have to download a database\-specific package to use in conjunction with DBI. These packages provide the API for the native drivers of different database programs. For MySQL use RMySQL, for SQLite use RSQLite, for Oracle use ROracle, for PostgreSQL use RPostgreSQL, and for databases that use drivers based on the Java Database Connectivity (JDBC) API use RJDBC. Once you have loaded the appropriate driver package, you can use the commands provided by DBI to access your database.
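As a minimal sketch of the DBI workflow, here is how you might write the `poker` data frame to a SQLite database and query it back, using the RSQLite driver package and a hypothetical database file:
```
library(DBI)

con <- dbConnect(RSQLite::SQLite(), "poker.sqlite")
dbWriteTable(con, "poker", poker) # copy a data frame into the database
dbGetQuery(con, "SELECT * FROM poker WHERE value >= 12")
dbDisconnect(con) # always close the connection when you are done
```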
D.1 Data Sets in Base R
-----------------------
R comes with many data sets preloaded in the `datasets` package, which comes with base R. These data sets are not very interesting, but they give you a chance to test code or make a point without having to load a data set from outside R. You can see a list of R’s data sets as well as a short description of each by running:
```
help(package = "datasets")
```
To use a data set, just type its name. Each data set is already presaved as an R object. For example:
```
iris
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5.0 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
```
However, R’s data sets are no substitute for your own data, which you can load into R from a wide variety of file formats. But before you load any data files into R, you’ll need to determine where your *working directory* is.
D.2 Working Directory
---------------------
Each time you open R, it links itself to a directory on your computer, which R calls the working directory. This is where R will look for files when you attempt to load them, and it is where R will save files when you save them. The location of your working directory will vary on different computers. To determine which directory R is using as your working directory, run:
```
getwd()
## "/Users/garrettgrolemund"
```
You can place data files straight into the folder that is your working directory, or you can move your working directory to where your data files are. You can move your working directory to any folder on your computer with the function `setwd`. Just give `setwd` the file path to your new working directory. I prefer to set my working directory to a folder dedicated to whichever project I am currently working on. That way I can keep all of my data, scripts, graphs, and reports in the same place. For example:
```
setwd("~/Users/garrettgrolemund/Documents/Book_Project")
```
If the file path does not begin with your root directory, R will assume that it begins at your current working directory.
You can also change your working directory by clicking on Session \> Set Working Directory \> Choose Directory in the RStudio menu bar. The Windows and Mac GUIs have similar options. If you start R from a UNIX command line (as on Linux machines), the working directory will be whichever directory you were in when you called R.
You can see what files are in your working directory with `list.files()`. If you see the file that you would like to open in your working directory, then you are ready to proceed. How you open files in your working directory will depend on which type of file you would like to open.
D.3 Plain\-text Files
---------------------
Plain\-text files are one of the most common ways to save data. They are very simple and can be read by many different computer programs—even the most basic text editors. For this reason, public data often comes as plain\-text files. For example, the Census Bureau, the Social Security Administration, and the Bureau of Labor Statistics all make their data available as plain\-text files.
Here’s how the royal flush data set from [R Objects](r-objects.html#r-objects) would appear as a plain\-text file (I’ve added a value column):
```
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
```
A plain\-text file stores a table of data in a text document. Each row of the table is saved on its own line, and a simple convention is used to separate the cells within a row. Often cells are separated by a comma, but they can also be separated by a tab, a pipe delimiter (i.e., `|` ), or any other character. Each file only uses one method of separating cells, which minimizes confusion. Within each cell, data appears as you’d expect to see it, as words and numbers.
All plain\-text files can be saved with the extension *.txt* (for text), but sometimes a file will receive a special extension that advertises how it separates data\-cell entries. Since entries in the data set mentioned earlier are separated with a comma, this file would be a *comma\-separated\-values* file and would usually be saved with the extension *.csv*.
### D.3\.1 read.table
To load a plain\-text file, use `read.table`. The first argument of `read.table` should be the name of your file (if it is in your working directory), or the file path to your file (if it is not in your working directory). If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.You can give `read.table` other arguments as well. The two most important are `sep` and `header`.
If the royal flush data set was saved as a file named *poker.csv* in your working directory, you could load it with:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely. R wants to convert every character string into a factor. This is R’s default behavior, but I think it is a mistake. Sometimes factors are useful. At other times, they’re clearly the wrong data type for the job. Also factors cause weird behavior, especially when you want to display data. This behavior can be surprising if you didn’t realize that R converted your data to factors. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, it is easy to do this.
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session, or rechange the global default by running:
```
options(stringsAsFactors = TRUE)
```
### D.3\.2 The read Family
R also comes with some prepackaged short cuts for `read.table`, shown in Table [D.1](dataio.html#tab:shortcuts).
Table D.1: R’s read functions. You can overwrite any of the default arguments as necessary.
| Function | Defaults | Use |
| --- | --- | --- |
| `read.table` | sep \= " ", header \= FALSE | General\-purpose read function |
| `read.csv` | sep \= “,”, header \= TRUE | Comma\-separated\-variable (CSV) files |
| `read.delim` | sep \= “”, header \= TRUE | Tab\-delimited files |
| `read.csv2` | sep \= “;”, header \= TRUE, dec \= “,” | CSV files with European decimal format |
| `read.delim2` | sep \= “”, header \= TRUE, dec \= “,” | Tab\-delimited files with European decimal format |
The first shortcut, `read.csv`, behaves just like `read.table` but automatically sets `sep = ","` and `header = TRUE`, which can save you some typing:
```
poker <- read.csv("poker.csv")
```
`read.delim` automatically sets `sep` to the tab character, which is very handy for reading tab delimited files. These are files where each cell is separated by a tab. `read.delim` also sets `header = TRUE` by default.
`read.delim2` and `read.csv2` exist for European R users. These functions tell R that the data uses a comma instead of a period to denote decimal places. (If you’re wondering how this works with CSV files, CSV2 files usually separate cells with a semicolon, not a comma.)
**Import Dataset**
You can also load plain text files with RStudio’s Import Dataset button, as described in [Loading Data](r-objects.html#loading). Import Dataset provides a GUI version of `read.table`.
### D.3\.3 read.fwf
One type of plain\-text file defies the pattern by using its layout to separate data cells. Each row is placed in its own line (as with other plain\-text files), and then each column begins at a specific number of characters from the lefthand side of the document. To achieve this, an arbitrary number of character spaces is added to the end of each entry to correctly position the next entry. These documents are known as *fixed\-width files* and usually end with the extension *.fwf*.
Here’s one way the royal flush data set could look as a fixed\-width file. In each row, the suit entry begins exactly 10 characters from the start of the line. It doesn’t matter how many characters appeared in the first cell of each row:
```
card suit value
ace spades 14
king spades 13
queen spades 12
jack spades 11
10 spades 10
```
Fixed\-width files look nice to human eyes (but no better than a tab\-delimited file); however, they can be difficult to work with. Perhaps because of this, R comes with a function for reading fixed\-width files, but no function for saving them. Unfortunately, US government agencies seem to like fixed\-width files, and you’ll likely encounter one or more during your career.
You can read fixed\-width files into R with the function `read.fwf`. The function takes the same arguments as `read.table` but requires an additional argument, `widths`, which should be a vector of numbers. Each \_i\_th entry of the `widths` vector should state the width (in characters) of the \_i\_th column of the data set.
If the aforementioned fixed\-width royal flush data was saved as *poker.fwf* in your working directory, you could read it with:
```
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), header = TRUE)
```
### D.3\.4 HTML Links
Many data files are made available on the Internet at their own web address. If you are connected to the Internet, you can open these files straight into R with `read.table`, `read.csv`, etc. You can pass a web address into the file name argument for any of R’s data\-reading functions. As a result, you could read in the poker data set from a web address like *<http://>…/poker.csv* with:
```
poker <- read.csv("http://.../poker.csv")
```
That’s obviously not a real address, but here’s something that would work—if you can manage to type it!
```
deck <- read.csv("https://gist.githubusercontent.com/garrettgman/9629323/raw/ee5dfc039fd581cb467cc69c226ea2524913c3d8/deck.csv")
```
Just make sure that the web address links directly to the file and not to a web page that links to the file. Usually, when you visit a data file’s web address, the file will begin to download or the raw data will appear in your browser window.
Note that websites that begin with \_<https://_> are secure websites, which means R may not be able to access the data provided at these links.
### D.3\.5 Saving Plain\-Text Files
Once your data is in R, you can save it to any file format that R supports. If you’d like to save it as a plain\-text file, you can use the \+write\+ family of functions. The three basic write functions appear in Table [D.2](dataio.html#tab:write). Use `write.csv` to save your data as a *.csv* file and `write.table` to save your data as a tab delimited document or a document with more exotic separators.
Table D.2: R saves data sets to plain\-text files with the write family of functions
| File format | Function and syntax |
| --- | --- |
| **.csv** | `write.csv(r_object, file = filepath, row.names = FALSE)` |
| **.csv** (with European decimal notation) | `write.csv2(r_object, file = filepath, row.names = FALSE)` |
| tab delimited | `write.table(r_object, file = filepath, sep = "\t", row.names=FALSE)` |
The first argument of each function is the R object that contains your data set. The `file` argument is the file name (including extension) that you wish to give the saved data. By default, each function will save your data into your working directory. However, you can supply a file path to the file argument. R will oblige by saving the file at the end of the file path. If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.
For example, you can save the (hypothetical) poker data frame to a subdirectory named *data* within your working directory with the command:
```
write.csv(poker, "data/poker.csv", row.names = FALSE)
```
Keep in mind that `write.csv` and `write.table` cannot create new directories on your computer. Each folder in the file path must exist before you try to save a file with it.
The `row.names` argument prevents R from saving the data frame’s row names as a column in the plain\-text file. You might have noticed that R automatically names each row in a data frame with a number. For example, each row in our poker data frame appears with a number next to it:
```
poker
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 10 spades 10
```
These row numbers are helpful, but can quickly accumulate if you start saving them. R will add a new set of numbers by default each time you read the file back in. Avoid this by always setting `row.names = FALSE` when you use a function in the `write` family.
### D.3\.6 Compressing Files
To compress a plain\-text file, surround the file name or file path with the function `bzfile`, `gzfile`, or `xzfile`. For example:
```
write.csv(poker, file = bzfile("data/poker.csv.bz2"), row.names = FALSE)
```
Each of these functions will compress the output with a different type of compression format, shown in Table [D.3](dataio.html#tab:compression).
Table D.3: R comes with three helper functions for compressing files
| Function | Compression type |
| --- | --- |
| `bzfile` | bzip2 |
| `gzfile` | gnu zip (gzip) |
| `xzfile` | xz compression |
It is a good idea to adjust your file’s extension to reflect the compression. R’s `read` functions will open plain\-text files compressed in any of these formats. For example, you could read a compressed file named *poker.csv.bz2* with:
```
read.csv("poker.csv.bz2")
```
or:
```
read.csv("data/poker.csv.bz2")
```
depending on where the file is saved.
### D.3\.1 read.table
To load a plain\-text file, use `read.table`. The first argument of `read.table` should be the name of your file (if it is in your working directory), or the file path to your file (if it is not in your working directory). If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.You can give `read.table` other arguments as well. The two most important are `sep` and `header`.
If the royal flush data set was saved as a file named *poker.csv* in your working directory, you could load it with:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely. R wants to convert every character string into a factor. This is R’s default behavior, but I think it is a mistake. Sometimes factors are useful. At other times, they’re clearly the wrong data type for the job. Also factors cause weird behavior, especially when you want to display data. This behavior can be surprising if you didn’t realize that R converted your data to factors. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, it is easy to do this.
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session, or rechange the global default by running:
```
options(stringsAsFactors = TRUE)
```
#### D.3\.1\.1 sep
Use `sep` to tell `read.table` what character your file uses to separate data entries. To find this out, you might have to open your file in a text editor and look at it. If you don’t specify a `sep` argument, `read.table` will try to separate cells whenever it comes to white space, such as a tab or space. R won’t be able to tell you if `read.table` does this correctly or not, so rely on it at your own risk.
#### D.3\.1\.2 header
Use `header` to tell `read.table` whether the first line of the file contains variable names instead of values. If the first line of the file is a set of variable names, you should set `header = TRUE`.
#### D.3\.1\.3 na.strings
Oftentimes data sets will use special symbols to represent missing information. If you know that your data uses a certain symbol to represent missing entries, you can tell `read.table` (and the preceding functions) what the symbol is with the `na.strings` argument. `read.table` will convert all instances of the missing information symbol to `NA`, which is R’s missing information symbol (see [Missing Information](modify.html#missing)).
For example, your poker data set contained missing values stored as a `.`, like this:
```
## "card","suit","value"
## "ace"," spades"," 14"
## "king"," spades"," 13"
## "queen",".","."
## "jack",".","."
## "ten",".","."
```
You could read the data set into R and convert the missing values into NAs as you go with the command:
```
poker <- read.table("poker.csv", sep = ",", header = TRUE, na.string = ".")
```
R would save a version of `poker` that looks like this:
```
## card suit value
## ace spades 14
## king spades 13
## queen <NA> NA
## jack <NA> NA
## ten <NA> NA
```
#### D.3\.1\.4 skip and nrow
Sometimes a plain\-text file will come with introductory text that is not part of the data set. Or, you may decide that you only wish to read in part of a data set. You can do these things with the `skip` and `nrow` arguments. Use `skip` to tell R to skip a specific number of lines before it starts reading in values from the file. Use `nrow` to tell R to stop reading in values after it has read in a certain number of lines.
For example, imagine that the complete royal flush file looks like this:
```
This data was collected by the National Poker Institute.
We accidentally repeated the last row of data.
"card", "suit", "value"
"ace", "spades", 14
"king", "spades", 13
"queen", "spades", 12
"jack", "spades", 11
"ten", "spades", 10
"ten", "spades", 10
```
You can read just the six lines that you want (five rows plus a header) with:
```
read.table("poker.csv", sep = ",", header = TRUE, skip = 3, nrow = 5)
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 ten spades 10
```
Notice that the header row doesn’t count towards the total rows allowed by `nrow`.
#### D.3\.1\.5 stringsAsFactors
R reads in numbers just as you’d expect, but when R comes across character strings (e.g., letters and words) it begins to act strangely. R wants to convert every character string into a factor. This is R’s default behavior, but I think it is a mistake. Sometimes factors are useful. At other times, they’re clearly the wrong data type for the job. Also factors cause weird behavior, especially when you want to display data. This behavior can be surprising if you didn’t realize that R converted your data to factors. In general, you’ll have a smoother R experience if you don’t let R make factors until you ask for them. Thankfully, it is easy to do this.
Setting the argument `stringsAsFactors` to `FALSE` will ensure that R saves any character strings in your data set as character strings, not factors. To use `stringsAsFactors`, you’d write:
```
read.table("poker.csv", sep = ",", header = TRUE, stringsAsFactors = FALSE)
```
If you will be loading more than one data file, you can change the default factoring behavior at the global level with:
```
options(stringsAsFactors = FALSE)
```
This will ensure that all strings will be read as strings, not as factors, until you end your R session, or rechange the global default by running:
```
options(stringsAsFactors = TRUE)
```
### D.3\.2 The read Family
R also comes with some prepackaged short cuts for `read.table`, shown in Table [D.1](dataio.html#tab:shortcuts).
Table D.1: R’s read functions. You can overwrite any of the default arguments as necessary.
| Function | Defaults | Use |
| --- | --- | --- |
| `read.table` | sep \= " ", header \= FALSE | General\-purpose read function |
| `read.csv` | sep \= “,”, header \= TRUE | Comma\-separated\-variable (CSV) files |
| `read.delim` | sep \= “”, header \= TRUE | Tab\-delimited files |
| `read.csv2` | sep \= “;”, header \= TRUE, dec \= “,” | CSV files with European decimal format |
| `read.delim2` | sep \= “”, header \= TRUE, dec \= “,” | Tab\-delimited files with European decimal format |
The first shortcut, `read.csv`, behaves just like `read.table` but automatically sets `sep = ","` and `header = TRUE`, which can save you some typing:
```
poker <- read.csv("poker.csv")
```
`read.delim` automatically sets `sep` to the tab character, which is very handy for reading tab delimited files. These are files where each cell is separated by a tab. `read.delim` also sets `header = TRUE` by default.
`read.delim2` and `read.csv2` exist for European R users. These functions tell R that the data uses a comma instead of a period to denote decimal places. (If you’re wondering how this works with CSV files, CSV2 files usually separate cells with a semicolon, not a comma.)
**Import Dataset**
You can also load plain text files with RStudio’s Import Dataset button, as described in [Loading Data](r-objects.html#loading). Import Dataset provides a GUI version of `read.table`.
### D.3\.3 read.fwf
One type of plain\-text file defies the pattern by using its layout to separate data cells. Each row is placed in its own line (as with other plain\-text files), and then each column begins at a specific number of characters from the lefthand side of the document. To achieve this, an arbitrary number of character spaces is added to the end of each entry to correctly position the next entry. These documents are known as *fixed\-width files* and usually end with the extension *.fwf*.
Here’s one way the royal flush data set could look as a fixed\-width file. In each row, the suit entry begins exactly 10 characters from the start of the line. It doesn’t matter how many characters appeared in the first cell of each row:
```
card suit value
ace spades 14
king spades 13
queen spades 12
jack spades 11
10 spades 10
```
Fixed\-width files look nice to human eyes (but no better than a tab\-delimited file); however, they can be difficult to work with. Perhaps because of this, R comes with a function for reading fixed\-width files, but no function for saving them. Unfortunately, US government agencies seem to like fixed\-width files, and you’ll likely encounter one or more during your career.
You can read fixed\-width files into R with the function `read.fwf`. The function takes the same arguments as `read.table` but requires an additional argument, `widths`, which should be a vector of numbers. Each \_i\_th entry of the `widths` vector should state the width (in characters) of the \_i\_th column of the data set.
If the aforementioned fixed\-width royal flush data was saved as *poker.fwf* in your working directory, you could read it with:
```
poker <- read.fwf("poker.fwf", widths = c(10, 7, 6), header = TRUE)
```
### D.3\.4 HTML Links
Many data files are made available on the Internet at their own web address. If you are connected to the Internet, you can open these files straight into R with `read.table`, `read.csv`, etc. You can pass a web address into the file name argument for any of R’s data\-reading functions. As a result, you could read in the poker data set from a web address like *<http://>…/poker.csv* with:
```
poker <- read.csv("http://.../poker.csv")
```
That’s obviously not a real address, but here’s something that would work—if you can manage to type it!
```
deck <- read.csv("https://gist.githubusercontent.com/garrettgman/9629323/raw/ee5dfc039fd581cb467cc69c226ea2524913c3d8/deck.csv")
```
Just make sure that the web address links directly to the file and not to a web page that links to the file. Usually, when you visit a data file’s web address, the file will begin to download or the raw data will appear in your browser window.
Note that websites that begin with \_<https://_> are secure websites, which means R may not be able to access the data provided at these links.
### D.3\.5 Saving Plain\-Text Files
Once your data is in R, you can save it to any file format that R supports. If you’d like to save it as a plain\-text file, you can use the \+write\+ family of functions. The three basic write functions appear in Table [D.2](dataio.html#tab:write). Use `write.csv` to save your data as a *.csv* file and `write.table` to save your data as a tab delimited document or a document with more exotic separators.
Table D.2: R saves data sets to plain\-text files with the write family of functions
| File format | Function and syntax |
| --- | --- |
| **.csv** | `write.csv(r_object, file = filepath, row.names = FALSE)` |
| **.csv** (with European decimal notation) | `write.csv2(r_object, file = filepath, row.names = FALSE)` |
| tab delimited | `write.table(r_object, file = filepath, sep = "\t", row.names=FALSE)` |
The first argument of each function is the R object that contains your data set. The `file` argument is the file name (including extension) that you wish to give the saved data. By default, each function will save your data into your working directory. However, you can supply a file path to the file argument. R will oblige by saving the file at the end of the file path. If the file path does not begin with your root directory, R will append it to the end of the file path that leads to your working directory.
For example, you can save the (hypothetical) poker data frame to a subdirectory named *data* within your working directory with the command:
```
write.csv(poker, "data/poker.csv", row.names = FALSE)
```
Keep in mind that `write.csv` and `write.table` cannot create new directories on your computer. Each folder in the file path must exist before you try to save a file with it.
The `row.names` argument prevents R from saving the data frame’s row names as a column in the plain\-text file. You might have noticed that R automatically names each row in a data frame with a number. For example, each row in our poker data frame appears with a number next to it:
```
poker
## card suit value
## 1 ace spades 14
## 2 king spades 13
## 3 queen spades 12
## 4 jack spades 11
## 5 10 spades 10
```
These row numbers are helpful, but can quickly accumulate if you start saving them. R will add a new set of numbers by default each time you read the file back in. Avoid this by always setting `row.names = FALSE` when you use a function in the `write` family.
### D.3\.6 Compressing Files
To compress a plain\-text file, surround the file name or file path with the function `bzfile`, `gzfile`, or `xzfile`. For example:
```
write.csv(poker, file = bzfile("data/poker.csv.bz2"), row.names = FALSE)
```
Each of these functions will compress the output with a different type of compression format, shown in Table [D.3](dataio.html#tab:compression).
Table D.3: R comes with three helper functions for compressing files
| Function | Compression type |
| --- | --- |
| `bzfile` | bzip2 |
| `gzfile` | gnu zip (gzip) |
| `xzfile` | xz compression |
It is a good idea to adjust your file’s extension to reflect the compression. R’s `read` functions will open plain\-text files compressed in any of these formats. For example, you could read a compressed file named *poker.csv.bz2* with:
```
read.csv("poker.csv.bz2")
```
or:
```
read.csv("data/poker.csv.bz2")
```
depending on where the file is saved.
D.4 R Files
-----------
R provides two file formats of its own for storing data, *.RDS* and *.RData*. RDS files can store a single R object, and RData files can store multiple R objects.
You can open a RDS file with `readRDS`. For example, if the royal flush data was saved as *poker.RDS*, you could open it with:
```
poker <- readRDS("poker.RDS")
```
Opening RData files is even easier. Simply run the function `load` with the file:
```
load("file.RData")
```
There’s no need to assign the output to an object. The R objects in your RData file will be loaded into your R session with their original names. RData files can contain multiple R objects, so loading one may read in multiple objects. `load` doesn’t tell you how many objects it is reading in, nor what their names are, so it pays to know a little about the RData file before you load it.
If worse comes to worst, you can keep an eye on the environment pane in RStudio as you load an RData file. It displays all of the objects that you have created or loaded during your R session. Another useful trick is to put parentheses around your load command like so, `(load("poker.RData"))`. This will cause R to print out the names of each object it loads from the file.
Both `readRDS` and `load` take a file path as their first argument, just like R’s other read and write functions. If your file is in your working directory, the file path will be the file name.
### D.4\.1 Saving R Files
You can save an R object like a data frame as either an RData file or an RDS file. RData files can store multiple R objects at once, but RDS files are the better choice because they foster reproducible code.
To save data as an RData object, use the `save` function. To save data as a RDS object, use the `saveRDS` function. In each case, the first argument should be the name of the R object you wish to save. You should then include a file argument that has the file name or file path you want to save the data set to.
For example, if you have three R objects, `a`, `b`, and `c`, you could save them all in the same RData file and then reload them in another R session:
```
a <- 1
b <- 2
c <- 3
save(a, b, c, file = "stuff.RData")
load("stuff.RData")
```
However, if you forget the names of your objects or give your file to someone else to use, it will be difficult to determine what was in the file—even after you (or they) load it. The user interface for RDS files is much more clear. You can save only one object per file, and whoever loads it can decide what they want to call their new data. As a bonus, you don’t have to worry about `load` overwriting any R objects that happened to have the same name as the objects you are loading:
```
saveRDS(a, file = "stuff.RDS")
a <- readRDS("stuff.RDS")
```
Saving your data as an R file offers some advantages over saving your data as a plain\-text file. R automatically compresses the file and will also save any R\-related metadata associated with your object. This can be handy if your data contains factors, dates and times, or class attributes. You won’t have to reparse this information into R the way you would if you converted everything to a text file.
On the other hand, R files cannot be read by many other programs, which makes them inefficient for sharing. They may also create a problem for long\-term storage if you don’t think you’ll have a copy of R when you reopen the files.
### D.4\.1 Saving R Files
You can save an R object like a data frame as either an RData file or an RDS file. RData files can store multiple R objects at once, but RDS files are the better choice because they foster reproducible code.
To save data as an RData object, use the `save` function. To save data as a RDS object, use the `saveRDS` function. In each case, the first argument should be the name of the R object you wish to save. You should then include a file argument that has the file name or file path you want to save the data set to.
For example, if you have three R objects, `a`, `b`, and `c`, you could save them all in the same RData file and then reload them in another R session:
```
a <- 1
b <- 2
c <- 3
save(a, b, c, file = "stuff.RData")
load("stuff.RData")
```
However, if you forget the names of your objects or give your file to someone else to use, it will be difficult to determine what was in the file—even after you (or they) load it. The user interface for RDS files is much more clear. You can save only one object per file, and whoever loads it can decide what they want to call their new data. As a bonus, you don’t have to worry about `load` overwriting any R objects that happened to have the same name as the objects you are loading:
```
saveRDS(a, file = "stuff.RDS")
a <- readRDS("stuff.RDS")
```
Saving your data as an R file offers some advantages over saving your data as a plain\-text file. R automatically compresses the file and will also save any R\-related metadata associated with your object. This can be handy if your data contains factors, dates and times, or class attributes. You won’t have to reparse this information into R the way you would if you converted everything to a text file.
On the other hand, R files cannot be read by many other programs, which makes them inefficient for sharing. They may also create a problem for long\-term storage if you don’t think you’ll have a copy of R when you reopen the files.
D.5 Excel Spreadsheets
----------------------
Microsoft Excel is a popular spreadsheet program that has become almost industry standard in the business world. There is a good chance that you will need to work with an Excel spreadsheet in R at least once in your career. You can read spreadsheets into R and also save R data as a spreadsheet in a variety of ways.
### D.5\.1 Export from Excel
The best method for moving data from Excel to R is to export the spreadsheet from Excel as a *.csv* or *.txt* file. Not only will R be able to read the text file, so will any other data analysis software. Text files are the lingua franca of data storage.
Exporting the data solves another difficulty as well. Excel uses proprietary formats and metadata that will not easily transfer into R. For example, a single Excel file can include multiple spreadsheets, each with their own columns and macros. When Excel exports the file as a *.csv* or *.txt*, it makes sure this format is transferred into a plain\-text file in the most appropriate way. R may not be able to manage the conversion as efficiently.
To export data from Excel, open the Excel spreadsheet and then go to Save As in the Microsoft Office Button menu. Then choose CSV in the Save as type box that appears and save the files. You can then read the file into R with the `read.csv` function.
### D.5\.2 Copy and Paste
You can also copy portions of an Excel spreadsheet and paste them into R. To do this, open the spreadsheet and select the cells you wish to read into R. Then select Edit \> Copy in the menu bar—or use a keyboard shortcut—to copy the cells to your clipboard.
On most operating systems, you can read the data stored in your clipboard into R with:
```
read.table("clipboard")
```
On Macs you will need to use:
```
read.table(pipe("pbpaste"))
```
If the cells contain values with spaces in them, this will disrupt `read.table`. You can try another `read` function (or just formally export the data from Excel) before reading it into R.
### D.5\.3 XLConnect
Many packages have been written to help you read Excel files directly into R. Unfortunately, many of these packages do not work on all operating systems. Others have been made out of date by the *.xlsx* file format. One package that does work on all file systems (and gets good reviews) is the XLConnect package. To use it, you’ll need to install and load the package:
```
install.packages("XLConnect")
library(XLConnect)
```
XLConnect relies on Java to be platform independent. So when you first open XLConnect, RStudio may ask to download a Java Runtime Environment if you do not already have one.
### D.5\.4 Reading Spreadsheets
You can use XLConnect to read in an Excel spreadsheet with either a one\- or a two\-step process. I’ll start with the two\-step process. First, load an Excel workbook with `loadWorkbook`. `loadWorkbook` can load both *.xls* and *.xlsx* files. It takes one argument: the file path to your Excel workbook (this will be the name of the workbook if it is saved in your working directory):
```
wb <- loadWorkbook("file.xlsx")
```
Next, read a spreadsheet from the workbook with `readWorksheet`, which takes several arguments. The first argument should be a workbook object created with `loadWorkbook`. The next argument, `sheet`, should be the name of the spreadsheet in the workbook that you would like to read into R. This will be the name that appears on the bottom tab of the spreadsheet. You can also give `sheet` a number, which specifies the sheet that you want to read in (one for the first sheet, two for the second, and so on).
`readWorksheet` then takes four arguments that specify a bounding box of cells to read in: `startRow`, `startCol`, `endRow`, and `endCol`. Use `startRow` and `startCol` to describe the cell in the top\-left corner of the bounding box of cells that you wish to read in. Use `endRow` and `endCol` to specify the cell in the bottom\-right corner of the bounding box. Each of these arguments takes a number. If you do not supply bounding arguments, `readWorksheet` will read in the rectangular region of cells in the spreadsheet that appears to contain data. `readWorksheet` will assume that this region contains a header row, but you can tell it otherwise with `header = FALSE`.
So to read in the first worksheet from `wb`, you could use:
```
sheet1 <- readWorksheet(wb, sheet = 1, startRow = 0, startCol = 0,
endRow = 100, endCol = 3)
```
R will save the output as a data frame. All of the arguments in `readWorkbook` except the first are vectorized, so you can use it to read in multiple sheets from the same workbook at once (or multiple cell regions from a single worksheet). In this case, `readWorksheet` will return a list of data frames.
You can combine these two steps with `readWorksheetFromFile`. It takes the file argument from `loadWorkbook` and combines it with the arguments from `readWorksheet`. You can use it to read one or more sheets straight from an Excel file:
```
sheet1 <- readWorksheetFromFile("file.xlsx", sheet = 1, startRow = 0,
startCol = 0, endRow = 100, endCol = 3)
```
### D.5\.5 Writing Spreadsheets
Writing to an Excel spreadsheet is a four\-step process. First, you need to set up a workbook object with `loadWorkbook`. This works just as before, except if you are not using an existing Excel file, you should add the argument `create = TRUE`. XLConnect will create a blank workbook. When you save it, XLConnect will write it to the file location that you specified here with `loadWorkbook`:
```
wb <- loadWorkbook("file.xlsx", create = TRUE)
```
Next, you need to create a worksheet inside your workbook object with `createSheet`. Tell `createSheet` which workbook to place the sheet in and which to use for the sheet.
```
createSheet(wb, "Sheet 1")
```
Then you can save your data frame or matrix to the sheet with `writeWorksheet`. The first argument of `writeWorksheet`, `object`, is the workbook to write the data to. The second argument, `data`, is the data to write. The third argument, `sheet`, is the name of the sheet to write it to. The next two arguments, `startRow` and `startCol`, tell R where in the spreadsheet to place the upper\-left cell of the new data. These arguments each default to 1\. Finally, you can use `header` to tell R whether your column names should be written with the data:
```
writeWorksheet(wb, data = poker, sheet = "Sheet 1")
```
Once you have finished adding sheets and data to your workbook, you can save it by running `saveWorkbook` on the workbook object. R will save the workbook to the file name or path you provided in `loadWorkbook`. If this leads to an existing Excel file, R will overwrite it. If it leads to a new file, R will create it.
You can also collapse these steps into a single call with `writeWorksheetToFile`, like this:
```
writeWorksheetToFile("file.xlsx", data = poker, sheet = "Sheet 1",
startRow = 1, startCol = 1)
```
The XLConnect package also lets you do more advanced things with Excel spreadsheets, such as writing to a named region in a spreadsheet, working with formulas, and assigning styles to cells. You can read about these features in XLConnect’s vignette, which is accessible by loading XLConnect and then running:
```
vignette("XLConnect")
```
D.6 Loading Files from Other Programs
-------------------------------------
You should follow the same advice I gave you for Excel files whenever you wish to work with file formats native to other programs: open the file in the original program and export the data as a plain\-text file, usually a CSV. This will ensure the most faithful transcription of the data in the file, and it will usually give you the most options for customizing how the data is transcribed.
Sometimes, however, you may acquire a file but not the program it came from. As a result, you won’t be able to open the file in its native program and export it as a text file. In this case, you can use one of the functions in Table [D.4](dataio.html#tab:others) to open the file. These functions mostly come in R’s `foreign` package. Each attempts to read in a different file format with as few hiccups as possible.
Table D.4: A number of functions will attempt to read the file types of other data\-analysis programs
| File format | Function | Library |
| --- | --- | --- |
| ESRI ArcGIS | `read.shapefile` | shapefiles |
| Matlab | `readMat` | R.matlab |
| Minitab | `read.mtp` | foreign |
| SAS (permanent data set) | `read.ssd` | foreign |
| SAS (XPORT format) | `read.xport` | foreign |
| SPSS | `read.spss` | foreign |
| Stata | `read.dta` | foreign |
| Systat | `read.systat` | foreign |
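For example, to read a Stata data set (a sketch, assuming a file named *survey.dta* sits in your working directory):

```
library(foreign)

# read.dta returns a data frame; Stata value labels become factors
survey <- read.dta("survey.dta")
```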
### D.6\.1 Connecting to Databases
You can also use R to connect to a database and read in data.
Use the RODBC package to connect to databases through an ODBC connection.
Use the DBI package to connect to databases through individual drivers. The DBI package provides a common syntax for working with different databases. You will have to download a database\-specific package to use in conjunction with DBI. These packages provide the API for the native drivers of different database programs. For MySQL use RMySQL, for SQLite use RSQLite, for Oracle use ROracle, for PostgreSQL use RPostgreSQL, and for databases that use drivers based on the Java Database Connectivity (JDBC) API use RJDBC. Once you have loaded the appropriate driver package, you can use the commands provided by DBI to access your database.
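A minimal sketch of a DBI session with RSQLite (the database file and table name here are hypothetical):

```
library(DBI)

# connect through the RSQLite driver; other databases swap in their
# own driver package here (RMySQL, RPostgreSQL, and so on)
con <- dbConnect(RSQLite::SQLite(), "mydb.sqlite")

dbListTables(con)                               # list the tables in the database
poker <- dbGetQuery(con, "SELECT * FROM poker") # hypothetical table
dbDisconnect(con)                               # close the connection when done
```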
E Debugging R Code
==================
This appendix refers to environments, the topic of [Environments](environments.html#environments-1), and uses examples from [Programs](programs.html#programs) and [S3](s3.html#s3). You should read through these chapters first to get the most out of this appendix.
R comes with a simple set of debugging tools that RStudio amplifies. You can use these tools to better understand code that produces an error or returns an unexpected result. Usually this will be your own code, but you can also examine the functions in R or one of its packages.
Debugging code can take as much creativity and insight as writing code. There is no guarantee that you will find a bug or be able to fix it when you do. However, you can help yourself by using R’s debugging tools. These include the `traceback`, `browser`, `debug`, `debugonce`, `trace`, and `recover` functions.
Using these tools is usually a two\-step process. First, you locate *where* an error occurred. Then you try to determine *why* it occurred. You can do the first step with R’s `traceback` function.
E.1 traceback
-------------
The `traceback` tool pinpoints the location of an error. Many R functions call other R functions, which call other functions, and so on. When an error occurs, it may not be clear which of these functions went wrong. Let’s consider an example. The following functions call one another, and the last function creates an error (you’ll see why in a second):
```
first <- function() second()
second <- function() third()
third <- function() fourth()
fourth <- function() fifth()
fifth <- function() bug()
```
When you run `first`, it will call `second`, which will call `third`, which will call `fourth`, which will call `fifth`, which will call `bug`, a function that does not exist. Here’s what that will look like at the command line:
```
first()
## Error in fifth() : could not find function "bug"
```
The error report tells us that the error occurred when R tried to run `fifth`. It also tells us the nature of the error (there is no function called `bug`). Here, it is obvious why R calls `fifth`, but it might not be so obvious why R calls a function when an error occurs in the wild.
You can see the path of functions that R called before it hit an error by typing *`traceback()`* at the command line. `traceback` will return a call stack, a list of the functions that R called in the order that it called them. The bottom function will be the command that you entered in the command line. The top function will be the function that caused the error:
```
traceback()
## 5: fifth() at #1
## 4: fourth() at #1
## 3: third() at #1
## 2: second() at #1
## 1: first()
```
`traceback` will always refer to the last error you encountered. If you would like to look at a less recent error, you will need to recreate it before running `traceback`.
How can this help you? First, `traceback` returns a list of suspects. One of these functions caused the error, and each function is more suspicious than the ones below it. Chances are that our bug came from `fifth` (it did), but it is also possible that an earlier function did something odd—like call `fifth` when it shouldn’t have.
Second, `traceback` can show you if R stepped off the path that you expected it to take. If this happened, look at the last function before things went wrong.
Third, `traceback` can reveal the frightening extent of infinite recursion errors. For example, if you change `fifth` so that it calls `second`, the functions will make a loop: `second` will call `third`, which will call `fourth`, which will call `fifth`, which will call `second` and start the loop over again. It is easier to do this sort of thing in practice than you might think:
```
fifth <- function() second()
```
When you call `first()`, R will start to run the functions. After a while, it will notice that it is repeating itself and will return an error. `traceback` will show just what R was doing:
```
first()
## Error: evaluation nested too deeply: infinite recursion/options(expressions=)?
traceback()
## 5000: fourth() at #1
## 4999: third() at #1
## 4998: second() at #1
## 4997: fifth() at #1
## 4996: fourth() at #1
## 4995: third() at #1
## 4994: second() at #1
## 4993: fifth() at #1
## ...
```
Notice that there are 5,000 lines of output in this `traceback`. If you are using RStudio, you will not get to see the traceback of an infinite recursion error (I used the Mac GUI to get this output). RStudio suppresses the traceback for infinite recursion errors to prevent the large call stacks from pushing your console history out of R’s memory buffer. With RStudio, you will have to recognize the infinite recursion error by its error message. However, you can still see the imposing `traceback` by running things in a UNIX shell or the Windows or Mac GUIs.
RStudio makes it very easy to use `traceback`. You do not even need to type in the function name. Whenever an error occurs, RStudio will display it in a gray box with two options. The first is Show Traceback, shown in Figure [E.1](debug.html#fig:show-traceback).
Figure E.1: RStudio’s Show Traceback option.
If you click Show Traceback, RStudio will expand the gray box and display the `traceback` call stack, as in Figure [E.2](debug.html#fig:hide-traceback). The Show Traceback option will persist beside an error message in your console, even as you write new commands. This means that you can go back and look at the call stacks for all errors—not just the most recent error.
Imagine that you’ve used `traceback` to pinpoint a function that you think might cause a bug. Now what should you do? You should try to figure out what the function did to cause an error while it ran (if it did anything). You can examine how the function runs with `browser`.
Figure E.2: RStudio’s Traceback display.
E.2 browser
-----------
You can ask R to pause in the middle of running a function and give control back to you with `browser`. This will let you enter new commands at the command line. The active environment for these commands will not be the global environment (as usual); it will be the runtime environment of the function that you have paused. As a result, you can look at the objects that the function is using, look up their values with the same scoping rules that the function would use, and run code under the same conditions that the function would run it in. This arrangement provides the best chance for spotting the source of bugs in a function.
To use `browser`, add the call `browser()` to the body of a function and then resave the function. For example, if I wanted to pause in the middle of the `score` function from [Programs](programs.html#programs), I could add `browser()` to the body of `score` and then rerun the following code, which defines `score`:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
browser()
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
Now whenever R runs `score`, it will come to the call `browser()`. You can see this with the `play` function from [Programs](programs.html#programs). If you don’t have `play` handy, you can access it by running this code:
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
}
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
When you run `play`, `play` will call `get_symbols` and then `score`. As R works through `score`, it will come across the call to `browser` and run it. When R runs this call, several things will happen, as in Figure [E.3](debug.html#fig:browser). First, R will stop running `score`. Second, the command prompt will change to `Browse[1]>` and R will give me back control; I can now type new commands at the new command prompt. Third, three buttons will appear above the console pane: Next, Continue, and Stop. Fourth, RStudio will display the source code for `score` in the scripts pane, and it will highlight the line that contains `browser()`. Fifth, the environments tab will change. Instead of revealing the objects that are saved in the global environment, it will reveal the objects that are saved in the runtime environment of `score` (see [Environments](environments.html#environments-1) for an explanation of R’s environment system). Sixth, RStudio will open a new Traceback pane, which shows the call stack RStudio took to get to `browser`. The most recent function, `score`, will be highlighted.
I’m now in a new R mode, called *browser mode*. Browser mode is designed to help you uncover bugs, and the new display in RStudio is designed to help you navigate this mode.
Any command that you run in browser mode will be evaluated in the context of the runtime environment of the function that called `browser`. This will be the function that is highlighted in the new Traceback pane. Here, that function is `score`. So while we are in browser mode, the active environment will be `score`’s runtime environment. This lets you do two things.
Figure E.3: RStudio updates its display whenever you enter browser mode to help you navigate the mode.
First, you can inspect the objects that `score` uses. The updated Environments pane shows you which objects `score` has saved in its local environment. You can inspect any of them by typing their name at the browser prompt. This gives you a way to see the values of runtime variables that you normally would not be able to access. If a value looks clearly wrong, you may be close to finding a bug:
```
Browse[1]> symbols
## [1] "B" "B" "0"
Browse[1]> same
## [1] FALSE
```
Second, you can run code and see the same results that `score` would see. For example, you could run the remaining lines of the `score` function and see if they do anything unusual. You could run these lines by typing them into the command prompt, or you could use the three navigation buttons that now appear above the prompt, as in Figure [E.4](debug.html#fig:browser-buttons).
The first button, Next, will run the next line of code in `score`. The highlighted line in the scripts pane will advance by one line to show you your new location in the `score` function. If the next line begins a code chunk, like a `for` loop or an `if` tree, R will run the whole chunk and will highlight the whole chunk in the script window.
The second button, Continue, will run all of the remaining lines of `score` and then exit the browser mode.
The third button, Stop, will exit browser mode without running any more lines of `score`.
Figure E.4: You can navigate browser mode with the three buttons at the top of the console pane.
You can do the same things by typing the commands `n`, `c`, and `Q` into the browser prompt. This creates an annoyance: what if you want to look up an object named `n`, `c`, or `Q`? Typing in the object name will not work; R will either advance, continue, or quit browser mode. Instead you will have to look these objects up with the commands `get("n")`, `get("c")`, and `get("Q")`. `cont` is a synonym for `c` in browser mode and `where` prints the call stack, so you’ll have to look up these objects with `get` as well.
Browser mode can help you see things from the perspective of your functions, but it cannot show you where the bug lies. However, browser mode can help you test hypotheses and investigate function behavior. This is usually all you need to spot and fix a bug. The browser mode is the basic debugging tool of R. Each of the following functions just provides an alternate way to enter the browser mode.
Once you fix the bug, you should resave your function a third time—this time without the `browser()` call. As long as the browser call is in there, R will pause each time you, or another function, calls `score`.
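One related trick: you can wrap `browser()` in a condition so that R pauses only in the cases you care about. A minimal sketch (not from the book):

```
noisy_sqrt <- function(x) {
  # pause only when we are about to produce NaNs
  if (any(x < 0)) browser()
  sqrt(x)
}

noisy_sqrt(c(4, 9))     # runs normally
# noisy_sqrt(c(4, -1))  # enters browser mode
```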
E.3 Break Points
----------------
RStudio’s break points provide a graphical way to add a `browser` statement to a function. To use them, open the script where you’ve defined a function. Then click to the left of the line number of the line of code in the function body where you’d like to add the browser statement. A hollow red dot will appear to show you where the break point will occur. Then run the script by clicking the Source button at the top of the Scripts pane. The hollow dot will turn into a solid red dot to show that the function has a break point (see Figure [E.5](debug.html#fig:break-point)).
R will treat the break point like a `browser` statement, going into browser mode when it encounters it. You can remove a break point by clicking on the red dot. The dot will disappear, and the break point will be removed.
Figure E.5: Break points provide the graphical equivalent of a browser statement.
Break points and `browser` provide a great way to debug functions that you have defined. But what if you want to debug a function that already exists in R? You can do that with the `debug` function.
E.4 debug
---------
You can “add” a browser call to the very start of a preexisting function with `debug`. To do this, run `debug` on the function. For example, you can run `debug` on `sample` with:
```
debug(sample)
```
Afterward, R will act as if there is a `browser()` statement in the first line of the function. Whenever R runs the function, it will immediately enter browser mode, allowing you to step through the function one line at a time. R will continue to behave this way until you “remove” the browser statement with `undebug`:
```
undebug(sample)
```
You can check whether a function is in “debugging” mode with `isdebugged`. This will return `TRUE` if you’ve run `debug` on the function but have yet to run `undebug`:
```
isdebugged(sample)
## FALSE
```
If this is all too much of a hassle, you can do what I do and use `debugonce` instead of `debug`. R will enter browser mode the very next time it runs the function but will automatically undebug the function afterward. If you need to browse through the function again, you can just run `debugonce` on it a second time.
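In other words (a quick sketch):

```
debugonce(sample)
sample(1:6, size = 2) # enters browser mode this one time
sample(1:6, size = 2) # runs normally again
```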
You can recreate `debugonce` in RStudio whenever an error occurs. “Rerun with debug” will appear in the grey error box beneath Show Traceback (Figure [E.1](debug.html#fig:show-traceback)). If you click this option, RStudio will rerun the command as if you had first run `debugonce` on it. R will immediately go into browser mode, allowing you to step through the code. The browser behavior will only occur on this run of the code. You do not need to worry about calling `undebug` when you are done.
E.5 trace
---------
You can add the browser statement further into the function, and not at the very start, with `trace`. `trace` takes the name of a function as a character string and then an R expression to insert into the function. You can also provide an `at` argument that tells `trace` at which line of the function to place the expression. So to insert a browser call at the fourth line of `sample`, you would run:
```
trace("sample", browser, at = 4)
```
You can use `trace` to insert other R functions (not just `browser`) into a function, but you may need to think of a clever reason for doing so. You can also run `trace` on a function without inserting any new code. R will print `trace: <the function>` at the command line every time R runs the function. This is a great way to test a claim I made in [S3](s3.html#s3), that R calls `print` every time it displays something at the command line:
```
trace(print)
first
## trace: print(function () second())
## function() second()
head(deck)
## trace: print
## face suit value
## 1 king spades 13
## 2 queen spades 12
## 3 jack spades 11
## 4 ten spades 10
## 5 nine spades 9
## 6 eight spades 8
```
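For instance, this sketch logs an intermediate value instead of pausing (`x` is the first formal argument of `sample`):

```
# print sample's x argument each time R reaches line 2 of its body
trace("sample", quote(print(x)), at = 2)
```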
You can revert a function to normal after calling trace on it with `untrace`:
```
untrace(sample)
untrace(print)
```
E.6 recover
-----------
The `recover` function provides one final option for debugging. It combines the call stack of `traceback` with the browser mode of `browser`. You can use `recover` just like `browser`, by inserting it directly into a function’s body. Let’s demonstrate `recover` with the `fifth` function:
```
fifth <- function() recover()
```
When R runs `recover`, it will pause and display the call stack, but that’s not all. R gives you the option of opening a browser mode in *any* of the functions that appear in the call stack. Annoyingly, the call stack will be displayed upside down compared to `traceback`. The most recent function will be on the bottom, and the original function will be on the top:
```
first()
##
## Enter a frame number, or 0 to exit
##
## 1: first()
## 2: #1: second()
## 3: #1: third()
## 4: #1: fourth()
## 5: #1: fifth()
```
To enter a browser mode, type in the number next to the function in whose runtime environment you would like to browse. If you do not wish to browse any of the functions, type `0`:
```
3
## Selection: 3
## Called from: fourth()
## Browse[1]>
```
You can then proceed as normal. `recover` gives you a chance to inspect variables up and down your call stack and is a powerful tool for uncovering bugs. However, adding `recover` to the body of an R function can be cumbersome. Most R users use it as a global option for handling errors.
If you run the following code, R will automatically call `recover()` whenever an error occurs:
```
options(error = recover)
```
This behavior will last until you close your R session, or reverse the behavior by calling:
```
options(error = NULL)
```
E.1 traceback
-------------
The `traceback` tool pinpoints the location of an error. Many R functions call other R functions, which call other functions, and so on. When an error occurs, it may not be clear which of these functions went wrong. Let’s consider an example. The following functions call one another, and the last function creates an error (you’ll see why in a second):
```
first <- function() second()
second <- function() third()
third <- function() fourth()
fourth <- function() fifth()
fifth <- function() bug()
```
When you run `first`, it will call `second`, which will call `third`, which will call `fourth`, which will call `fifth`, which will call `bug`, a function that does not exist. Here’s what that will look like at the command line:
```
first()
## Error in fifth() : could not find function "bug"
```
The error report tells us that the error occurred when R tried to run `fifth`. It also tells us the nature of the error (there is no function called `bug`). Here, it is obvious why R calls `fifth`, but it might not be so obvious why R calls a function when an error occurs in the wild.
You can see the path of functions that R called before it hit an error by typing *`traceback()`* at the command line. `traceback` will return a call stack, a list of the functions that R called in the order that it called them. The bottom function will be the command that you entered in the command line. The top function will be the function that caused the error:
```
traceback()
## 5: fifth() at #1
## 4: fourth() at #1
## 3: third() at #1
## 2: second() at #1
## 1: first()
```
`traceback` will always refer to the last error you encountered. If you would like to look at a less recent error, you will need to recreate it before running `traceback`.
How can this help you? First, `traceback` returns a list of suspects. One of these functions caused the error, and each function is more suspicious than the ones below it. Chances are that our bug came from `fifth` (it did), but it is also possible that an earlier function did something odd—like call `fifth` when it shouldn’t have.
Second, `traceback` can show you if R stepped off the path that you expected it to take. If this happened, look at the last function before things went wrong.
Third, `traceback` can reveal the frightening extent of infinite recursion errors. For example, if you change `fifth` so that it calls `second`, the functions will make a loop: `second` will call `third`, which will call `fourth`, which will call `fifth`, which will call `second` and start the loop over again. It is easier to do this sort of thing in practice than you might think:
```
fifth <- function() second()
```
When you call `first()`, R will start to run the functions. After awhile, it will notice that it is repeating itself and will return an error. `traceback` will show just what R was doing:
```
first()
## Error: evaluation nested too deeply: infinite recursion/options(expressions=)?
traceback()
## 5000: fourth() at #1
## 4999: third() at #1
## 4998: second() at #1
## 4997: fifth() at #1
## 4996: fourth() at #1
## 4995: third() at #1
## 4994: second() at #1
## 4993: fifth() at #1
## ...
```
Notice that there are 5,000 lines of output in this `traceback`. If you are using RStudio, you will not get to see the traceback of an infinite recursion error (I used the Mac GUI to get this output). RStudio represses the traceback for infinite recursion errors to prevent the large call stacks from pushing your console history out of R’s memory buffer. With RStudio, you will have to recognize the infinite recursion error by its error message. However, you can still see the imposing `traceback` by running things in a UNIX shell or the Windows or Mac GUIs.
RStudio makes it very easy to use `traceback`. You do not even need to type in the function name. Whenever an error occurs, RStudio will display it in a gray box with two options. The first is Show Traceback, shown in Figure [E.1](debug.html#fig:show-traceback).
Figure E.1: RStudio’s Show Traceback option.
If you click Show Traceback, RStudio will expand the gray box and display the `traceback` call stack, as in Figure [E.2](debug.html#fig:hide-traceback). The Show Traceback option will persist beside an error message in your console, even as you write new commands. This means that you can go back and look at the call stacks for all errors—not just the most recent error.
Imagine that you’ve used `traceback` to pinpoint a function that you think might cause a bug. Now what should you do? You should try to figure out what the function did to cause an error while it ran (if it did anything). You can examine how the function runs with `browser`.
Figure E.2: RStudio’s Traceback display.
E.2 browser
-----------
You can ask R to pause in the middle of running a function and give control back to you with `browser`. This will let you enter new commands at the command line. The active environment for these commands will not be the global environment (as usual); it will be the runtime environment of the function that you have paused. As a result, you can look at the objects that the function is using, look up their values with the same scoping rules that the function would use, and run code under the same conditions that the function would run it in. This arrangement provides the best chance for spotting the source of bugs in a function.
To use `browser`, add the call `browser()` to the body of a function and then resave the function. For example, if I wanted to pause in the middle of the `score` function from [Programs](programs.html#programs), I could add `browser()` to the body of `score` and then rerun the following code, which defines `score`:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
browser()
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
Now whenever R runs `score`, it will come to the call `browser()`. You can see this with the `play` function from [Programs](programs.html#programs). If you don’t have `play` handy, you can access it by running this code:
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
}
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
When you run `play`, `play` will call `get_symbols` and then `score`. As R works through `score`, it will come across the call to `browser` and run it. When R runs this call, several things will happen, as in Figure [E.3](debug.html#fig:browser). First, R will stop running `score`. Second, the command prompt will change to `browser[1]>` and R will give me back control; I can now type new commands in at the new command prompt. Third, three buttons will appear above the console pane: Next, Continue, and Stop. Fourth, RStudio will display the source code for `score` in the scripts pane, and it will highlight the line that contains `browser()`. Fifth, the environments tab will change. Instead of revealing the objects that are saved in the global environment, it will reveal the objects that are saved in the runtime environment of `score` (see [Environments](environments.html#environments-1) for an explanation of R’s environment system). Sixth, RStudio will open a new Traceback pane, which shows the call stack RStudio took to get to `browser`. The most recent function, `score`, will be highlighted.
I’m now in a new R mode, called *browser mode*. Browser mode is designed to help you uncover bugs, and the new display in RStudio is designed to help you navigate this mode.
Any command that you run in browser mode will be evaluated in the context of the runtime environment of the function that called `browser`. This will be the function that is highlighted in the new Traceback pane. Here, that function is `score`. So while we are in browser mode, the active environment will be `score`’s runtime environment. This lets you do two things.
Figure E.3: RStudio updates its display whenever you enter browser mode to help you navigate the mode.
First, you can inspect the objects that `score` uses. The updated Environments pane shows you which objects `score` has saved in its local environment. You can inspect any of them by typing their name at the browser prompt. This gives you a way to see the values of runtime variables that you normally would not be able to access. If a value looks clearly wrong, you may be close to finding a bug:
```
Browse[1]> symbols
## [1] "B" "B" "0"
Browse[1]> same
## [1] FALSE
```
Second, you can run code and see the same results that `score` would see. For example, you could run the remaining lines of the `score` function and see if they do anything unusual. You could run these lines by typing them into the command prompt, or you could use the three navigation buttons that now appear above the prompt, as in Figure [E.4](debug.html#fig:browser-buttons).
The first button, Next, will run the next line of code in `score`. The highlighted line in the scripts pane will advance by one line to show you your new location in the `score` function. If the next line begins a code chunk, like a `for` loop or an `if` tree, R will run the whole chunk and will highlight the whole chunk in the script window.
The second button, Continue, will run all of the remaining lines of `score` and then exit the browser mode.
The third button, Stop, will exit browser mode without running any more lines of `score`.
Figure E.4: You can navigate browser mode with the three buttons at the top of the console pane.
You can do the same things by typing the commands `n`, `c`, and `Q` into the browser prompt. This creates an annoyance: what if you want to look up an object named `n`, `c`, or `Q`? Typing in the object name will not work, R will either advance, continue, or quit the browser mode. Instead you will have to look these objects up with the commands `get("n")`, `get("c")`, and `get("Q")`. `cont` is a synonym for `c` in browser mode and `where` prints the call stack, so you’ll have to look up these objects with `get` as well.
Browser mode can help you see things from the perspective of your functions, but it cannot show you where the bug lies. However, browser mode can help you test hypotheses and investigate function behavior. This is usually all you need to spot and fix a bug. The browser mode is the basic debugging tool of R. Each of the following functions just provides an alternate way to enter the browser mode.
Once you fix the bug, you should resave your function a third time—this time without the `browser()` call. As long as the browser call is in there, R will pause each time you, or another function, calls `score`.
E.3 Break Points
----------------
RStudio’s break points provide a graphical way to add a `browser` statement to a function. To use them, open the script where you’ve defined a function. Then click to the left of the line number of the line of code in the function body where you’d like to add the browser statement. A hollow red dot will appear to show you where the break point will occur. Then run the script by clicking the Source button at the top of the Scripts pane. The hollow dot will turn into a solid red dot to show that the function has a break point (see Figure [E.5](debug.html#fig:break-point)).
R will treat the break point like a `browser` statement, going into browser mode when it encounters it. You can remove a break point by clicking on the red dot. The dot will disappear, and the break point will be removed.
Figure E.5: Break points provide the graphical equivalent of a browser statement.
Break points and `browser` provide a great way to debug functions that you have defined. But what if you want to debug a function that already exists in R? You can do that with the `debug` function.
E.4 debug
---------
You can “add” a browser call to the very start of a preexisting function with `debug`. To do this, run `debug` on the function. For example, you can run `debug` on `sample` with:
```
debug(sample)
```
Afterward, R will act as if there is a `browser()` statement in the first line of the function. Whenever R runs the function, it will immediately enter browser mode, allowing you to step through the function one line at a time. R will continue to behave this way until you “remove” the browser statement with `undebug`:
```
undebug(sample)
```
You can check whether a function is in “debugging” mode with `isdebugged`. This will return `TRUE` if you’ve ran `debug` on the function but have yet to run `undebug`:
```
isdebugged(sample)
## FALSE
```
If this is all too much of a hassle, you can do what I do and use `debugonce` instead of `debug`. R will enter browser mode the very next time it runs the function but will automatically undebug the function afterward. If you need to browse through the function again, you can just run `debugonce` on it a second time.
You can recreate `debugonce` in RStudio whenever an error occurs. “Rerun with debug” will appear in the grey error box beneath Show Traceback (Figure [E.1](debug.html#fig:show-traceback)). If you click this option, RStudio will rerun the command as if you had first run `debugonce` on it. R will immediately go into browser mode, allowing you to step through the code. The browser behavior will only occur on this run of the code. You do not need to worry about calling `undebug` when you are done.
E.5 trace
---------
You can add the browser statement further into the function, and not at the very start, with `trace`. `trace` takes the name of a function as a character string and then an R expression to insert into the function. You can also provide an `at` argument that tells `trace` at which line of the function to place the expression. So to insert a browser call at the fourth line of `sample`, you would run:
```
trace("sample", browser, at = 4)
```
You can use `trace` to insert other R functions (not just `browser`) into a function, but you may need to think of a clever reason for doing so. You can also run `trace` on a function without inserting any new code. R will prints `trace:<the function>` at the command line every time R runs the function. This is a great way to test a claim I made in [S3](s3.html#s3), that R calls `print` every time it displays something at the command line:
```
trace(print)
first
## trace: print(function () second())
## function() second()
head(deck)
## trace: print
## face suit value
## 1 king spades 13
## 2 queen spades 12
## 3 jack spades 11
## 4 ten spades 10
## 5 nine spades 9
## 6 eight spades 8
```
You can revert a function to normal after calling trace on it with `untrace`:
```
untrace(sample)
untrace(print)
```
E.6 recover
-----------
The `recover` function provides one final option for debugging. It combines the call stack of `traceback` with the browser mode of `browser`. You can use `recover` just like `browser`, by inserting it directly into a function’s body. Let’s demonstrate `recover` with the `fifth` function:
```
fifth <- function() recover()
```
When R runs `recover`, it will pause and display the call stack, but that’s not all. R gives you the option of opening a browser mode in *any* of the functions that appear in the call stack. Annoyingly, the call stack will be displayed upside down compared to `traceback`. The most recent function will be on the bottom, and the original function will be on the top:
```
first()
##
## Enter a frame number, or 0 to exit
##
## 1: first()
## 2: #1: second()
## 3: #1: third()
## 4: #1: fourth()
## 5: #1: fifth()
```
To enter a browser mode, type in the number next to the function in whose runtime environment you would like to browse. If you do not wish to browse any of the functions, type `0`:
```
3
## Selection: 3
## Called from: fourth()
## Browse[1]>
```
You can then proceed as normal. `recover` gives you a chance to inspect variables up and down your call stack and is a powerful tool for uncovering bugs. However, adding `recover` to the body of an R function can be cumbersome. Most R users use it as a global option for handling errors.
If you run the following code, R will automatically call `recover()` whenever an error occurs:
```
options(error = recover)
```
This behavior will last until you close your R session, or reverse the behavior by calling:
```
options(error = NULL)
```
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/hopr/debug.html |
E Debugging R Code
==================
This appendix refers to environments, the topic of [Environments](environments.html#environments-1), and uses examples from [Programs](programs.html#programs) and [S3](s3.html#s3). You should read through these chapters first to get the most out of this appendix.
R comes with a simple set of debugging tools that RStudio amplifies. You can use these tools to better understand code that produces an error or returns an unexpected result. Usually this will be your own code, but you can also examine the functions in R or one of its packages.
Debugging code can take as much creativity and insight as writing code. There is no guarantee that you will find a bug or be able to fix it when you do. However, you can help yourself by using R’s debugging tools. These include the `traceback`, `browser`, `debug`, `debugonce`, `trace`, and `recover` functions.
Using these tools is usually a two\-step process. First, you locate *where* an error occurred. Then you try to determine *why* it occurred. You can do the first step with R’s `traceback` function.
E.1 traceback
-------------
The `traceback` tool pinpoints the location of an error. Many R functions call other R functions, which call other functions, and so on. When an error occurs, it may not be clear which of these functions went wrong. Let’s consider an example. The following functions call one another, and the last function creates an error (you’ll see why in a second):
```
first <- function() second()
second <- function() third()
third <- function() fourth()
fourth <- function() fifth()
fifth <- function() bug()
```
When you run `first`, it will call `second`, which will call `third`, which will call `fourth`, which will call `fifth`, which will call `bug`, a function that does not exist. Here’s what that will look like at the command line:
```
first()
## Error in fifth() : could not find function "bug"
```
The error report tells us that the error occurred when R tried to run `fifth`. It also tells us the nature of the error (there is no function called `bug`). Here, it is obvious why R calls `fifth`, but it might not be so obvious why R calls a function when an error occurs in the wild.
You can see the path of functions that R called before it hit an error by typing *`traceback()`* at the command line. `traceback` will return a call stack, a list of the functions that R called in the order that it called them. The bottom function will be the command that you entered in the command line. The top function will be the function that caused the error:
```
traceback()
## 5: fifth() at #1
## 4: fourth() at #1
## 3: third() at #1
## 2: second() at #1
## 1: first()
```
`traceback` will always refer to the last error you encountered. If you would like to look at a less recent error, you will need to recreate it before running `traceback`.
How can this help you? First, `traceback` returns a list of suspects. One of these functions caused the error, and each function is more suspicious than the ones below it. Chances are that our bug came from `fifth` (it did), but it is also possible that an earlier function did something odd—like call `fifth` when it shouldn’t have.
Second, `traceback` can show you if R stepped off the path that you expected it to take. If this happened, look at the last function before things went wrong.
Third, `traceback` can reveal the frightening extent of infinite recursion errors. For example, if you change `fifth` so that it calls `second`, the functions will make a loop: `second` will call `third`, which will call `fourth`, which will call `fifth`, which will call `second` and start the loop over again. It is easier to do this sort of thing in practice than you might think:
```
fifth <- function() second()
```
When you call `first()`, R will start to run the functions. After awhile, it will notice that it is repeating itself and will return an error. `traceback` will show just what R was doing:
```
first()
## Error: evaluation nested too deeply: infinite recursion/options(expressions=)?
traceback()
## 5000: fourth() at #1
## 4999: third() at #1
## 4998: second() at #1
## 4997: fifth() at #1
## 4996: fourth() at #1
## 4995: third() at #1
## 4994: second() at #1
## 4993: fifth() at #1
## ...
```
Notice that there are 5,000 lines of output in this `traceback`. If you are using RStudio, you will not get to see the traceback of an infinite recursion error (I used the Mac GUI to get this output). RStudio represses the traceback for infinite recursion errors to prevent the large call stacks from pushing your console history out of R’s memory buffer. With RStudio, you will have to recognize the infinite recursion error by its error message. However, you can still see the imposing `traceback` by running things in a UNIX shell or the Windows or Mac GUIs.
RStudio makes it very easy to use `traceback`. You do not even need to type in the function name. Whenever an error occurs, RStudio will display it in a gray box with two options. The first is Show Traceback, shown in Figure [E.1](debug.html#fig:show-traceback).
Figure E.1: RStudio’s Show Traceback option.
If you click Show Traceback, RStudio will expand the gray box and display the `traceback` call stack, as in Figure [E.2](debug.html#fig:hide-traceback). The Show Traceback option will persist beside an error message in your console, even as you write new commands. This means that you can go back and look at the call stacks for all errors—not just the most recent error.
Imagine that you’ve used `traceback` to pinpoint a function that you think might cause a bug. Now what should you do? You should try to figure out what the function did to cause an error while it ran (if it did anything). You can examine how the function runs with `browser`.
Figure E.2: RStudio’s Traceback display.
E.2 browser
-----------
You can ask R to pause in the middle of running a function and give control back to you with `browser`. This will let you enter new commands at the command line. The active environment for these commands will not be the global environment (as usual); it will be the runtime environment of the function that you have paused. As a result, you can look at the objects that the function is using, look up their values with the same scoping rules that the function would use, and run code under the same conditions that the function would run it in. This arrangement provides the best chance for spotting the source of bugs in a function.
To use `browser`, add the call `browser()` to the body of a function and then resave the function. For example, if I wanted to pause in the middle of the `score` function from [Programs](programs.html#programs), I could add `browser()` to the body of `score` and then rerun the following code, which defines `score`:
```
score <- function (symbols) {
# identify case
same <- symbols[1] == symbols[2] && symbols[2] == symbols[3]
bars <- symbols %in% c("B", "BB", "BBB")
# get prize
if (same) {
payouts <- c("DD" = 100, "7" = 80, "BBB" = 40, "BB" = 25,
"B" = 10, "C" = 10, "0" = 0)
prize <- unname(payouts[symbols[1]])
} else if (all(bars)) {
prize <- 5
} else {
cherries <- sum(symbols == "C")
prize <- c(0, 2, 5)[cherries + 1]
}
browser()
# adjust for diamonds
diamonds <- sum(symbols == "DD")
prize * 2 ^ diamonds
}
```
Now whenever R runs `score`, it will come to the call `browser()`. You can see this with the `play` function from [Programs](programs.html#programs). If you don’t have `play` handy, you can access it by running this code:
```
get_symbols <- function() {
wheel <- c("DD", "7", "BBB", "BB", "B", "C", "0")
sample(wheel, size = 3, replace = TRUE,
prob = c(0.03, 0.03, 0.06, 0.1, 0.25, 0.01, 0.52))
}
play <- function() {
symbols <- get_symbols()
structure(score(symbols), symbols = symbols, class = "slots")
}
```
When you run `play`, `play` will call `get_symbols` and then `score`. As R works through `score`, it will come across the call to `browser` and run it. When R runs this call, several things will happen, as in Figure [E.3](debug.html#fig:browser). First, R will stop running `score`. Second, the command prompt will change to `browser[1]>` and R will give me back control; I can now type new commands in at the new command prompt. Third, three buttons will appear above the console pane: Next, Continue, and Stop. Fourth, RStudio will display the source code for `score` in the scripts pane, and it will highlight the line that contains `browser()`. Fifth, the environments tab will change. Instead of revealing the objects that are saved in the global environment, it will reveal the objects that are saved in the runtime environment of `score` (see [Environments](environments.html#environments-1) for an explanation of R’s environment system). Sixth, RStudio will open a new Traceback pane, which shows the call stack RStudio took to get to `browser`. The most recent function, `score`, will be highlighted.
I’m now in a new R mode, called *browser mode*. Browser mode is designed to help you uncover bugs, and the new display in RStudio is designed to help you navigate this mode.
Any command that you run in browser mode will be evaluated in the context of the runtime environment of the function that called `browser`. This will be the function that is highlighted in the new Traceback pane. Here, that function is `score`. So while we are in browser mode, the active environment will be `score`’s runtime environment. This lets you do two things.
Figure E.3: RStudio updates its display whenever you enter browser mode to help you navigate the mode.
First, you can inspect the objects that `score` uses. The updated Environments pane shows you which objects `score` has saved in its local environment. You can inspect any of them by typing their name at the browser prompt. This gives you a way to see the values of runtime variables that you normally would not be able to access. If a value looks clearly wrong, you may be close to finding a bug:
```
Browse[1]> symbols
## [1] "B" "B" "0"
Browse[1]> same
## [1] FALSE
```
Second, you can run code and see the same results that `score` would see. For example, you could run the remaining lines of the `score` function and see if they do anything unusual. You could run these lines by typing them into the command prompt, or you could use the three navigation buttons that now appear above the prompt, as in Figure [E.4](debug.html#fig:browser-buttons).
The first button, Next, will run the next line of code in `score`. The highlighted line in the scripts pane will advance by one line to show you your new location in the `score` function. If the next line begins a code chunk, like a `for` loop or an `if` tree, R will run the whole chunk and will highlight the whole chunk in the script window.
The second button, Continue, will run all of the remaining lines of `score` and then exit the browser mode.
The third button, Stop, will exit browser mode without running any more lines of `score`.
Figure E.4: You can navigate browser mode with the three buttons at the top of the console pane.
You can do the same things by typing the commands `n`, `c`, and `Q` into the browser prompt. This creates an annoyance: what if you want to look up an object named `n`, `c`, or `Q`? Typing in the object name will not work, R will either advance, continue, or quit the browser mode. Instead you will have to look these objects up with the commands `get("n")`, `get("c")`, and `get("Q")`. `cont` is a synonym for `c` in browser mode and `where` prints the call stack, so you’ll have to look up these objects with `get` as well.
Browser mode can help you see things from the perspective of your functions, but it cannot show you where the bug lies. However, browser mode can help you test hypotheses and investigate function behavior. This is usually all you need to spot and fix a bug. The browser mode is the basic debugging tool of R. Each of the following functions just provides an alternate way to enter the browser mode.
Once you fix the bug, you should resave your function a third time—this time without the `browser()` call. As long as the browser call is in there, R will pause each time you, or another function, calls `score`.
E.3 Break Points
----------------
RStudio’s break points provide a graphical way to add a `browser` statement to a function. To use them, open the script where you’ve defined a function. Then click to the left of the line number of the line of code in the function body where you’d like to add the browser statement. A hollow red dot will appear to show you where the break point will occur. Then run the script by clicking the Source button at the top of the Scripts pane. The hollow dot will turn into a solid red dot to show that the function has a break point (see Figure [E.5](debug.html#fig:break-point)).
R will treat the break point like a `browser` statement, going into browser mode when it encounters it. You can remove a break point by clicking on the red dot. The dot will disappear, and the break point will be removed.
Figure E.5: Break points provide the graphical equivalent of a browser statement.
Break points and `browser` provide a great way to debug functions that you have defined. But what if you want to debug a function that already exists in R? You can do that with the `debug` function.
E.4 debug
---------
You can “add” a browser call to the very start of a preexisting function with `debug`. To do this, run `debug` on the function. For example, you can run `debug` on `sample` with:
```
debug(sample)
```
Afterward, R will act as if there is a `browser()` statement in the first line of the function. Whenever R runs the function, it will immediately enter browser mode, allowing you to step through the function one line at a time. R will continue to behave this way until you “remove” the browser statement with `undebug`:
```
undebug(sample)
```
You can check whether a function is in “debugging” mode with `isdebugged`. This will return `TRUE` if you’ve run `debug` on the function but have yet to run `undebug`:
```
isdebugged(sample)
## FALSE
```
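For example, the full cycle looks like this (a quick sketch):
```
debug(sample)
isdebugged(sample)
## TRUE
undebug(sample)
isdebugged(sample)
## FALSE
```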
If this is all too much of a hassle, you can do what I do and use `debugonce` instead of `debug`. R will enter browser mode the very next time it runs the function but will automatically undebug the function afterward. If you need to browse through the function again, you can just run `debugonce` on it a second time.
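For instance, a minimal sketch of the `debugonce` workflow:
```
debugonce(sample)
sample(1:6, size = 1)   # enters browser mode for this call only
sample(1:6, size = 1)   # runs normally; no undebug needed
```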
RStudio lets you recreate the effect of `debugonce` whenever an error occurs. “Rerun with debug” will appear in the grey error box beneath Show Traceback (Figure [E.1](debug.html#fig:show-traceback)). If you click this option, RStudio will rerun the command as if you had first run `debugonce` on it. R will immediately go into browser mode, allowing you to step through the code. The browser behavior will only occur on this run of the code, so you do not need to call `undebug` when you are done.
E.5 trace
---------
You can add the browser statement further into the function, and not at the very start, with `trace`. `trace` takes the name of a function as a character string and then an R expression to insert into the function. You can also provide an `at` argument that tells `trace` at which line of the function to place the expression. So to insert a browser call at the fourth line of `sample`, you would run:
```
trace("sample", browser, at = 4)
```
You can use `trace` to insert other R functions (not just `browser`) into a function, but you may need to think of a clever reason for doing so. You can also run `trace` on a function without inserting any new code. R will then print `trace: <the function>` at the command line every time R runs the function. This is a great way to test a claim I made in [S3](s3.html#s3), that R calls `print` every time it displays something at the command line:
```
trace(print)
first
## trace: print(function () second())
## function() second()
head(deck)
## trace: print
## face suit value
## 1 king spades 13
## 2 queen spades 12
## 3 jack spades 11
## 4 ten spades 10
## 5 nine spades 9
## 6 eight spades 8
```
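As a hypothetical example of inserting something other than `browser`, you could log each call to `sample` with a `message` instead of pausing:
```
# hypothetical tracer: log each call to sample instead of pausing in browser mode
trace("sample", quote(message("sample() was just called")))
sample(1:6, size = 1)   # R notes the tracing on entry, shows the message, then returns a value as usual
```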
You can revert a function to normal after calling `trace` on it with `untrace`:
```
untrace(sample)
untrace(print)
```
E.6 recover
-----------
The `recover` function provides one final option for debugging. It combines the call stack of `traceback` with the browser mode of `browser`. You can use `recover` just like `browser`, by inserting it directly into a function’s body. Let’s demonstrate `recover` with the `fifth` function:
```
fifth <- function() recover()
```
When R runs `recover`, it will pause and display the call stack, but that’s not all. R gives you the option of opening a browser mode in *any* of the functions that appear in the call stack. Annoyingly, the call stack will be displayed upside down compared to `traceback`. The most recent function will be on the bottom, and the original function will be on the top:
```
first()
##
## Enter a frame number, or 0 to exit
##
## 1: first()
## 2: #1: second()
## 3: #1: third()
## 4: #1: fourth()
## 5: #1: fifth()
```
To enter a browser mode, type in the number next to the function in whose runtime environment you would like to browse. If you do not wish to browse any of the functions, type `0`:
```
3
## Selection: 3
## Called from: fourth()
## Browse[1]>
```
You can then proceed as normal. `recover` gives you a chance to inspect variables up and down your call stack and is a powerful tool for uncovering bugs. However, adding `recover` to the body of an R function can be cumbersome. Most R users use it as a global option for handling errors.
If you run the following code, R will automatically call `recover()` whenever an error occurs:
```
options(error = recover)
```
This behavior will last until you close your R session, or until you reverse it by calling:
```
options(error = NULL)
```
| R Programming |
b-rodrigues.github.io | https://b-rodrigues.github.io/modern_R/index.html |
Preface
=======
Note to the reader
------------------
I have been working on this on and off for the past 4 years or so. In 2022, I updated the
contents of the book to reflect changes introduced with R 4\.1 and in several packages (especially
those from the `{tidyverse}`). I have also cut some content that I think is not that useful,
especially in later chapters.
This book is still being written. Chapters 1 to 8 are almost ready, but more content is being added
(especially to chapter 8\). Chapters 9 and 10 are empty for now. Some exercises might be in the wrong
place, and more are coming.
You can purchase an ebook version of this book on [leanpub](https://leanpub.com/modern_tidyverse).
The version on leanpub is quite out of date, so if you buy it, it’s really just to send some money
my way, so many thanks for that! You can also support me by [buying me a
coffee](https://www.buymeacoffee.com/brodriguesco) or
[paypal.me](https://www.paypal.me/brodriguesco).
What is R?
----------
Read R’s official answer to this question
[here](https://cran.r-project.org/doc/FAQ/R-FAQ.html#What-is-R_003f). To make it short: R is a
multi\-paradigm (procedural, imperative, object\-oriented and functional)[1](#fn1) programming language that
focuses on applications in *statistics*. By *statistics* I mean any field that uses statistics such
as official statistics, economics, finance, data science, machine learning, etc. For the sake of
simplicity, I will use the word “statistics” as a general term that encompasses all these fields and
disciplines for the remainder of this book.
Who is this book for?
---------------------
This book can be useful to different audiences. If you have never used R in your life, and want
to start, start with Chapter 1 of this book. Chapters 1 to 3 cover the very basics, and the material
should be easy to follow up to Chapter 7\.
Starting with Chapter 7, it gets more technical, and will be harder to follow. But I suggest
you keep on going, and do not hesitate to contact me for help if you struggle! Chapter 7
is also where you can start if you are already familiar with R **and** the `{tidyverse}`, but not
functional programming. If you are familiar with R but not the `{tidyverse}` (or have no clue
what the `{tidyverse}` is), then you can start with Chapter 4\. If you are familiar with R, the
`{tidyverse}` and functional programming, you might still be interested in this book, especially
Chapters 9 and 10, which deal with package development and further advanced topics, respectively.
Why this book?
--------------
This book is first and foremost for myself. This book is the result of years of using and teaching
R at university and then at my jobs. During my university time, I wrote some notes to help me
teach R and which I shared with my students. These are still the basis of Chapter 2\. Then, once
I had left university, and continued using R at my first “real” job, I wrote another book that
dealt mostly with package development and functional programming. That book is now merged into this
one and is the basis of Chapters 9 and 10\. During these years at my first
job, I was also tasked with teaching R. By that time, I was already quite familiar with the
`{tidyverse}` so I wrote a lot of notes that were internal and adapted for the audience of my
first job. These are now the basis of Chapters 3 to 8\.
Then, during all these years, I kept blogging about R, and reading blogs and further books. All
this knowledge is condensed here, so if you are familiar with my blog, you’ll definitely recognize
a lot of my blog posts in here. This book is first and foremost for me, because I need to write
all of this down in a central place; and because my target audience is myself, this book is free. If
you find it useful and are in the mood to buy me a coffee, you can, but if this book is not
useful to you, no harm done (unless you paid for it before reading it, in which case, I am sorry
to have wasted your time). But I am quite sure you’ll find some of the things written here useful,
regardless of your current experience level with R.
Why *modern* R?
---------------
*Modern* R instead of “just” R because we are going to learn how to use modern packages (mostly
those from the [tidyverse](https://www.tidyverse.org/)) and concepts, such as functional
programming (which is quite an old concept actually, but one that came into fashion recently). R is
derived from S, which is a programming language that has roots in FORTRAN and other languages too.
If you learned R at university, you’ve probably learned to use it as you would have used FORTRAN;
very long scripts where data are represented as matrices and where row\-wise (or column\-wise)
operations are implemented with `for` loops. There’s nothing wrong with that, mind you, but R
was also influenced by Scheme and Common Lisp, which are functional programming languages.
In my opinion, functional programming is a programming paradigm that works really well when dealing
with statistical problems. This is because programming in a functional style is just like
writing math. For instance, suppose you want to sum all the elements of a vector. In mathematical
notation, you would write something like:
\\[
\\sum_{i = 1}^{100} x_{i}
\\]
where \\(x\\) is a vector of length 100\. Solving this using a loop would look something like this:
```
res <- 0
for (i in 1:length(x)) {
  res <- x[i] + res
}
```
This does not look like the math notation at all! You have to define a variable that will hold
the result outside of the loop, and then you have to define `res` as something plus `res` inside
the body of the loop. This is really unnatural. The functional programming approach is much
easier:
```
Reduce(`+`, x)
```
We will learn about `Reduce()` later (to be more precise, we will learn about `purrr::reduce()`,
the “tidy” version of `Reduce()`), but already you see that the notation looks a lot more
like the mathematical notation.
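For a quick, concrete check, here is the same computation with made-up data (a sketch, taking `x` to be the integers 1 to 100):
```
x <- 1:100
Reduce(`+`, x)   # folds + over x: ((1 + 2) + 3) + ...
## [1] 5050
sum(x)           # the built-in equivalent
## [1] 5050
```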
At its core, functional programming uses functions, and functions are so\-called *first
class* objects in R, which means that there is nothing special about them… you can pass them to
other functions, create functions that return functions and do any kind of operation on them just as
with any other object. This means that functions in R are extremely powerful and flexible tools.
In the first part of the book, we are going to use functions that are already available in R, and
then use those available in packages, mostly those from the `tidyverse`. The `tidyverse` is a
collection of packages developed by [Hadley Wickham](http://hadley.nz/), and several of his colleagues
at RStudio, Inc. By using the packages from the `tidyverse` and R’s built\-in functional programming
capabilities, we can write code that is faster and easier to explain to colleagues, and also easier
to maintain. This also means that you might have to change your expectations and what you know
already from R, if you learned it at university but haven’t touched it in a long time. For example,
*for* and *while* loops are relegated to Chapter 8\. This does not mean that you will have to wait
eight chapters to learn how to repeat instructions *N* times, but rather that *for* and *while* loops
are tools best reserved for the specific situations that will be discussed at that point.
In the second part of the book, we are going to move from using R to solve statistical problems to
developing with R. We are going to learn about creating your own package. If you do not know what
packages are, don’t worry, this will be discussed just below.
What is RStudio?
----------------
RStudio is a modern IDE that makes writing R code easier. The first thing we are going to learn is
how to use it.
R and RStudio are both open source: this means that the source code is freely available on
the internet, and contributions by anyone are welcome and integrated, provided they are meaningful
and useful.
What to expect from this book?
------------------------------
The idea of Chapters 1 to 7 is to make you efficient with R as quickly as possible, especially if
you already have prior programming knowledge. Starting with Chapter 8 you will learn more advanced
topics, especially programming with R. R is a programming language, and you can’t write
“programming language” without “language”. And just as you wouldn’t expect to learn
French, Portuguese or Icelandic by reading a single book, you shouldn’t expect to become fluent in R
by reading a single book, not even by reading 10 books. Programming is an art which requires a lot of
practice. [Teach yourself programming in 10 years](http://www.norvig.com/21-days.html) is a blog
post written by Peter Norvig which explains that just as with any craft, mastering programming
takes time. And even if you don’t need or want to become an expert in R, if you wish to use R
effectively and in a way that ultimately saves you time, you need to have some fluency in it, and
this only comes by continuing to learn about the language, and most importantly practicing. If you
keep using R every day, you’ll definitely become very fluent. To stay informed about developments of
the language, and the latest news, I advise you to read blogs, especially
[R\-bloggers](https://www.r-bloggers.com/) which aggregates blog posts by more than 750 blogs
discussing R.
So one thing you can expect from this book is that it is not the only one you should read.
Prerequisites
-------------
R and RStudio are the two main pieces of software that we are going to use. R is the programming
language and RStudio is a modern IDE for it. You can use R without RStudio; but you cannot use
RStudio without R.
If you wish to install R and RStudio at home to follow the examples in this book, you can: both
pieces of software are available free of charge (paid options for RStudio exist, for companies
that need technical support). Installation is simple, but operating system dependent. To download
and install R for Windows, follow [this link](https://cloud.r-project.org/bin/windows/base/).
For macOS, follow [this one](https://cloud.r-project.org/bin/macosx/). If you run a GNU\+Linux
distribution, you can install R using the system’s package manager. If you’re running Ubuntu, you
might want to take a look at [r2u](https://github.com/eddelbuettel/r2u), which provides very
fast installation of packages, full integration with `apt` (so dependencies get solved automatically)
and covers the entirety of CRAN.
For RStudio, look for your operating system [here](https://www.rstudio.com/products/rstudio/download/#download).
What are packages?
------------------
There is one more step; we are going to install some packages. Packages are additional pieces of
code that can be installed from within R with the following function: `install.packages()`. These
packages extend R’s capabilities significantly, and are probably one of the main reasons R is so
popular. As of November 2018, R has over 13000 packages.
To install the packages we need, first open RStudio and then copy and paste this line in the console:
```
install.packages(c("tidyverse", "rsample", "recipes", "blogdown" ,"yardstick", "parsnip", "plm", "pwt9",
"checkpoint", "Ecdat", "ggthemes", "ggfortify", "margins", "janitor", "rio", "stopwords",
"colourpicker", "glmnet", "lhs", "mlrMBO", "mlbench", "ranger"))
```
or go to the **Packages** pane and then click on *Install*:
The author
----------
My name is Bruno Rodrigues and I program almost exclusively in R and have been teaching some R
courses for a few years now. I first started teaching for students at the University of Strasbourg
while working on my PhD. I hold a PhD in economics, with a focus on quantitative methods.
I’m currently head of the statistics department of the Ministry of Higher education and Research
in Luxembourg, and before that worked as a manager in the data science team of PWC Luxembourg.
This book is an adaptation of notes I’ve used in the past during my time as a teacher, but also
a lot of things I’ve learned about R since I left academia.
In my free time I like cooking, working out and [blogging](https://www.brodrigues.co), while listening to
[Fip](http://www.fipradio.fr/player) or
[Chillsky Radio](https://chillsky.com/listen/).
I also like to get my butt handed to me by playing roguelikes
such as [NetHack](http://nethack.wikia.com/wiki/NetHack), for which I wrote a
[package](https://github.com/b-rodrigues/nethack) that contains functions to analyze the data that
is saved on your computer after you win or lose (it will be “lose” 99% of the time) the game.
You can follow me on [twitter](https://www.twitter.com/brodriguesco), I tweet mostly about R or
what’s happening in Luxembourg.
| R Programming |
b-rodrigues.github.io | https://b-rodrigues.github.io/modern_R/getting-to-know-rstudio.html |
Chapter 1 Getting to know RStudio
=================================
RStudio is a company that develops and maintains several products. Their best\-known product is
an IDE (Integrated development environment) for the R programming language, also called RStudio.
You can install RStudio by visiting this [link](https://www.rstudio.com/products/rstudio/download/).
There is also a server version that can be used to have a centralized version of R within, say, a
company. RStudio, the company, also develops [Shiny](https://shiny.rstudio.com/), a package to
create full\-fledged web\-apps. I am not going to cover Shiny in this book, since there’s already
[a lot](http://shiny.rstudio.com/tutorial/) of material that you can learn from.
Once you have installed RStudio, launch it and let’s go through the interface together.
1\.1 Panes
----------
RStudio is divided into different panes. Each pane has a specific function. The gif below shows
some of these panes:
Take some time to look around and see what each pane shows you. Some panes are empty; for example the *Plots*
pane or the *Viewer* pane. *Plots* shows you the plots you make. You can browse the plots and save
them. We will see this in more detail in a later chapter. *Viewer* shows you previews of documents
that you generate with R. More on this later.
1\.2 Console
------------
The *Console* pane is where you can execute R code. Write the following in the console:
```
2 + 3
```
and you’ll get the answer, `5`. However, do not write a lot of lines in the console. It is better
to write your code inside a script. Output is also shown inside the console.
1\.3 Scripts
------------
Look at the gif below:
In this gif, we see the user creating a new R script. R scripts are simple text files that hold R
code. Think of `.do` files in STATA or `.c` files for C. R scripts have the extension `.r` or `.R`.
It is possible to create a lot of other files. We’ll take a look at `R Markdown` files in Chapter 11\.
### 1\.3\.1 The help pane
The *Help* pane allows you to consult documentation for functions or packages. The gif below shows
how it works:
You can also access help using the following syntax: `?lm`. This will bring up the documentation for
the function `lm()`. You can also type `??lm`, which will look for the string `lm` in every installed package.
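A minimal sketch of both:
```
?lm    # opens the documentation for lm()
??lm   # searches all installed packages for the string "lm"
```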
### 1\.3\.2 The Environment pane
The *Environment* pane shows every object created in the current session. It is especially useful
if you have defined lists or have loaded data into R as it makes it easy to explore these more
complex objects.
1\.4 Options
------------
It is also possible to customize RStudio’s look and feel:
Take some time to go through the options.
1\.5 Keyboard shortcuts
-----------------------
It is a good idea to familiarize yourself with at least some keyboard shortcuts. This is more
convenient than having to move the mouse around:
If there is only one keyboard shortcut you need to know, it’s `Ctrl-Enter`, which executes a line of code
from your script. However, these other shortcuts are also worth knowing:
* `CTRL-ALT-R`: run entire script
* `CTRL-ALT-UP or DOWN`: make cursor taller or shorter, allowing you to edit multiple lines at the same time
* `CTRL-F`: Search and replace
* `ALT-UP or DOWN`: Move line up or down
* `CTRL-SHIFT-C`: Comment/uncomment line
* `ALT-SHIFT-K`: Bring up the list of keyboard shortcuts
* `CTRL-SHIFT-M`: Insert the pipe operator (`%>%`, more on this later)
* `CTRL-S`: Save script
These are just a few keyboard shortcuts that I personally find useful. However, I strongly advise you
to learn and use whatever shortcuts are useful and feel natural to you!
1\.6 Projects
-------------
One of the best features of RStudio is projects. Creating a project is simple; the gif below
shows how you can create a project and how you can switch between projects.
Projects make a lot of things easier, such as managing paths. More on this in the chapter about
reading data. Another useful feature of projects is that the scripts you open in project A will
stay open even if you switch to another project B and then switch back to project A again.
You can also use version control (with git) inside a project. Version control is very useful, but
I won’t discuss it here. You can find a lot of resources online to get you started with git.
1\.7 History
------------
The *History* pane saves all the previous lines you executed. You can then select these lines and
send them back to the console or the script.
1\.8 Plots
----------
All the plots you make during a session are visible in the *Plots* pane. From there, you can
export them in different formats.
The plots shown in the gif are made using basic R functions. Later, we will learn how to make nicer
looking plots using the package `ggplot2`.
1\.9 Addins
-----------
Some packages install addins, which are accessible through the addins button:
These addins make it easier to use some functions and you can read more about them [here](https://rstudio.github.io/rstudioaddins/#overview).
My favorite addins are the ones you get when installing the `{datapasta}` package. Read more about
it [here](https://github.com/MilesMcBain/datapasta).
There are other panes that I will not discuss here, but you will naturally discover their use as you
go. For example, we will discuss the *Build* pane in Chapter 11\.
1\.10 Packages
--------------
You can think of packages as addons that extend R’s core functionality. You can browse all available
packages on [CRAN](https://cloud.r-project.org/). To make it easier to find what you might be
interested in, you can also browse the [CRAN Task Views](https://cloud.r-project.org/web/views/).
Each package has a landing page that summarises its dependencies, version number etc. For example,
for the `dplyr` package: [https://cran.r\-project.org/web/packages/dplyr/index.html](https://cran.r-project.org/web/packages/dplyr/index.html).
Take a look at the *Downloads* section, and especially at the Reference Manual and Vignettes:
Vignettes are valuable documents; inside vignettes, the purpose of the package is explained in
plain English, usually with accompanying examples. The reference manuals list the available functions
inside the packages. You can also find vignettes from within Rstudio:
Go to the *Packages* pane and click on the package you’re interested in. Then you can consult the
help for the functions that come with the package as well as the package’s vignettes.
Once you have installed a package, you have to load it before you can use it. To load packages, you use the
`library()` function:
```
library(dplyr)
library(janitor)
# and so on...
```
If you only need to use a single function once, you don’t need to load an entire package. You can
write the following:
```
dplyr::full_join(A, B)
```
Using the `::` operator, you can access functions from packages without having to load the whole
package beforehand.
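For example (a quick sketch, assuming these packages are installed):
```
stringr::str_to_upper("hello")   # one stringr function, without library(stringr)
lubridate::today()               # same idea with lubridate
```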
It is possible and easy to create your own packages. This is useful if you have to write a lot of
functions that you use daily. We will learn about that in Chapter 10\.
1\.11 Exercises
---------------
### Exercise 1
Change the look and feel of RStudio to suit your tastes! I personally like to move the console
to the right and use a dark theme. Take 5 minutes or so to customize it and browse through all the options.
| R Programming |
b-rodrigues.github.io | https://b-rodrigues.github.io/modern_R/objects-their-classes-and-types-and-useful-r-functions-to-get-you-started.html |
Chapter 2 Objects, their classes and types, and useful R functions to get you started
=====================================================================================
All objects in R have a given *type*. You already know most of them, as these types are also used
in mathematics. Integers, floating point numbers (floats), matrices, etc., are all objects you
are already familiar with. But R has other, perhaps lesser\-known data types (that you can find in a
lot of other programming languages) that you need to become familiar with. First, though, we need to
learn how to assign a value to a variable. This can be done in two ways:
```
a <- 3
```
or
```
a = 3
```
In very practical terms, there is no difference between the two. I prefer using `<-` for assigning
values to variables and reserving `=` for passing arguments to functions, for example:
```
spam <- mean(x = c(1,2,3))
```
I think this is less confusing than:
```
spam = mean(x = c(1,2,3))
```
but, as I explained above, you can use whichever you feel most comfortable with.
2\.1 The `numeric` class
------------------------
To define single numbers, you can do the following:
```
a <- 3
```
The `class()` function allows you to check the class of an object:
```
class(a)
```
```
## [1] "numeric"
```
Decimals are defined with the character `.`:
```
a <- 3.14
```
R also supports integers. If you find yourself in a situation where you explicitly need an integer
and not a floating point number, you can use the following:
```
a <- as.integer(3)
class(a)
```
```
## [1] "integer"
```
The `as.integer()` function is very useful, because it converts its argument into an integer. There
is a whole family of `as.*()` functions. To convert `a` into a floating point number again:
```
class(as.numeric(a))
```
```
## [1] "numeric"
```
There is also `is.numeric()` which tests whether a number is of the `numeric` class:
```
is.numeric(a)
```
```
## [1] TRUE
```
It is also possible to create an integer using `L`:
```
a <- 5L
class(a)
```
```
## [1] "integer"
```
Another way to convert this integer back to a floating point number is to use `as.double()` instead of
`as.numeric()`:
```
class(as.double(a))
```
```
## [1] "numeric"
```
The functions prefixed with `is.` and `as.` are quite useful; there is one for each of the supported types in R, such
as `as.character()`/`is.character()`, `as.factor()`/`is.factor()`, and so on.
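For instance, here is a minimal sketch of this family in action (the return values are shown as comments):

```
is.character(3.14) # FALSE
as.character(3.14) # "3.14"
is.logical(TRUE) # TRUE
as.logical("TRUE") # TRUE
```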
2\.2 The `character` class
--------------------------
Use `" "` to define characters (called strings in other programming languages):
```
a <- "this is a string"
```
```
class(a)
```
```
## [1] "character"
```
To convert something to a character you can use the `as.character()` function:
```
a <- 4.392
class(a)
```
```
## [1] "numeric"
```
Now let’s convert it:
```
class(as.character(a))
```
```
## [1] "character"
```
It is also possible to convert a character to a numeric:
```
a <- "4.392"
class(a)
```
```
## [1] "character"
```
```
class(as.numeric(a))
```
```
## [1] "numeric"
```
But this only works if it makes sense:
```
a <- "this won't work, chief"
class(a)
```
```
## [1] "character"
```
```
as.numeric(a)
```
```
## Warning: NAs introduced by coercion
```
```
## [1] NA
```
A very nice package to work with characters is `{stringr}`, which is also part of the `{tidyverse}`.
2\.3 The `factor` class
-----------------------
Factors look like characters, but are very different. They are the representation of categorical
variables. A `{tidyverse}` package to work with factors is `{forcats}`. You would rarely use
factor variables outside of datasets, so for now, it is enough to know that this class exists.
We are going to learn more about factor variables in Chapter 4, by using the `{forcats}` package.
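Still, as a small illustration (with a made\-up categorical variable):

```
eye_color <- factor(c("blue", "brown", "blue"))
levels(eye_color)
```

```
## [1] "blue"  "brown"
```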
2\.4 The `Date` class
---------------------
Dates also look like characters, but are very different too:
```
as.Date("2019/03/19")
```
```
## [1] "2019-03-19"
```
```
class(as.Date("2019/03/19"))
```
```
## [1] "Date"
```
Manipulating dates and times can be tricky, but thankfully there's a `{tidyverse}` package for that,
called `{lubridate}`. We are going to go over this package in Chapter 4\.
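One nice consequence of the `Date` class is that it supports date arithmetic, which plain characters do not:

```
as.Date("2019/03/19") + 7
```

```
## [1] "2019-03-26"
```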
2\.5 The `logical` class
------------------------
This is the class of predicates, expressions that evaluate to *true* or *false*. For example, if you type:
```
4 > 3
```
```
## [1] TRUE
```
R returns `TRUE`, which is an object of class `logical`:
```
k <- 4 > 3
class(k)
```
```
## [1] "logical"
```
In other programming languages, `logical`s are often called `bool`s. A `logical` variable can only have
two values, either `TRUE` or `FALSE`. You can test the truthiness of a variable with `isTRUE()`:
```
k <- 4 > 3
isTRUE(k)
```
```
## [1] TRUE
```
How can you test if a variable is false? Since R 3.5.0, base R includes an `isFALSE()` function; another
way to do it is to negate `isTRUE()`:
```
k <- 4 > 3
!isTRUE(k)
```
```
## [1] FALSE
```
The `!` operator indicates negation, so the above expression could be translated as *is k not TRUE?*.
There are other operators for boolean algebra, namely `&, &&, |, ||`. `&` means *and* and `|` stands for *or*.
You might be wondering what the difference between `&` and `&&` is, or between `|` and `||`. `&` and
`|` work on vectors, doing pairwise comparisons:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
one & two
```
```
## [1] FALSE FALSE TRUE FALSE
```
Compare this to the `&&` operator:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
one && two
```
```
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
```
```
## [1] FALSE
```
The `&&` and `||` operators only compare the first element of the vectors and stop as soon as the return
value can be safely determined. This is called short\-circuiting. Consider the following:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
three <- c(TRUE, TRUE, FALSE, FALSE)
one && two && three
```
```
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
```
```
## [1] FALSE
```
```
one || two || three
```
```
## Warning in one || two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
```
```
## [1] TRUE
```
The `||` operator stops as soon as it evaluates to `TRUE`, whereas `&&` stops as soon as it evaluates to `FALSE`.
Personally, I rarely use `||` or `&&` because I get confused. I find using `|` or `&` in combination with the
`all()` or `any()` functions much more useful:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
any(one & two)
```
```
## [1] TRUE
```
```
all(one & two)
```
```
## [1] FALSE
```
`any()` checks whether any of the vector’s elements are `TRUE` and `all()` checks if all elements of the vector are
`TRUE`.
As a final note, you should know that it is possible to use `T` for `TRUE` and `F` for `FALSE`, but I
would advise against doing this: it is not very explicit, and unlike the reserved words `TRUE` and
`FALSE`, `T` and `F` are ordinary variables that can be reassigned.
2\.6 Vectors and matrices
-------------------------
You can create a vector in different ways. But first of all, it is important to understand that a
vector in most programming languages is nothing more than a list of things. These things can be
numbers (either integers or floats), strings, or even other vectors. A vector in R can only contain elements of one
single type. This is not the case for a list, which is much more flexible. We will talk about lists shortly, but
let’s first focus on vectors and matrices.
### 2\.6\.1 The `c()` function
A very important function that allows you to build a vector is `c()`:
```
a <- c(1,2,3,4,5)
```
This creates a vector with elements 1, 2, 3, 4, 5\. If you check its class:
```
class(a)
```
```
## [1] "numeric"
```
This can be confusing: you were probably expecting `a` to be of class *vector* or
something similar. This is not the case if you use `c()` to create the vector, because `c()`
doesn't build a vector in the mathematical sense, but a so\-called atomic vector.
Checking its dimension:
```
dim(a)
```
```
## NULL
```
returns `NULL` because an atomic vector doesn’t have a dimension.
If you want to create a true vector, you need to use `cbind()` or `rbind()`.
But before continuing, be aware that atomic vectors can only contain elements of the same type:
```
c(1, 2, "3")
```
```
## [1] "1" "2" "3"
```
Because “3” is a character, all the other values get implicitly converted to characters. You have
to be very careful about this: if you use atomic vectors in your programming, make
absolutely sure that no characters, logicals, or anything else silently convert your atomic
vector to something you were not expecting. A quick check for this is sketched below.
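A minimal defensive sketch: checking the type of the vector before computing with it catches this kind of silent coercion early.

```
x <- c(1, 2, "3")
is.numeric(x)
```

```
## [1] FALSE
```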
### 2\.6\.2 `cbind()` and `rbind()`
You can create a *true* vector with `cbind()`:
```
a <- cbind(1, 2, 3, 4, 5)
```
Check its class now:
```
class(a)
```
```
## [1] "matrix" "array"
```
This is exactly what we expected. Let’s check its dimension:
```
dim(a)
```
```
## [1] 1 5
```
This returns the dimension of `a` using the LICO notation (number of LInes first, then the number of COlumns).
It is also possible to bind vectors together to create a matrix.
```
b <- cbind(6,7,8,9,10)
```
Now let’s put vector `a` and `b` into a matrix called `matrix_c` using `rbind()`.
`rbind()` functions the same way as `cbind()` but glues the vectors together by rows and not by columns.
```
matrix_c <- rbind(a,b)
print(matrix_c)
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 2 3 4 5
## [2,] 6 7 8 9 10
```
### 2\.6\.3 The `matrix` class
R also has support for matrices. For example, you can create a matrix of dimension (5,5\) filled
with 0’s with the `matrix()` function:
```
matrix_a <- matrix(0, nrow = 5, ncol = 5)
```
If you want to create the following matrix:
\\\[
B \= \\left(
\\begin{array}{ccc}
2 \& 4 \& 3 \\\\
1 \& 5 \& 7
\\end{array} \\right)
\\]
you would do it like this:
```
B <- matrix(c(2, 4, 3, 1, 5, 7), nrow = 2, byrow = TRUE)
```
The option `byrow = TRUE` means that the rows of the matrix will be filled first.
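For comparison, with the default `byrow = FALSE` the same data fills the matrix column by column:

```
matrix(c(2, 4, 3, 1, 5, 7), nrow = 2)
```

```
##      [,1] [,2] [,3]
## [1,]    2    3    5
## [2,]    4    1    7
```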
You can access individual elements of `matrix_a` like so:
```
matrix_a[2, 3]
```
```
## [1] 0
```
and R returns its value, 0\. We can assign a new value to this element if we want. Try:
```
matrix_a[2, 3] <- 7
```
and now take a look at `matrix_a` again.
```
print(matrix_a)
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 0 0 0 0
## [2,] 0 0 7 0 0
## [3,] 0 0 0 0 0
## [4,] 0 0 0 0 0
## [5,] 0 0 0 0 0
```
Recall our vector `b`:
```
b <- cbind(6,7,8,9,10)
```
To access its third element, you can simply write:
```
b[3]
```
```
## [1] 8
```
I have heard many people praising R for being a matrix based language. Matrices are indeed useful,
and statisticians are used to working with them. However, I very rarely use matrices in my
day to day work, and prefer an approach based on data frames (which will be discussed below). This
is because working with data frames makes it easier to use R’s advanced functional programming
language capabilities, and this is where R really shines in my opinion. Working with matrices
almost automatically implies using loops and all the iterative programming techniques, *à la Fortran*,
which I personally believe are ill\-suited for interactive statistical programming (as discussed in
the introduction).
2\.7 The `list` class
---------------------
The `list` class is a very flexible class, and thus, very useful. You can put anything inside a list,
such as numbers:
```
list1 <- list(3, 2)
```
or vectors constructed with `c()`:
```
list2 <- list(c(1, 2), c(3, 4))
```
you can also put objects of different classes in the same list:
```
list3 <- list(3, c(1, 2), "lists are amazing!")
```
and of course create lists of lists:
```
my_lists <- list(list1, list2, list3)
```
To check the contents of a list, you can use the structure function `str()`:
```
str(my_lists)
```
```
## List of 3
## $ :List of 2
## ..$ : num 3
## ..$ : num 2
## $ :List of 2
## ..$ : num [1:2] 1 2
## ..$ : num [1:2] 3 4
## $ :List of 3
## ..$ : num 3
## ..$ : num [1:2] 1 2
## ..$ : chr "lists are amazing!"
```
or you can use RStudio’s *Environment* pane:
You can also create named lists:
```
list4 <- list("name_1" = 2, "name_2" = 8, "name_3" = "this is a named list")
```
and you can access the elements in two ways:
```
list4[[1]]
```
```
## [1] 2
```
or, for named lists:
```
list4$name_3
```
```
## [1] "this is a named list"
```
Take note of the `$` operator, because it is going to be quite useful for `data.frame`s as well,
which we are going to get to know in the next section.
Lists are used extensively because they are so flexible. You can build lists of datasets and apply
functions to all the datasets at once, build lists of models, lists of plots, etc… In the later
chapters we are going to learn all about them. Lists are central objects in a functional programming
workflow for interactive statistical analysis.
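As a small taste of that workflow, here is a sketch that applies one function to every dataset in a list (using two datasets built into R):

```
datasets <- list("mtcars" = mtcars, "iris" = iris)
lapply(datasets, nrow)
```

```
## $mtcars
## [1] 32
## 
## $iris
## [1] 150
```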
2\.8 The `data.frame` and `tibble` classes
------------------------------------------
In the next chapter we are going to learn how to import datasets into R. Once you import data, the
resulting object is either a `data.frame` or a `tibble` depending on which package you used to
import the data. `tibble`s extend `data.frame`s so if you know about `data.frame` objects already,
working with `tibble`s will be very easy. `tibble`s have a better `print()` method, and some other
niceties.
However, I want to stress that these objects are central to R and are thus very important; they are
actually special cases of lists, discussed above. There are different ways to print a `data.frame` or
a `tibble` if you wish to inspect it. You can use `View(my_data)` to show the `my_data` `data.frame`
in the *View* pane of RStudio:
You can also use the `str()` function:
```
str(my_data)
```
And if you need to access an individual column, you can use the `$` sign, same as for a list:
```
my_data$col1
```
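Since `my_data` above is only a placeholder, here is a self\-contained example with made\-up data:

```
my_data <- data.frame(col1 = c(1, 2, 3), col2 = c("a", "b", "c"))
my_data$col1
```

```
## [1] 1 2 3
```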
2\.9 Formulas
-------------
We will learn more about formulas later, but because they are important objects, it is useful to
know about them early on. A formula is defined in the following way:
```
my_formula <- ~x
class(my_formula)
```
```
## [1] "formula"
```
Formula objects are defined using the `~` symbol. Formulas are useful to define statistical models,
for example for a linear regression:
```
lm(y ~ x)
```
or also to define anonymous functions, but more on this later.
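For a concrete, runnable example, here is a formula used to fit a model on the built\-in `mtcars` data; this is the model that gets summarised in the next section:

```
my_model <- lm(mpg ~ hp, data = mtcars)
```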
2\.10 Models
------------
A statistical model is an object like any other in R:
Here, I already have a model that I ran on some test data:
```
class(my_model)
```
```
## [1] "lm"
```
`my_model` is an object of class `lm`, for *linear model*. You can apply different functions to a model object:
```
summary(my_model)
```
```
##
## Call:
## lm(formula = mpg ~ hp, data = mtcars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.7121 -2.1122 -0.8854 1.5819 8.2360
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 30.09886 1.63392 18.421 < 2e-16 ***
## hp -0.06823 0.01012 -6.742 1.79e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.863 on 30 degrees of freedom
## Multiple R-squared: 0.6024, Adjusted R-squared: 0.5892
## F-statistic: 45.46 on 1 and 30 DF, p-value: 1.788e-07
```
This class will be explored in later chapters.
2\.11 NULL, NA and NaN
----------------------
The `NULL`, `NA` and `NaN` classes are pretty special. `NULL` is returned when the result of a function is undetermined.
For example, consider `list4`:
```
list4
```
```
## $name_1
## [1] 2
##
## $name_2
## [1] 8
##
## $name_3
## [1] "this is a named list"
```
if you try to access an element that does not exist, such as `d`, you will get `NULL` back:
```
list4$d
```
```
## NULL
```
`NaN` means “Not a Number” and is returned when a function returns something that is not a number:
```
sqrt(-1)
```
```
## Warning in sqrt(-1): NaNs produced
```
```
## [1] NaN
```
or:
```
0/0
```
```
## [1] NaN
```
Basically, results of numerical computations that cannot be represented as floating point numbers are `NaN`.
Finally, there’s `NA` which is closely related to `NaN` but is used for missing values. `NA` stands for `Not Available`. There are
several types of `NA`s:
* `NA_integer_`
* `NA_real_`
* `NA_complex_`
* `NA_character_`
but these are in principle only used when you need to program your own functions and need
to explicitly test for the missingness of, say, a character value.
To test whether a value is `NA`, use the `is.na()` function.
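For example:

```
is.na(c(1, NA, 3))
```

```
## [1] FALSE  TRUE FALSE
```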
2\.12 Useful functions to get you started
-----------------------------------------
This section will list several basic R functions that are very useful and should be part of your toolbox.
### 2\.12\.1 Sequences
There are several functions that create sequences, `seq()`, `seq_along()` and `rep()`. `rep()` is easy enough:
```
rep(1, 10)
```
```
## [1] 1 1 1 1 1 1 1 1 1 1
```
This simply repeats `1` 10 times. You can repeat other objects too:
```
rep("HAHA", 10)
```
```
## [1] "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA"
```
To create a sequence, things are not as straightforward. There is `seq()`:
```
seq(1, 10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq(70, 80)
```
```
## [1] 70 71 72 73 74 75 76 77 78 79 80
```
It is also possible to provide a `by` argument:
```
seq(1, 10, by = 2)
```
```
## [1] 1 3 5 7 9
```
`seq_along()` behaves similarly, but returns a sequence of integers from 1 up to the length of the object passed to it. So if you pass `list4`
(which has length 3) to `seq_along()`, it will return a sequence from 1 to 3:
```
seq_along(list4)
```
```
## [1] 1 2 3
```
which is also true for `seq()` actually:
```
seq(list4)
```
```
## [1] 1 2 3
```
but these two functions behave differently for arguments of length equal to 1:
```
seq(10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq_along(10)
```
```
## [1] 1
```
So be quite careful about that. I would advise you not to use `seq()`, but only `seq_along()` and `seq_len()`. `seq_len()`
only takes arguments of length 1:
```
seq_len(10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq_along(10)
```
```
## [1] 1
```
The problem with `seq()` is that it is unpredictable: depending on the length of its input, the output will either be a
sequence from 1 to that number or a sequence along the input.
When programming, it is better to have functions that are stricter and fail when confronted with special cases, instead of silently returning
some result. This is a bit of a recurrent issue with R, and the functions from the `{tidyverse}` mitigate this issue by being
stricter than their base R counterparts. For example, consider the `ifelse()` function from base R:
```
ifelse(3 > 5, 1, "this is false")
```
```
## [1] "this is false"
```
and compare it to `{dplyr}`’s implementation, `if_else()`:
```
if_else(3 > 5, 1, "this is false")
Error: `false` must be type double, not character
Call `rlang::last_error()` to see a backtrace
```
`if_else()` fails because the return value when `FALSE` is not a double (a real number) but a character. This might seem unnecessarily
strict, but at least it is predictable. This makes debugging easier when used inside functions. In Chapter 8 we are going to learn how
to write our own functions, and being strict makes programming easier.
### 2\.12\.2 Basic string manipulation
For now, we have not closely studied `character` objects; we have only learned how to define them. Later, in Chapter 5, we will learn about the
`{stringr}` package, which provides useful functions to work with strings. However, there are several base R functions that are very
useful that you might want to know nonetheless, such as `paste()` and `paste0()`:
```
paste("Hello", "amigo")
```
```
## [1] "Hello amigo"
```
but you can also change the separator if needed:
```
paste("Hello", "amigo", sep = "--")
```
```
## [1] "Hello--amigo"
```
`paste0()` is the same as `paste()` but does not have any `sep` argument:
```
paste0("Hello", "amigo")
```
```
## [1] "Helloamigo"
```
If you provide a vector of characters, you can also use the `collapse` argument,
which places whatever you provide for `collapse` between the
elements of the vector:
```
paste0(c("Joseph", "Mary", "Jesus"), collapse = ", and ")
```
```
## [1] "Joseph, and Mary, and Jesus"
```
To change the case of characters, you can use `toupper()` and `tolower()`:
```
tolower("HAHAHAHAH")
```
```
## [1] "hahahahah"
```
```
toupper("hueuehuehuheuhe")
```
```
## [1] "HUEUEHUEHUHEUHE"
```
Finally, there are the classical mathematical functions that you know and love (one of them is demonstrated after this list):
* `sqrt()`
* `exp()`
* `log()`
* `abs()`
* `sin()`, `cos()`, `tan()`, and others
* `sum()`, `cumsum()`, `prod()`, `cumprod()`
* `max()`, `min()`
and many others…
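As a quick demonstration of one of the less obvious ones:

```
cumsum(c(1, 2, 3, 4))
```

```
## [1]  1  3  6 10
```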
2\.13 Exercises
---------------
### Exercise 1
Try to create the following vector:
\\\[a \= (6,3,8,9\)\\]
and add to it this other vector:
\\\[b \= (9,1,3,5\)\\]
and save the result to a new variable called `result`.
### Exercise 2
Using `a` and `b` from before, try to get their dot product.
Try with `a * b` in the R console. What happened?
Try to find the right function to get the dot product. Don’t hesitate to google the answer!
### Exercise 3
How can you create a matrix of dimension (30,30\) filled with 2’s by only using the function `matrix()`?
### Exercise 4
Save your first name in a variable `a` and your surname in a variable `b`. What does the function:
```
paste(a, b)
```
do? Look at the help for `paste()` with `?paste` or using the *Help* pane in RStudio. What does the
optional argument `sep` do?
### Exercise 5
Define the following variables: `a <- 8`, `b <- 3`, `c <- 19`. What do the following lines check?
What do they return?
```
a > b
a == b
a != b
a < b
(a > b) && (a < c)
(a > b) && (a > c)
(a > b) || (a < b)
```
### Exercise 6
Define the following matrix:
\\\[
\\text{matrix\_a} \= \\left(
\\begin{array}{ccc}
9 \& 4 \& 12 \\\\
5 \& 0 \& 7 \\\\
2 \& 6 \& 8 \\\\
9 \& 2 \& 9
\\end{array} \\right)
\\]
* What does `matrix_a >= 5` do?
* What does `matrix_a[ , 2]` do?
* Can you find which function gives you the transpose of this matrix?
### Exercise 7
Solve the following system of equations using the `solve()` function:
\\\[
\\left(
\\begin{array}{cccc}
9 \& 4 \& 12 \& 2 \\\\
5 \& 0 \& 7 \& 9\\\\
2 \& 6 \& 8 \& 0\\\\
9 \& 2 \& 9 \& 11
\\end{array} \\right) \\times \\left(
\\begin{array}{ccc}
x \\\\
y \\\\
z \\\\
t \\\\
\\end{array}\\right) \=
\\left(
\\begin{array}{ccc}
7\\\\
18\\\\
1\\\\
0
\\end{array}
\\right)
\\]
### Exercise 8
Load the `mtcars` data (`mtcars` is included in R, so you only need to use the `data()` function to
load the data):
```
data(mtcars)
```
If you run `class(mtcars)`, you get “data.frame”. Try now with `typeof(mtcars)`. The answer is now
“list”! This is because the class of an object is an attribute of that object, which can even
be assigned by the user:
```
class(mtcars) <- "don't do this"
class(mtcars)
```
```
## [1] "don't do this"
```
The type of an object is R’s internal type of that object, which cannot be manipulated by the user.
It is always useful to know the type of an object (not just its class). For example, in the particular
case of data frames, because the type of a data frame is a list, you can use all that you learned
about lists to manipulate data frames! Recall that `$` allows you to select an element of a list,
for instance:
```
my_list <- list("one" = 1, "two" = 2, "three" = 3)
my_list$one
```
```
## [1] 1
```
Because data frames are nothing but fancy lists, you can access columns the same way:
```
mtcars$mpg
```
```
## [1] 21.0 21.0 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 17.8 16.4 17.3 15.2 10.4
## [16] 10.4 14.7 32.4 30.4 33.9 21.5 15.5 15.2 13.3 19.2 27.3 26.0 30.4 15.8 19.7
## [31] 15.0 21.4
```
2\.1 The `numeric` class
------------------------
To define single numbers, you can do the following:
```
a <- 3
```
The `class()` function allows you to check the class of an object:
```
class(a)
```
```
## [1] "numeric"
```
Decimals are defined with the character `.`:
```
a <- 3.14
```
R also supports integers. If you find yourself in a situation where you explicitly need an integer
and not a floating point number, you can use the following:
```
a <- as.integer(3)
class(a)
```
```
## [1] "integer"
```
The `as.integer()` function is very useful, because it converts its argument into an integer. There
is a whole family of `as.*()` functions. To convert `a` into a floating point number again:
```
class(as.numeric(a))
```
```
## [1] "numeric"
```
There is also `is.numeric()` which tests whether a number is of the `numeric` class:
```
is.numeric(a)
```
```
## [1] TRUE
```
It is also possible to create an integer using `L`:
```
a <- 5L
class(a)
```
```
## [1] "integer"
```
Another way to convert this integer back to a floating point number is to use `as.double()` instead of
as numeric:
```
class(as.double(a))
```
```
## [1] "numeric"
```
The functions prefixed with `is.*` and `as.*` are quite useful, there is one for any of the supported types in R, such
as `as/is.character()`, `as/is.factor()`, etc…
2\.2 The `character` class
--------------------------
Use `" "` to define characters (called strings in other programming languages):
```
a <- "this is a string"
```
```
class(a)
```
```
## [1] "character"
```
To convert something to a character you can use the `as.character()` function:
```
a <- 4.392
class(a)
```
```
## [1] "numeric"
```
Now let’s convert it:
```
class(as.character(a))
```
```
## [1] "character"
```
It is also possible to convert a character to a numeric:
```
a <- "4.392"
class(a)
```
```
## [1] "character"
```
```
class(as.numeric(a))
```
```
## [1] "numeric"
```
But this only works if it makes sense:
```
a <- "this won't work, chief"
class(a)
```
```
## [1] "character"
```
```
as.numeric(a)
```
```
## Warning: NAs introduced by coercion
```
```
## [1] NA
```
A very nice package to work with characters is `{stringr}`, which is also part of the `{tidyverse}`.
2\.3 The `factor` class
-----------------------
Factors look like characters, but are very different. They are the representation of categorical
variables. A `{tidyverse}` package to work with factors is `{forcats}`. You would rarely use
factor variables outside of datasets, so for now, it is enough to know that this class exists.
We are going to learn more about factor variables in Chapter 4, by using the `{forcats}` package.
2\.4 The `Date` class
---------------------
Dates also look like characters, but are very different too:
```
as.Date("2019/03/19")
```
```
## [1] "2019-03-19"
```
```
class(as.Date("2019/03/19"))
```
```
## [1] "Date"
```
Manipulating dates and time can be tricky, but thankfully there’s a `{tidyverse}` package for that,
called `{lubridate}`. We are going to go over this package in Chapter 4\.
2\.5 The `logical` class
------------------------
This is the class of predicates, expressions that evaluate to *true* or *false*. For example, if you type:
```
4 > 3
```
```
## [1] TRUE
```
R returns `TRUE`, which is an object of class `logical`:
```
k <- 4 > 3
class(k)
```
```
## [1] "logical"
```
In other programming languages, `logical`s are often called `bool`s. A `logical` variable can only have
two values, either `TRUE` or `FALSE`. You can test the truthiness of a variable with `isTRUE()`:
```
k <- 4 > 3
isTRUE(k)
```
```
## [1] TRUE
```
How can you test if a variable is false? There is not a `isFALSE()` function (at least not without having
to load a package containing this function), but there is way to do it:
```
k <- 4 > 3
!isTRUE(k)
```
```
## [1] FALSE
```
The `!` operator indicates negation, so the above expression could be translated as *is k not TRUE?*.
There are other operators for boolean algebra, namely `&, &&, |, ||`. `&` means *and* and `|` stands for *or*.
You might be wondering what the difference between `&` and `&&` is? Or between `|` and `||`? `&` and
`|` work on vectors, doing pairwise comparisons:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
one & two
```
```
## [1] FALSE FALSE TRUE FALSE
```
Compare this to the `&&` operator:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
one && two
```
```
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
```
```
## [1] FALSE
```
The `&&` and `||` operators only compare the first element of the vectors and stop as soon as a the return
value can be safely determined. This is called short\-circuiting. Consider the following:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
three <- c(TRUE, TRUE, FALSE, FALSE)
one && two && three
```
```
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
## Warning in one && two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
```
```
## [1] FALSE
```
```
one || two || three
```
```
## Warning in one || two: 'length(x) = 4 > 1' in coercion to 'logical(1)'
```
```
## [1] TRUE
```
The `||` operator stops as soon it evaluates to `TRUE` whereas the `&&` stops as soon as it evaluates to `FALSE`.
Personally, I rarely use `||` or `&&` because I get confused. I find using `|` or `&` in combination with the
`all()` or `any()` functions much more useful:
```
one <- c(TRUE, FALSE, TRUE, FALSE)
two <- c(FALSE, TRUE, TRUE, TRUE)
any(one & two)
```
```
## [1] TRUE
```
```
all(one & two)
```
```
## [1] FALSE
```
`any()` checks whether any of the vector’s elements are `TRUE` and `all()` checks if all elements of the vector are
`TRUE`.
As a final note, you should know that is possible to use `T` for `TRUE` and `F` for `FALSE` but I
would advise against doing this, because it is not very explicit.
2\.6 Vectors and matrices
-------------------------
You can create a vector in different ways. But first of all, it is important to understand that a
vector in most programming languages is nothing more than a list of things. These things can be
numbers (either integers or floats), strings, or even other vectors. A vector in R can only contain elements of one
single type. This is not the case for a list, which is much more flexible. We will talk about lists shortly, but
let’s first focus on vectors and matrices.
### 2\.6\.1 The `c()` function
A very important function that allows you to build a vector is `c()`:
```
a <- c(1,2,3,4,5)
```
This creates a vector with elements 1, 2, 3, 4, 5\. If you check its class:
```
class(a)
```
```
## [1] "numeric"
```
This can be confusing: you where probably expecting a to be of class *vector* or
something similar. This is not the case if you use `c()` to create the vector, because `c()`
doesn’t build a vector in the mathematical sense, but a so\-called atomic vector.
Checking its dimension:
```
dim(a)
```
```
## NULL
```
returns `NULL` because an atomic vector doesn’t have a dimension.
If you want to create a true vector, you need to use `cbind()` or `rbind()`.
But before continuing, be aware that atomic vectors can only contain elements of the same type:
```
c(1, 2, "3")
```
```
## [1] "1" "2" "3"
```
because “3” is a character, all the other values get implicitly converted to characters. You have
to be very careful about this, and if you use atomic vectors in your programming, you have to make
absolutely sure that no characters or logicals or whatever else are going to convert your atomic
vector to something you were not expecting.
### 2\.6\.2 `cbind()` and `rbind()`
You can create a *true* vector with `cbind()`:
```
a <- cbind(1, 2, 3, 4, 5)
```
Check its class now:
```
class(a)
```
```
## [1] "matrix" "array"
```
This is exactly what we expected. Let’s check its dimension:
```
dim(a)
```
```
## [1] 1 5
```
This returns the dimension of `a` using the LICO notation (number of LInes first, the number of COlumns).
It is also possible to bind vectors together to create a matrix.
```
b <- cbind(6,7,8,9,10)
```
Now let’s put vector `a` and `b` into a matrix called `matrix_c` using `rbind()`.
`rbind()` functions the same way as `cbind()` but glues the vectors together by rows and not by columns.
```
matrix_c <- rbind(a,b)
print(matrix_c)
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 2 3 4 5
## [2,] 6 7 8 9 10
```
### 2\.6\.3 The `matrix` class
R also has support for matrices. For example, you can create a matrix of dimension (5,5\) filled
with 0’s with the `matrix()` function:
```
matrix_a <- matrix(0, nrow = 5, ncol = 5)
```
If you want to create the following matrix:
\\\[
B \= \\left(
\\begin{array}{ccc}
2 \& 4 \& 3 \\\\
1 \& 5 \& 7
\\end{array} \\right)
\\]
you would do it like this:
```
B <- matrix(c(2, 4, 3, 1, 5, 7), nrow = 2, byrow = TRUE)
```
The option `byrow = TRUE` means that the rows of the matrix will be filled first.
You can access individual elements of `matrix_a` like so:
```
matrix_a[2, 3]
```
```
## [1] 0
```
and R returns its value, 0\. We can assign a new value to this element if we want. Try:
```
matrix_a[2, 3] <- 7
```
and now take a look at `matrix_a` again.
```
print(matrix_a)
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 0 0 0 0
## [2,] 0 0 7 0 0
## [3,] 0 0 0 0 0
## [4,] 0 0 0 0 0
## [5,] 0 0 0 0 0
```
Recall our vector `b`:
```
b <- cbind(6,7,8,9,10)
```
To access its third element, you can simply write:
```
b[3]
```
```
## [1] 8
```
I have heard many people praising R for being a matrix based language. Matrices are indeed useful,
and statisticians are used to working with them. However, I very rarely use matrices in my
day to day work, and prefer an approach based on data frames (which will be discussed below). This
is because working with data frames makes it easier to use R’s advanced functional programming
language capabilities, and this is where R really shines in my opinion. Working with matrices
almost automatically implies using loops and all the iterative programming techniques, *à la Fortran*,
which I personally believe are ill\-suited for interactive statistical programming (as discussed in
the introduction).
### 2\.6\.1 The `c()` function
A very important function that allows you to build a vector is `c()`:
```
a <- c(1,2,3,4,5)
```
This creates a vector with elements 1, 2, 3, 4, 5\. If you check its class:
```
class(a)
```
```
## [1] "numeric"
```
This can be confusing: you where probably expecting a to be of class *vector* or
something similar. This is not the case if you use `c()` to create the vector, because `c()`
doesn’t build a vector in the mathematical sense, but a so\-called atomic vector.
Checking its dimension:
```
dim(a)
```
```
## NULL
```
returns `NULL` because an atomic vector doesn’t have a dimension.
If you want to create a true vector, you need to use `cbind()` or `rbind()`.
But before continuing, be aware that atomic vectors can only contain elements of the same type:
```
c(1, 2, "3")
```
```
## [1] "1" "2" "3"
```
because “3” is a character, all the other values get implicitly converted to characters. You have
to be very careful about this, and if you use atomic vectors in your programming, you have to make
absolutely sure that no characters or logicals or whatever else are going to convert your atomic
vector to something you were not expecting.
### 2\.6\.2 `cbind()` and `rbind()`
You can create a *true* vector with `cbind()`:
```
a <- cbind(1, 2, 3, 4, 5)
```
Check its class now:
```
class(a)
```
```
## [1] "matrix" "array"
```
This is exactly what we expected. Let’s check its dimension:
```
dim(a)
```
```
## [1] 1 5
```
This returns the dimension of `a` using the LICO notation (number of LInes first, the number of COlumns).
It is also possible to bind vectors together to create a matrix.
```
b <- cbind(6,7,8,9,10)
```
Now let’s put vector `a` and `b` into a matrix called `matrix_c` using `rbind()`.
`rbind()` functions the same way as `cbind()` but glues the vectors together by rows and not by columns.
```
matrix_c <- rbind(a,b)
print(matrix_c)
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 2 3 4 5
## [2,] 6 7 8 9 10
```
### 2\.6\.3 The `matrix` class
R also has support for matrices. For example, you can create a matrix of dimension (5,5\) filled
with 0’s with the `matrix()` function:
```
matrix_a <- matrix(0, nrow = 5, ncol = 5)
```
If you want to create the following matrix:
\\\[
B \= \\left(
\\begin{array}{ccc}
2 \& 4 \& 3 \\\\
1 \& 5 \& 7
\\end{array} \\right)
\\]
you would do it like this:
```
B <- matrix(c(2, 4, 3, 1, 5, 7), nrow = 2, byrow = TRUE)
```
The option `byrow = TRUE` means that the rows of the matrix will be filled first.
You can access individual elements of `matrix_a` like so:
```
matrix_a[2, 3]
```
```
## [1] 0
```
and R returns its value, 0\. We can assign a new value to this element if we want. Try:
```
matrix_a[2, 3] <- 7
```
and now take a look at `matrix_a` again.
```
print(matrix_a)
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 0 0 0 0
## [2,] 0 0 7 0 0
## [3,] 0 0 0 0 0
## [4,] 0 0 0 0 0
## [5,] 0 0 0 0 0
```
Recall our vector `b`:
```
b <- cbind(6,7,8,9,10)
```
To access its third element, you can simply write:
```
b[3]
```
```
## [1] 8
```
I have heard many people praising R for being a matrix based language. Matrices are indeed useful,
and statisticians are used to working with them. However, I very rarely use matrices in my
day to day work, and prefer an approach based on data frames (which will be discussed below). This
is because working with data frames makes it easier to use R’s advanced functional programming
language capabilities, and this is where R really shines in my opinion. Working with matrices
almost automatically implies using loops and all the iterative programming techniques, *à la Fortran*,
which I personally believe are ill\-suited for interactive statistical programming (as discussed in
the introduction).
2\.7 The `list` class
---------------------
The `list` class is a very flexible class, and thus, very useful. You can put anything inside a list,
such as numbers:
```
list1 <- list(3, 2)
```
or other lists constructed with `c()`:
```
list2 <- list(c(1, 2), c(3, 4))
```
you can also put objects of different classes in the same list:
```
list3 <- list(3, c(1, 2), "lists are amazing!")
```
and of course create list of lists:
```
my_lists <- list(list1, list2, list3)
```
To check the contents of a list, you can use the structure function `str()`:
```
str(my_lists)
```
```
## List of 3
## $ :List of 2
## ..$ : num 3
## ..$ : num 2
## $ :List of 2
## ..$ : num [1:2] 1 2
## ..$ : num [1:2] 3 4
## $ :List of 3
## ..$ : num 3
## ..$ : num [1:2] 1 2
## ..$ : chr "lists are amazing!"
```
or you can use RStudio’s *Environment* pane:
You can also create named lists:
```
list4 <- list("name_1" = 2, "name_2" = 8, "name_3" = "this is a named list")
```
and you can access the elements in two ways:
```
list4[[1]]
```
```
## [1] 2
```
or, for named lists:
```
list4$name_3
```
```
## [1] "this is a named list"
```
Take note of the `$` operator, because it is going to be quite useful for `data.frame`s as well,
which we are going to get to know in the next section.
Lists are used extensively because they are so flexible. You can build lists of datasets and apply
functions to all the datasets at once, build lists of models, lists of plots, etc… In the later
chapters we are going to learn all about them. Lists are central objects in a functional programming
workflow for interactive statistical analysis.
2\.8 The `data.frame` and `tibble` classes
------------------------------------------
In the next chapter we are going to learn how to import datasets into R. Once you import data, the
resulting object is either a `data.frame` or a `tibble` depending on which package you used to
import the data. `tibble`s extend `data.frame`s so if you know about `data.frame` objects already,
working with `tibble`s will be very easy. `tibble`s have a better `print()` method, and some other
niceties.
However, I want to stress that these objects are central to R and are thus very important; they are
actually special cases of lists, discussed above. There are different ways to print a `data.frame` or
a `tibble` if you wish to inspect it. You can use `View(my_data)` to show the `my_data` `data.frame`
in the *View* pane of RStudio:
You can also use the `str()` function:
```
str(my_data)
```
And if you need to access an individual column, you can use the `$` sign, same as for a list:
```
my_data$col1
```
2\.9 Formulas
-------------
We will learn more about formulas later, but because it is an important object, it is useful if you
already know about them early on. A formula is defined in the following way:
```
my_formula <- ~x
class(my_formula)
```
```
## [1] "formula"
```
Formula objects are defined using the `~` symbol. Formulas are useful to define statistical models,
for example for a linear regression:
```
lm(y ~ x)
```
or also to define anonymous functions, but more on this later.
2\.10 Models
------------
A statistical model is an object like any other in R:
Here, I have already a model that I ran on some test data:
```
class(my_model)
```
```
## [1] "lm"
```
`my_model` is an object of class `lm`, for *linear model*. You can apply different functions to a model object:
```
summary(my_model)
```
```
##
## Call:
## lm(formula = mpg ~ hp, data = mtcars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.7121 -2.1122 -0.8854 1.5819 8.2360
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 30.09886 1.63392 18.421 < 2e-16 ***
## hp -0.06823 0.01012 -6.742 1.79e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.863 on 30 degrees of freedom
## Multiple R-squared: 0.6024, Adjusted R-squared: 0.5892
## F-statistic: 45.46 on 1 and 30 DF, p-value: 1.788e-07
```
This class will be explored in later chapters.
2\.11 NULL, NA and NaN
----------------------
The `NULL`, `NA` and `NaN` classes are pretty special. `NULL` is returned when the result of function is undetermined.
For example, consider `list4`:
```
list4
```
```
## $name_1
## [1] 2
##
## $name_2
## [1] 8
##
## $name_3
## [1] "this is a named list"
```
if you try to access an element that does not exist, such as `d`, you will get `NULL` back:
```
list4$d
```
```
## NULL
```
`NaN` means “Not a Number” and is returned when a function return something that is not a number:
```
sqrt(-1)
```
```
## Warning in sqrt(-1): NaNs produced
```
```
## [1] NaN
```
or:
```
0/0
```
```
## [1] NaN
```
Basically, numbers that cannot be represented as floating point numbers are `NaN`.
Finally, there’s `NA` which is closely related to `NaN` but is used for missing values. `NA` stands for `Not Available`. There are
several types of `NA`s:
* `NA_integer_`
* `NA_real_`
* `NA_complex_`
* `NA_character_`
but these are in principle only used when you need to program your own functions and need
to explicitly test for the missingness of, say, a character value.
To test whether a value is `NA`, use the `is.na()` function.
2\.12 Useful functions to get you started
-----------------------------------------
This section will list several basic R functions that are very useful and should be part of your toolbox.
### 2\.12\.1 Sequences
There are several functions that create sequences, `seq()`, `seq_along()` and `rep()`. `rep()` is easy enough:
```
rep(1, 10)
```
```
## [1] 1 1 1 1 1 1 1 1 1 1
```
This simply repeats `1` 10 times. You can repeat other objects too:
```
rep("HAHA", 10)
```
```
## [1] "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA"
```
To create a sequence, things are not as straightforward. There is `seq()`:
```
seq(1, 10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq(70, 80)
```
```
## [1] 70 71 72 73 74 75 76 77 78 79 80
```
It is also possible to provide a `by` argument:
```
seq(1, 10, by = 2)
```
```
## [1] 1 3 5 7 9
```
`seq_along()` behaves similarly, but returns the length of the object passed to it. So if you pass `list4` to
`seq_along()`, it will return a sequence from 1 to 3:
```
seq_along(list4)
```
```
## [1] 1 2 3
```
which is also true for `seq()` actually:
```
seq(list4)
```
```
## [1] 1 2 3
```
but these two functions behave differently for arguments of length equal to 1:
```
seq(10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq_along(10)
```
```
## [1] 1
```
So be quite careful about that. I would advise you do not use `seq()`, but only `seq_along()` and `seq_len()`. `seq_len()`
only takes arguments of length 1:
```
seq_len(10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq_along(10)
```
```
## [1] 1
```
The problem with `seq()` is that it is unpredictable; depending on its input, the output will either be an integer or a sequence.
When programming, it is better to have function that are stricter and fail when confronted to special cases, instead of returning
some result. This is a bit of a recurrent issue with R, and the functions from the `{tidyverse}` mitigate this issue by being
stricter than their base R counterparts. For example, consider the `ifelse()` function from base R:
```
ifelse(3 > 5, 1, "this is false")
```
```
## [1] "this is false"
```
and compare it to `{dplyr}`’s implementation, `if_else()`:
```
if_else(3 > 5, 1, "this is false")
Error: `false` must be type double, not character
Call `rlang::last_error()` to see a backtrace
```
`if_else()` fails because the return value when `FALSE` is not a double (a real number) but a character. This might seem unnecessarily
strict, but at least it is predictable. This makes debugging easier when used inside functions. In Chapter 8 we are going to learn how
to write our own functions, and being strict makes programming easier.
### 2\.12\.2 Basic string manipulation
For now, we have not closely studied `character` objects, we only learned how to define them. Later, in Chapter 5 we will learn about the
`{stringr}` package which provides useful function to work with strings. However, there are several base R functions that are very
useful that you might want to know nonetheless, such as `paste()` and `paste0()`:
```
paste("Hello", "amigo")
```
```
## [1] "Hello amigo"
```
but you can also change the separator if needed:
```
paste("Hello", "amigo", sep = "--")
```
```
## [1] "Hello--amigo"
```
`paste0()` is the same as `paste()` but does not have any `sep` argument:
```
paste0("Hello", "amigo")
```
```
## [1] "Helloamigo"
```
If you provide a vector of characters, you can also use the `collapse` argument,
which places whatever you provide for `collapse` between the
characters of the vector:
```
paste0(c("Joseph", "Mary", "Jesus"), collapse = ", and ")
```
```
## [1] "Joseph, and Mary, and Jesus"
```
To change the case of characters, you can use `toupper()` and `tolower()`:
```
tolower("HAHAHAHAH")
```
```
## [1] "hahahahah"
```
```
toupper("hueuehuehuheuhe")
```
```
## [1] "HUEUEHUEHUHEUHE"
```
Finally, there are the classical mathematical functions that you know and love:
* `sqrt()`
* `exp()`
* `log()`
* `abs()`
* `sin()`, `cos()`, `tan()`, and others
* `sum()`, `cumsum()`, `prod()`, `cumprod()`
* `max()`, `min()`
and many others…
### 2\.12\.1 Sequences
There are several functions that create sequences, `seq()`, `seq_along()` and `rep()`. `rep()` is easy enough:
```
rep(1, 10)
```
```
## [1] 1 1 1 1 1 1 1 1 1 1
```
This simply repeats `1` 10 times. You can repeat other objects too:
```
rep("HAHA", 10)
```
```
## [1] "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA" "HAHA"
```
To create a sequence, things are not as straightforward. There is `seq()`:
```
seq(1, 10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq(70, 80)
```
```
## [1] 70 71 72 73 74 75 76 77 78 79 80
```
It is also possible to provide a `by` argument:
```
seq(1, 10, by = 2)
```
```
## [1] 1 3 5 7 9
```
`seq_along()` behaves similarly, but returns the length of the object passed to it. So if you pass `list4` to
`seq_along()`, it will return a sequence from 1 to 3:
```
seq_along(list4)
```
```
## [1] 1 2 3
```
which is also true for `seq()` actually:
```
seq(list4)
```
```
## [1] 1 2 3
```
but these two functions behave differently for arguments of length equal to 1:
```
seq(10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq_along(10)
```
```
## [1] 1
```
So be quite careful about that. I would advise you do not use `seq()`, but only `seq_along()` and `seq_len()`. `seq_len()`
only takes arguments of length 1:
```
seq_len(10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
seq_along(10)
```
```
## [1] 1
```
The problem with `seq()` is that it is unpredictable; depending on its input, the output will either be an integer or a sequence.
When programming, it is better to have function that are stricter and fail when confronted to special cases, instead of returning
some result. This is a bit of a recurrent issue with R, and the functions from the `{tidyverse}` mitigate this issue by being
stricter than their base R counterparts. For example, consider the `ifelse()` function from base R:
```
ifelse(3 > 5, 1, "this is false")
```
```
## [1] "this is false"
```
and compare it to `{dplyr}`’s implementation, `if_else()`:
```
if_else(3 > 5, 1, "this is false")
Error: `false` must be type double, not character
Call `rlang::last_error()` to see a backtrace
```
`if_else()` fails because the return value when `FALSE` is not a double (a real number) but a character. This might seem unnecessarily
strict, but at least it is predictable. This makes debugging easier when used inside functions. In Chapter 8 we are going to learn how
to write our own functions, and being strict makes programming easier.
### 2\.12\.2 Basic string manipulation
For now, we have not closely studied `character` objects, we only learned how to define them. Later, in Chapter 5 we will learn about the
`{stringr}` package which provides useful function to work with strings. However, there are several base R functions that are very
useful that you might want to know nonetheless, such as `paste()` and `paste0()`:
```
paste("Hello", "amigo")
```
```
## [1] "Hello amigo"
```
but you can also change the separator if needed:
```
paste("Hello", "amigo", sep = "--")
```
```
## [1] "Hello--amigo"
```
`paste0()` is the same as `paste()` but does not have any `sep` argument:
```
paste0("Hello", "amigo")
```
```
## [1] "Helloamigo"
```
If you provide a vector of characters, you can also use the `collapse` argument,
which places whatever you provide for `collapse` between the
characters of the vector:
```
paste0(c("Joseph", "Mary", "Jesus"), collapse = ", and ")
```
```
## [1] "Joseph, and Mary, and Jesus"
```
To change the case of characters, you can use `toupper()` and `tolower()`:
```
tolower("HAHAHAHAH")
```
```
## [1] "hahahahah"
```
```
toupper("hueuehuehuheuhe")
```
```
## [1] "HUEUEHUEHUHEUHE"
```
Finally, there are the classical mathematical functions that you know and love:
* `sqrt()`
* `exp()`
* `log()`
* `abs()`
* `sin()`, `cos()`, `tan()`, and others
* `sum()`, `cumsum()`, `prod()`, `cumprod()`
* `max()`, `min()`
and many others…
2\.13 Exercises
---------------
### Exercise 1
Try to create the following vector:
\\\[a \= (6,3,8,9\)\\]
and add it this other vector:
\\\[b \= (9,1,3,5\)\\]
and save the result to a new variable called `result`.
### Exercise 2
Using `a` and `b` from before, try to get their dot product.
Try with `a * b` in the R console. What happened?
Try to find the right function to get the dot product. Don’t hesitate to google the answer!
### Exercise 3
How can you create a matrix of dimension (30,30\) filled with 2’s by only using the function `matrix()`?
### Exercise 4
Save your first name in a variable `a` and your surname in a variable `b`. What does the function:
```
paste(a, b)
```
do? Look at the help for `paste()` with `?paste` or using the *Help* pane in RStudio. What does the
optional argument `sep` do?
### Exercise 5
Define the following variables: `a <- 8`, `b <- 3`, `c <- 19`. What do the following lines check?
What do they return?
```
a > b
a == b
a != b
a < b
(a > b) && (a < c)
(a > b) && (a > c)
(a > b) || (a < b)
```
### Exercise 6
Define the following matrix:
\\\[
\\text{matrix\_a} \= \\left(
\\begin{array}{ccc}
9 \& 4 \& 12 \\\\
5 \& 0 \& 7 \\\\
2 \& 6 \& 8 \\\\
9 \& 2 \& 9
\\end{array} \\right)
\\]
* What does `matrix_a >= 5` do?
* What does `matrix_a[ , 2]` do?
* Can you find which function gives you the transpose of this matrix?
### Exercise 7
Solve the following system of equations using the `solve()` function:
\\\[
\\left(
\\begin{array}{cccc}
9 \& 4 \& 12 \& 2 \\\\
5 \& 0 \& 7 \& 9\\\\
2 \& 6 \& 8 \& 0\\\\
9 \& 2 \& 9 \& 11
\\end{array} \\right) \\times \\left(
\\begin{array}{ccc}
x \\\\
y \\\\
z \\\\
t \\\\
\\end{array}\\right) \=
\\left(
\\begin{array}{ccc}
7\\\\
18\\\\
1\\\\
0
\\end{array}
\\right)
\\]
### Exercise 8
Load the `mtcars` data (`mtcars` is include in R, so you only need to use the `data()` function to
load the data):
```
data(mtcars)
```
If you run `class(mtcars)`, you get “data.frame”. Now try `typeof(mtcars)`: the answer is
“list”! This is because the class of an object is an attribute of that object, which can even
be assigned by the user:
```
class(mtcars) <- "don't do this"
class(mtcars)
```
```
## [1] "don't do this"
```
The type of an object is R’s internal type of that object, which cannot be manipulated by the user.
It is always useful to know the type of an object (not just its class). For example, in the particular
case of data frames, because the type of a data frame is a list, you can use all that you learned
about lists to manipulate data frames! Recall that `$` allows you to select an element of a list,
for instance:
```
my_list <- list("one" = 1, "two" = 2, "three" = 3)
my_list$one
```
```
## [1] 1
```
Because data frames are nothing but fancy lists, this is why you can access columns the same way:
```
mtcars$mpg
```
```
## [1] 21.0 21.0 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 17.8 16.4 17.3 15.2 10.4
## [16] 10.4 14.7 32.4 30.4 33.9 21.5 15.5 15.2 13.3 19.2 27.3 26.0 30.4 15.8 19.7
## [31] 15.0 21.4
```
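And since `[[` also extracts elements of a list, it works on data frames too; a minimal sketch (not from the original text):
```
identical(mtcars$mpg, mtcars[["mpg"]])
```
```
## [1] TRUE
```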
| R Programming |
b-rodrigues.github.io | https://b-rodrigues.github.io/modern_R/reading-and-writing-data.html |
Chapter 3 Reading and writing data
==================================
In this chapter, we are going to import example datasets that are available in R, `mtcars` and
`iris`. I have converted these datasets into several formats. Download those datasets
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets) if you want to follow the
examples below. R can import some formats without the need for external packages, such as the `.csv`
format. However, for other formats, you will need to use different packages. Because there are a
lot of different formats available, I suggest you use the `{rio}` package.
`{rio}` is a wrapper around different packages that import/export data in different formats.
This package is nice because you don’t need to remember which package to use to import, say,
STATA datasets, and which one for SAS datasets, and so on. Read `{rio}`’s
[vignette](https://cran.r-project.org/web/packages/rio/vignettes/rio.html) for more details. Below
I show some of `{rio}`’s functions presented in the vignette. It is also possible to import data from
other, less “traditional” sources, such as your clipboard. Also note that it is possible to import
more than one dataset at once. There are two ways of doing that: either import all the
datasets and bind their rows together, adding a new variable with the name of the data, or import
all the datasets into a list, where each element of that list is a data frame. We are going to
explore this second option later.
3\.1 The swiss army knife of data import and export: `{rio}`
------------------------------------------------------------
To import data with `{rio}`, `import()` is all you need:
```
library(rio)
mtcars <- import("datasets/mtcars.csv")
```
```
head(mtcars)
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## 2 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## 3 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## 4 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## 5 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
## 6 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
`import()` needs the path to the data, and you can specify additional options if needed. On a
Windows computer, you have to pay attention to the path; you cannot simply copy and paste it, because
paths in Windows use the `\` symbol whereas R uses `/` (just like on Linux or macOS).
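If you do copy a Windows path, you can make it valid either by flipping the backslashes to forward slashes or by doubling them (an illustrative sketch; the path itself is hypothetical):
```
# both lines refer to the same (hypothetical) file
mtcars <- import("C:/Users/Bruno/datasets/mtcars.csv")
mtcars <- import("C:\\Users\\Bruno\\datasets\\mtcars.csv")
```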
Importing a STATA or a SAS file is done just the same:
```
mtcars_stata <- import("datasets/mtcars.dta")
head(mtcars_stata)
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## 2 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## 3 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## 4 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## 5 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
## 6 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
```
mtcars_sas <- import("datasets/mtcars.sas7bdat")
head(mtcars_sas)
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## 2 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## 3 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## 4 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## 5 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
## 6 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
It is also possible to import Excel files where each sheet is a single table, but you will need
`import_list()` for that. The file `multi.xlsx` has two sheets, each with a table in it:
```
multi <- import_list("datasets/multi.xlsx")
str(multi)
```
```
## List of 2
## $ mtcars:'data.frame': 32 obs. of 11 variables:
## ..$ mpg : num [1:32] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
## ..$ cyl : num [1:32] 6 6 4 6 8 6 8 4 4 6 ...
## ..$ disp: num [1:32] 160 160 108 258 360 ...
## ..$ hp : num [1:32] 110 110 93 110 175 105 245 62 95 123 ...
## ..$ drat: num [1:32] 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...
## ..$ wt : num [1:32] 2.62 2.88 2.32 3.21 3.44 ...
## ..$ qsec: num [1:32] 16.5 17 18.6 19.4 17 ...
## ..$ vs : num [1:32] 0 0 1 1 0 1 0 1 1 1 ...
## ..$ am : num [1:32] 1 1 1 0 0 0 0 0 0 0 ...
## ..$ gear: num [1:32] 4 4 4 3 3 3 3 4 4 4 ...
## ..$ carb: num [1:32] 4 4 1 1 2 1 4 2 2 4 ...
## $ iris :'data.frame': 150 obs. of 5 variables:
## ..$ Sepal.Length: num [1:150] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## ..$ Sepal.Width : num [1:150] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## ..$ Petal.Length: num [1:150] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## ..$ Petal.Width : num [1:150] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## ..$ Species : chr [1:150] "setosa" "setosa" "setosa" "setosa" ...
```
As you can see `multi` is a list of datasets. Told you lists were very flexible! It is also possible
to import all the datasets in a single directory at once. For this, you first need a vector of paths:
```
paths <- Sys.glob("datasets/unemployment/*.csv")
```
`Sys.glob()` allows you to find files using wildcard (“glob”) patterns. “datasets/unemployment/\*.csv”
matches all the `.csv` files inside the “datasets/unemployment/” folder.
```
all_data <- import_list(paths)
str(all_data)
```
```
## List of 4
## $ unemp_2013:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 223407 17802 1703 844 1431 4094 2146 971 1218 3002 ...
## ..$ of which: Wage-earners : int [1:118] 203535 15993 1535 750 1315 3800 1874 858 1029 2664 ...
## ..$ of which: Non-wage-earners: int [1:118] 19872 1809 168 94 116 294 272 113 189 338 ...
## ..$ Unemployed : int [1:118] 19287 1071 114 25 74 261 98 45 66 207 ...
## ..$ Active population : int [1:118] 242694 18873 1817 869 1505 4355 2244 1016 1284 3209 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.95 5.67 6.27 2.88 4.92 ...
## ..$ Year : int [1:118] 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 228423 18166 1767 845 1505 4129 2172 1007 1268 3124 ...
## ..$ of which: Wage-earners : int [1:118] 208238 16366 1606 757 1390 3840 1897 887 1082 2782 ...
## ..$ of which: Non-wage-earners: int [1:118] 20185 1800 161 88 115 289 275 120 186 342 ...
## ..$ Unemployed : int [1:118] 19362 1066 122 19 66 287 91 38 61 202 ...
## ..$ Active population : int [1:118] 247785 19232 1889 864 1571 4416 2263 1045 1329 3326 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.81 5.54 6.46 2.2 4.2 ...
## ..$ Year : int [1:118] 2014 2014 2014 2014 2014 2014 2014 2014 2014 2014 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 233130 18310 1780 870 1470 4130 2170 1050 1300 3140 ...
## ..$ of which: Wage-earners : int [1:118] 212530 16430 1620 780 1350 3820 1910 920 1100 2770 ...
## ..$ of which: Non-wage-earners: int [1:118] 20600 1880 160 90 120 310 260 130 200 370 ...
## ..$ Unemployed : int [1:118] 18806 988 106 29 73 260 80 41 72 169 ...
## ..$ Active population : int [1:118] 251936 19298 1886 899 1543 4390 2250 1091 1372 3309 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.46 5.12 5.62 3.23 4.73 ...
## ..$ Year : int [1:118] 2015 2015 2015 2015 2015 2015 2015 2015 2015 2015 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 236100 18380 1790 870 1470 4160 2160 1030 1330 3150 ...
## ..$ of which: Wage-earners : int [1:118] 215430 16500 1640 780 1350 3840 1900 900 1130 2780 ...
## ..$ of which: Non-wage-earners: int [1:118] 20670 1880 150 90 120 320 260 130 200 370 ...
## ..$ Unemployed : int [1:118] 18185 975 91 27 66 246 76 35 70 206 ...
## ..$ Active population : int [1:118] 254285 19355 1881 897 1536 4406 2236 1065 1400 3356 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.15 5.04 4.84 3.01 4.3 ...
## ..$ Year : int [1:118] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
In a subsequent chapter, we will learn how to actually use these lists of datasets.
If you know that each dataset in each file has the same columns, you can also import them directly
into a single dataset by binding each dataset together using `rbind = TRUE`:
```
bind_data <- import_list(paths, rbind = TRUE)
str(bind_data)
```
```
## 'data.frame': 472 obs. of 9 variables:
## $ Commune : chr "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## $ Total employed population : int 223407 17802 1703 844 1431 4094 2146 971 1218 3002 ...
## $ of which: Wage-earners : int 203535 15993 1535 750 1315 3800 1874 858 1029 2664 ...
## $ of which: Non-wage-earners: int 19872 1809 168 94 116 294 272 113 189 338 ...
## $ Unemployed : int 19287 1071 114 25 74 261 98 45 66 207 ...
## $ Active population : int 242694 18873 1817 869 1505 4355 2244 1016 1284 3209 ...
## $ Unemployment rate (in %) : num 7.95 5.67 6.27 2.88 4.92 ...
## $ Year : int 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
## $ _file : chr "datasets/unemployment/unemp_2013.csv" "datasets/unemployment/unemp_2013.csv" "datasets/unemployment/unemp_2013.csv" "datasets/unemployment/unemp_2013.csv" ...
## - attr(*, ".internal.selfref")=<externalptr>
```
This also adds a further column called `_file` indicating the name of the file that contained the
original data.
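This column makes it easy to keep track of each row’s source; a minimal sketch (assuming the paths shown above):
```
unique(basename(bind_data$`_file`))
```
```
## [1] "unemp_2013.csv" "unemp_2014.csv" "unemp_2015.csv" "unemp_2016.csv"
```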
If something goes wrong, you might need to take a look at the underlying function `{rio}` is
actually using to import the file. Let’s look at the following example:
```
testdata <- import("datasets/problems/mtcars.csv")
head(testdata)
```
```
## mpg&cyl&disp&hp&drat&wt&qsec&vs&am&gear&carb
## 1 21&6&160&110&3.9&2.62&16.46&0&1&4&4
## 2 21&6&160&110&3.9&2.875&17.02&0&1&4&4
## 3 22.8&4&108&93&3.85&2.32&18.61&1&1&4&1
## 4 21.4&6&258&110&3.08&3.215&19.44&1&0&3&1
## 5 18.7&8&360&175&3.15&3.44&17.02&0&0&3&2
## 6 18.1&6&225&105&2.76&3.46&20.22&1&0&3&1
```
As you can see, the import didn’t go well! This is because the separator is `&` for
some reason. Because we are trying to read a `.csv` file, `rio::import()` is using
`data.table::fread()` under the hood (you can read this in `import()`’s help). If you then read
`data.table::fread()`’s help, you see that the `fread()` function has an optional `sep =` argument
that you can use to specify the separator. You can use this argument in `import()` too, and it will
be passed down to `data.table::fread()`:
```
testdata <- import("datasets/problems/mtcars.csv", sep = "&")
head(testdata)
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21 6 160 110 3.9 2.62 16.46 0 1 4 4
## 2 21 6 160 110 3.9 2.875 17.02 0 1 4 4
## 3 22.8 4 108 93 3.85 2.32 18.61 1 1 4 1
## 4 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## 5 18.7 8 360 175 3.15 3.44 17.02 0 0 3 2
## 6 18.1 6 225 105 2.76 3.46 20.22 1 0 3 1
```
`export()` allows you to write data to disk, by simply providing the path and name of the file you
wish to save.
```
export(testdata, "path/where/to/save/testdata.csv")
```
If you end the name with `.csv`, the file is exported to the csv format; if instead you write `.dta`,
the data will be exported to the STATA format, and so on.
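For instance, exporting the same object to several formats only requires changing the extension (an illustrative sketch; the file names are arbitrary):
```
export(testdata, "testdata.csv") # csv format
export(testdata, "testdata.dta") # STATA format
export(testdata, "testdata.rds") # R's own serialized format
```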
If you wish to export to Excel, this is possible, but it may require that you change a file on your
computer (you only have to do this once). Try running:
```
export(testdata, "path/where/to/save/testdata.xlsx")
```
If this results in an error, try the following:
* Run the following lines in Rstudio:
```
if(!file.exists("~/.Rprofile")) # only create if not already there
file.create("~/.Rprofile") # (don't overwrite it)
file.edit("~/.Rprofile")
```
These lines, taken shamelessly from [Efficient R
programming](https://csgillespie.github.io/efficientR/3-3-r-startup.html#rprofile) (go read it,
it’s a great resource), look for and open the `.Rprofile` file, which is a file that is run
every time you open RStudio. This means that you can put any line of code there that will always be
executed whenever you launch RStudio.
* Add this line to the file:
```
Sys.setenv("R_ZIPCMD" = "C:/Program Files (x86)/Rtools/zip.exe")
```
This tells R to use `zip.exe` as the default zip tool, which is needed to export files to the
Excel format. Try it out by restarting RStudio, and then running the following lines:
```
library(rio)
data(mtcars)
export(mtcars, "mtcars.xlsx")
```
You should find `mtcars.xlsx` inside your working directory. You can check your working
directory with `getwd()`.
`{rio}` should cover all your needs, but if not, there is very likely a package out there that will
import the data you need.
3\.2 Writing any object to disk
-------------------------------
`{rio}` is an amazing package, but is only able to write tabular representations of data. What if you
would like to save, say, a list containing any arbitrary object? This is possible with the
`saveRDS()` function. Literally anything can be saved with `saveRDS()`:
```
my_list <- list("this is a list",
list("which contains a list", 12),
c(1, 2, 3, 4),
matrix(c(2, 4, 3, 1, 5, 7),
nrow = 2))
str(my_list)
```
```
## List of 4
## $ : chr "this is a list"
## $ :List of 2
## ..$ : chr "which contains a list"
## ..$ : num 12
## $ : num [1:4] 1 2 3 4
## $ : num [1:2, 1:3] 2 4 3 1 5 7
```
`my_list` is a list containing a string, a list which contains a string and a number, a vector and
a matrix… Now suppose that computing this list takes a very long time. For example, imagine that
each element of the list is the result of estimating a very complex model on a simulated
dataset, which takes hours to run. Because this takes so long to compute, you’d want to save
it to disk. This is possible with `saveRDS()`:
```
saveRDS(my_list, "my_list.RDS")
```
The next day, after having freshly started your computer and launched RStudio, it is possible to
retrieve the object exactly like it was using `readRDS()`:
```
my_list <- readRDS("my_list.RDS")
str(my_list)
```
```
## List of 4
## $ : chr "this is a list"
## $ :List of 2
## ..$ : chr "which contains a list"
## ..$ : num 12
## $ : num [1:4] 1 2 3 4
## $ : num [1:2, 1:3] 2 4 3 1 5 7
```
Even if you want to save a regular dataset, using `saveRDS()` might be a good idea, because the data
gets compressed (compression is controlled by the `compress` argument of `saveRDS()` and is enabled
by default). However, keep in mind that this format will only be readable by R, so if you need to
share this data with colleagues that use another tool, save it in another format.
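For example, you could keep an `.rds` copy for your own R workflow and export a portable copy for colleagues (an illustrative sketch):
```
saveRDS(mtcars, "mtcars.rds") # compact, but readable only by R
export(mtcars, "mtcars.csv")  # portable, readable by other tools
```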
3\.3 Using RStudio projects to manage paths
-------------------------------------------
Managing paths can be painful, especially if you’re collaborating with a colleague and both of you
saved the data in paths that are different. Whenever one of you wants to work on the script, the
path will need to be adapted first. The best way to avoid that is to use projects with RStudio.
Imagine that you are working on a project entitled “housing”. You will create a folder called
“housing” somewhere on your computer and inside this folder have another folder called “data”, then
a bunch of other folders containing different files or the outputs of your analysis. What matters
here is that you have a folder called “data” which contains the datasets you will analyze. When
you are inside an RStudio project, granted that you chose your “housing” folder as the folder to
host the project, you can read the data by simply specifying the path like so:
```
my_data <- import("data/data.csv")
```
Contrast this to what you would need to write if you were not using a project:
```
my_data <- import("C:/My Documents/Castor/Work/Projects/Housing/data/data.csv")
```
Not only is that longer, but if Castor is working on this project with Pollux, Pollux would need
to change the above line to this:
```
my_data <- import("C:/My Documents/Pollux/Work/Projects/Housing/data/data.csv")
```
whenever Pollux needs to work on it. Another, similar issue is that if you need to write something
to disk, such as a dataset or a plot, you would also need to specify the whole path:
```
export(my_data, "C:/My Documents/Pollux/Work/Projects/Housing/data/data.csv")
```
If you forget to write the whole path, then the dataset will be saved in the standard working
directory, which is your “My Documents” folder on Windows, and “Home” on GNU\+Linux or macOS. You
can check what the working directory is with the `getwd()` function:
```
getwd()
```
On a fresh session on my computer this returns:
```
"/home/bruno"
```
or, on Windows:
```
"C:/Users/Bruno/Documents"
```
but if you call this function inside a project, it will return the path to your project. It is also
possible to set the working directory with `setwd()`, so you don’t need to always write the full
path, meaning that you can write this:
```
setwd("the/path/I/want/")
import("data/my_data.csv")
export(processed_data, "processed_data.xlsx")
```
instead of:
```
import("the/path/I/want/data/my_data.csv")
export(processed_data, "the/path/I/want/processed_data.xlsx")
```
However, I really, really, really urge you never to use `setwd()`. Use projects instead!
Using projects saves a lot of pain in the long run.
| R Programming |
b-rodrigues.github.io | https://b-rodrigues.github.io/modern_R/descriptive-statistics-and-data-manipulation.html |
Chapter 4 Descriptive statistics and data manipulation
======================================================
Now that we are familiar with some R objects and know how to import data, it is time to write some
code. In this chapter, we are going to compute descriptive statistics for a single dataset, but
also for a list of datasets later in the chapter. However, I will not give a list of functions to
compute descriptive statistics; if you need a specific function, you can find it easily in the *Help*
pane in RStudio or using any modern internet search engine. What I will do is show you a workflow
that allows you to compute the descriptive statistics you need quickly. R has a lot of built\-in
functions for descriptive statistics; however, if you want to compute statistics for different
sub\-groups, some more complex manipulations are needed. At least this was true in the past.
Nowadays, thanks to the packages from the `{tidyverse}`, it is very easy and fast to compute
descriptive statistics by any stratifying variable(s). The package we are going to use for this is
called `{dplyr}`. `{dplyr}` contains a lot of functions that make manipulating data and computing
descriptive statistics very easy. To make things easier for now, we are going to use example data
included with `{dplyr}`, so there is no need to import an external dataset; this does not change anything
about the example that we are going to study here; the source of the data does not matter. Using
`{dplyr}` is possible only if the data you are working with is already in a useful shape. When data
is messier, you will need to first manipulate it to bring it into a *tidy* format. For this, we will
use `{tidyr}`, which is a very useful package for reshaping data and doing advanced cleaning of your
data. All these tidyverse functions are also called *verbs*. However, before getting to know these
verbs, let’s do an analysis using standard, or *base* R functions. This will be the benchmark
against which we are going to measure a `{tidyverse}` workflow.
4\.1 A data exploration exercise using *base* R
-----------------------------------------------
Let’s first load the `starwars` data set, included in the `{dplyr}` package:
```
library(dplyr)
data(starwars)
```
Let’s first take a look at the data:
```
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
This data contains information on Star Wars characters. The first question you have to answer is
to find the average height of the characters:
```
mean(starwars$height)
```
```
## [1] NA
```
As discussed in Chapter 2, `$` allows you to access columns of `data.frame` objects.
Because there are `NA` values in the data, the result is also `NA`. To get the result, you need to
add an option to `mean()`:
```
mean(starwars$height, na.rm = TRUE)
```
```
## [1] 174.358
```
Let’s also take a look at the standard deviation:
```
sd(starwars$height, na.rm = TRUE)
```
```
## [1] 34.77043
```
It might be more informative to compute these two statistics by sex, so for this, we are going
to use `aggregate()`:
```
aggregate(starwars$height,
by = list(sex = starwars$sex),
mean)
```
```
## sex x
## 1 female NA
## 2 hermaphroditic 175
## 3 male NA
## 4 none NA
```
Oh, shoot! Most groups have missing values in them, so we get `NA` back. We need to use `na.rm = TRUE`
just like before. Thankfully, it is possible to pass this option to `mean()` inside `aggregate()` as well:
```
aggregate(starwars$height,
by = list(sex = starwars$sex),
mean, na.rm = TRUE)
```
```
## sex x
## 1 female 169.2667
## 2 hermaphroditic 175.0000
## 3 male 179.1053
## 4 none 131.2000
```
Later in the book, we are also going to see how to define our own functions (with the default options that
are useful to us), and this will also help in this sort of situation.
Even though we can use `na.rm = TRUE`, let’s also use `subset()` to filter out the `NA` values beforehand:
```
starwars_no_nas <- subset(starwars,
!is.na(height))
aggregate(starwars_no_nas$height,
by = list(sex = starwars_no_nas$sex),
mean)
```
```
## sex x
## 1 female 169.2667
## 2 hermaphroditic 175.0000
## 3 male 179.1053
## 4 none 131.2000
```
(`aggregate()` also has a `subset =` option, but I prefer to explicitly subset the data set with `subset()`).
Even if you are not familiar with `aggregate()`, I believe the above lines are quite
self\-explanatory. You need to provide `aggregate()` with 3 things: the variable you want to
summarize (or only the data frame, if you want to summarize all variables), a list of grouping
variables and then the function that will be applied to each subgroup. And by the way, to test for
`NA`, one uses the function `is.na()` not something like `species == "NA"` or anything like that.
`!is.na()` does the opposite (`!` reverses booleans, so `!TRUE` becomes `FALSE` and vice\-versa).
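To see why something like `species == "NA"` or even `species == NA` cannot work, note that comparisons involving `NA` return `NA` themselves (a quick illustration, not from the original text):
```
NA == NA
```
```
## [1] NA
```
```
is.na(NA)
```
```
## [1] TRUE
```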
You can easily add another grouping variable:
```
aggregate(starwars_no_nas$height,
by = list(Sex = starwars_no_nas$sex,
`Hair color` = starwars_no_nas$hair_color),
mean)
```
```
## Sex Hair color x
## 1 female auburn 150.0000
## 2 male auburn, grey 180.0000
## 3 male auburn, white 182.0000
## 4 female black 166.3333
## 5 male black 176.2500
## 6 male blond 176.6667
## 7 female blonde 168.0000
## 8 female brown 160.4000
## 9 male brown 182.6667
## 10 male brown, grey 178.0000
## 11 male grey 170.0000
## 12 female none 188.2500
## 13 male none 182.2414
## 14 none none 148.0000
## 15 female white 167.0000
## 16 male white 152.3333
```
or use another function:
```
aggregate(starwars_no_nas$height,
by = list(Sex = starwars_no_nas$sex),
sd)
```
```
## Sex x
## 1 female 15.32256
## 2 hermaphroditic NA
## 3 male 36.01075
## 4 none 49.14977
```
(let’s ignore the `NA`s). It is important to note that `aggregate()` returns a `data.frame` object.
You can only give one function to `aggregate()`, so if you need the mean and the standard deviation of `height`,
you must do it in two steps.
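For instance, you could call `aggregate()` twice and `merge()` the two results by the grouping variable (an illustrative sketch, not from the original text):
```
heights_mean <- aggregate(starwars_no_nas$height,
                          by = list(sex = starwars_no_nas$sex),
                          mean)
heights_sd <- aggregate(starwars_no_nas$height,
                        by = list(sex = starwars_no_nas$sex),
                        sd)
merge(heights_mean, heights_sd, by = "sex") # columns x.x (mean) and x.y (sd)
```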
Since R 4\.1, a new infix operator `|>` has been introduced, which is really handy for writing the kind of
code we’ve been looking at in this chapter. `|>` is also called a pipe, or the *base* pipe to distinguish
it from *another* pipe that we’ll discuss in the next section. For now, let’s learn about `|>`.
Consider the following:
```
10 |> sqrt()
```
```
## [1] 3.162278
```
This computes `sqrt(10)`; so what `|>` does is pass the left hand side (`10`, in the example above) to the
right hand side (`sqrt()`). Using `|>` might seem more complicated and verbose than not using it, but you
will see in a bit why it can be useful. The next function I would like to introduce at this point is `with()`.
`with()` makes it possible to apply functions on `data.frame` columns without having to write `$` all the time.
For example, consider this:
```
mean(starwars$height, na.rm = TRUE)
```
```
## [1] 174.358
```
```
with(starwars,
mean(height, na.rm = TRUE))
```
```
## [1] 174.358
```
The advantage of using `with()` is that we can directly reference `height` without using `$`. Here again, this
is more verbose than simply using `$`… so why bother with it? It turns out that by combining `|>` and `with()`,
we can write very clean and concise code. Let’s go back to a previous example to illustrate this idea:
```
starwars_no_nas <- subset(starwars,
!is.na(height))
aggregate(starwars_no_nas$height,
by = list(sex = starwars_no_nas$sex),
mean)
```
```
## sex x
## 1 female 169.2667
## 2 hermaphroditic 175.0000
## 3 male 179.1053
## 4 none 131.2000
```
First, we created a new dataset where we filtered out rows where `height` is `NA`. This dataset is useless otherwise,
but we need it for the next part, where we actually do what we want (computing the average `height` by `sex`).
Using `|>` and `with()`, we can write this in one go:
```
starwars |>
subset(!is.na(sex)) |>
with(aggregate(height,
by = list(Species = species,
Sex = sex),
mean))
```
```
## Species Sex x
## 1 Clawdite female 168.0000
## 2 Human female NA
## 3 Kaminoan female 213.0000
## 4 Mirialan female 168.0000
## 5 Tholothian female 184.0000
## 6 Togruta female 178.0000
## 7 Twi'lek female 178.0000
## 8 Hutt hermaphroditic 175.0000
## 9 Aleena male 79.0000
## 10 Besalisk male 198.0000
## 11 Cerean male 198.0000
## 12 Chagrian male 196.0000
## 13 Dug male 112.0000
## 14 Ewok male 88.0000
## 15 Geonosian male 183.0000
## 16 Gungan male 208.6667
## 17 Human male NA
## 18 Iktotchi male 188.0000
## 19 Kaleesh male 216.0000
## 20 Kaminoan male 229.0000
## 21 Kel Dor male 188.0000
## 22 Mon Calamari male 180.0000
## 23 Muun male 191.0000
## 24 Nautolan male 196.0000
## 25 Neimodian male 191.0000
## 26 Pau'an male 206.0000
## 27 Quermian male 264.0000
## 28 Rodian male 173.0000
## 29 Skakoan male 193.0000
## 30 Sullustan male 160.0000
## 31 Toong male 163.0000
## 32 Toydarian male 137.0000
## 33 Trandoshan male 190.0000
## 34 Twi'lek male 180.0000
## 35 Vulptereen male 94.0000
## 36 Wookiee male 231.0000
## 37 Xexto male 122.0000
## 38 Yoda's species male 66.0000
## 39 Zabrak male 173.0000
## 40 Droid none NA
```
So let’s unpack this. In the first two rows, using `|>`, we pass the `starwars` `data.frame` to `subset()`:
```
starwars |>
subset(!is.na(sex))
```
```
## # A tibble: 83 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywa… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## 7 Beru White… 165 75 brown light blue 47 fema… femin… Tatooi…
## 8 R5-D4 97 32 <NA> white,… red NA none mascu… Tatooi…
## 9 Biggs Dark… 183 84 black light brown 24 male mascu… Tatooi…
## 10 Obi-Wan Ke… 182 77 auburn… fair blue-g… 57 male mascu… Stewjon
## # … with 73 more rows, 4 more variables: species <chr>, films <list>,
## # vehicles <list>, starships <list>, and abbreviated variable names
## # ¹hair_color, ²skin_color, ³eye_color, ⁴birth_year, ⁵homeworld
```
As I explained before, this is exactly the same as `subset(starwars, !is.na(sex))`. Then, we pass the result of
`subset()` to the next function, `with()`. The first argument of `with()` must be a `data.frame`, and this is exactly
what `subset()` returns! So now the output of `subset()` is passed down to `with()`, which makes it now possible
to reference the columns of the `data.frame` in `aggregate()` directly. If you have a hard time understanding what
is going on, you can use `quote()` to inspect the code. `quote()` returns an expression without evaluating it:
```
quote(log(10))
```
```
## log(10)
```
Why am I bringing this up? Well, since `a |> f()` is exactly equal to `f(a)`, quoting code written with `|>` returns
the equivalent nested expression, with the pipe already expanded. For instance:
```
quote(10 |> log())
```
```
## log(10)
```
So let’s quote the big block of code from above:
```
quote(
starwars |>
subset(!is.na(sex)) |>
with(aggregate(height,
by = list(Species = species,
Sex = sex),
mean))
)
```
```
## with(subset(starwars, !is.na(sex)), aggregate(height, by = list(Species = species,
## Sex = sex), mean))
```
I think now you see why using `|>` makes code much clearer; the nested expression you would need to write otherwise
is much less readable, unless you define intermediate objects. And without `with()`, this is what you
would need to write:
```
b <- subset(starwars, !is.na(sex))
aggregate(b$height, by = list(Species = b$species, Sex = b$sex), mean)
```
To finish this section, let’s say that you wanted to have the average `height` and `mass` by sex. In this case
you need to specify the columns in `aggregate()` with `cbind()` (let’s use `na.rm = TRUE` again instead of
`subset()`ing the data beforehand):
```
starwars |>
with(aggregate(cbind(height, mass),
by = list(Sex = sex),
FUN = mean, na.rm = TRUE))
```
```
## Sex height mass
## 1 female 169.2667 54.68889
## 2 hermaphroditic 175.0000 1358.00000
## 3 male 179.1053 81.00455
## 4 none 131.2000 69.75000
```
Let’s now continue with some more advanced operations using this fake dataset:
```
survey_data_base <- as.data.frame(
tibble::tribble(
~id, ~var1, ~var2, ~var3,
1, 1, 0.2, 0.3,
2, 1.4, 1.9, 4.1,
3, 0.1, 2.8, 8.9,
4, 1.7, 1.9, 7.6
)
)
```
```
survey_data_base
```
```
## id var1 var2 var3
## 1 1 1.0 0.2 0.3
## 2 2 1.4 1.9 4.1
## 3 3 0.1 2.8 8.9
## 4 4 1.7 1.9 7.6
```
Depending on what you want to do with this data, it is not in the right shape. For example, it
would not be possible to simply compute the average of `var1`, `var2` and `var3` for each `id`.
This is because this would require running `mean()` by row, which is not very easy, since R is not
really suited to row\-based workflows. Well, I’m lying a little bit here; it turns out
that R comes with a `rowMeans()` function. So this would work:
```
survey_data_base |>
transform(mean_id = rowMeans(cbind(var1, var2, var3))) #transform adds a column to a data.frame
```
```
## id var1 var2 var3 mean_id
## 1 1 1.0 0.2 0.3 0.500000
## 2 2 1.4 1.9 4.1 2.466667
## 3 3 0.1 2.8 8.9 3.933333
## 4 4 1.7 1.9 7.6 3.733333
```
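For other row\-wise statistics there is no dedicated function, but `apply()` with `MARGIN = 1` is a generic, if clunkier, fallback (an illustrative sketch, not from the original text):
```
survey_data_base |>
  transform(sd_id = apply(cbind(var1, var2, var3), 1, sd)) # sd of each row
```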
But there is no `rowSD()` or `rowMax()`, etc… (and the `apply()` fallback gets clunky fast), so it is much better to reshape the data and put it in a
format that gives us maximum flexibility. To reshape the data, we’ll be using the aptly\-named `reshape()` command:
```
survey_data_long <- reshape(survey_data_base,
                            varying = list(2:4),  # columns 2 to 4 (var1, var2, var3) hold the repeated measures
                            v.names = "variable", # name of the new stacked column
                            direction = "long")   # reshape from wide to long
```
We can now easily compute the average of `variable` for each `id`:
```
aggregate(survey_data_long$variable,
by = list(Id = survey_data_long$id),
mean)
```
```
## Id x
## 1 1 0.500000
## 2 2 2.466667
## 3 3 3.933333
## 4 4 3.733333
```
or compute any other statistic:
```
aggregate(survey_data_long$variable,
by = list(Id = survey_data_long$id),
max)
```
```
## Id x
## 1 1 1.0
## 2 2 4.1
## 3 3 8.9
## 4 4 7.6
```
As you can see, R comes with very powerful functions right out of the box, ready to use. When I was
studying, unfortunately, my professors had been brought up on FORTRAN loops, so we had to do all
this using loops (not reshaping, thankfully), which was not so easy.
Now that we have seen how *base* R works, let’s redo the analysis using `{tidyverse}` verbs.
The `{tidyverse}` provides many more functions, each of them doing only one single thing. You will
shortly see why this is quite important; by focusing on just one task, and by focusing on the data frame
as the central object, it becomes possible to build really complex workflows, piece by piece,
very easily.
But before deep diving into the `{tidyverse}`, let’s take a moment to discuss about another infix
operator, `%>%`.
4\.2 Smoking is bad for you, but pipes are your friend
------------------------------------------------------
The title of this section might sound weird at first, but by the end of it, you’ll get this
(terrible) pun.
You probably know the painting by René Magritte, *La trahison des images*: a pipe, captioned “Ceci n’est pas une pipe”.
It turns out there’s an R package from the `tidyverse` that is called `magrittr`. What does this
package do? This package introduced *pipes* to R, way before `|>` in R 4\.1\. Pipes are a concept
from the Unix operating system; if you’re using a GNU\+Linux distribution or macOS, you’re basically
using a *modern* unix (that’s an oversimplification, but I’m an economist by training, and
outrageously oversimplifying things is what we do, deal with it). The *magrittr* pipe is written as
`%>%`. Just like `|>`, `%>%` takes the left hand side to feed it as the first argument of the
function in the right hand side. Try the following:
```
library(magrittr)
```
```
16 %>% sqrt
```
```
## [1] 4
```
You can chain multiple functions, as you can with `|>`:
```
16 %>%
sqrt %>%
log
```
```
## [1] 1.386294
```
But unlike with `|>`, you can omit the `()`. `%>%` also has other features. For example, you can
pipe things to other infix operators, such as `+`. You can use `+` as usual:
```
2 + 12
```
```
## [1] 14
```
Or as a prefix operator:
```
`+`(2, 12)
```
```
## [1] 14
```
You can use this notation with `%>%`:
```
16 %>% sqrt %>% `+`(18)
```
```
## [1] 22
```
This also works using `|>` since R version 4\.2, but only if you use the `_` pipe placeholder:
```
16 |> sqrt() |> `+`(x = _, 18)
```
```
## [1] 22
```
The value `16` got fed to `sqrt()`, and the output of `sqrt(16)` (4\) got fed to `+(18)`
(so we got `+(4, 18)` \= 22\). Without `%>%` you’d write the line just above like this:
```
sqrt(16) + 18
```
```
## [1] 22
```
Just like before, with `|>`, this might seem overly complicated, but using these pipes will
make our code much more readable. I’m sure you’ll be convinced by the end of this chapter.
`%>%` is not the only pipe operator in `magrittr`. There’s `%T>%`, `%<>%` and `%$%`. All have their
uses, but are basically shortcuts for some common tasks combining `%>%` with another function. This
means that you can live without them, and because of this, I will not discuss them.
4\.3 The `{tidyverse}`’s *enfant prodige*: `{dplyr}`
----------------------------------------------------
The best way to get started with the tidyverse packages is to get to know `{dplyr}`. `{dplyr}`
provides a lot of very useful functions that make it very easy to get descriptive statistics or
add new columns to your data.
### 4\.3\.1 A first taste of data manipulation with `{dplyr}`
This section will walk you through a typical analysis using `{dplyr}` functions. Just go with it; I
will give more details in the next sections.
First, let’s load `{dplyr}` and the included `starwars` dataset. Let’s also take a look at the
first 6 lines of the dataset:
```
library(dplyr)
data(starwars)
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
`data(starwars)` loads the example dataset called `starwars` that is included in the package
`{dplyr}`. As I said earlier, this is just an example; you could have loaded an external dataset,
from a `.csv` file for instance. This does not matter for what comes next.
Like we saw earlier, R includes a lot of functions for descriptive statistics, such as `mean()`,
`sd()`, `cov()`, and many more. What `{dplyr}` brings to the table is a grammar of data
manipulation that makes it very easy to apply descriptive statistics functions, or any other
function, to your data.
Just like before, we are going to compute the average height by `sex`:
```
starwars %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE))
```
```
## # A tibble: 5 × 2
## sex mean_height
## <chr> <dbl>
## 1 female 169.
## 2 hermaphroditic 175
## 3 male 179.
## 4 none 131.
## 5 <NA> 181.
```
The very nice thing about using `%>%` and `{dplyr}` verbs/functions, is that this is really
readable. The above three lines can be translated like so in English:
*Take the starwars dataset, then group by sex, then compute the mean height (for each subgroup) by
omitting missing values.*
`%>%` can be translated by “then”. Without `%>%` you would need to change the code to:
```
summarise(group_by(starwars, sex), mean(height, na.rm = TRUE))
```
```
## # A tibble: 5 × 2
## sex `mean(height, na.rm = TRUE)`
## <chr> <dbl>
## 1 female 169.
## 2 hermaphroditic 175
## 3 male 179.
## 4 none 131.
## 5 <NA> 181.
```
Unlike with the *base* approach, each function does only one thing. In base R, `aggregate()` was
used to also define the subgroups. This is not the case with `{dplyr}`: one function creates the
groups (`group_by()`) and another one computes the summaries (`summarise()`). Also, `group_by()`
creates a specific subgroup for individuals where `sex` is missing. This is the last line in the
data frame, where `sex` is `NA`. Another nice thing is that you can name the column containing the
average height; I chose to name it `mean_height`.
Now, let’s suppose that we want to filter some data first:
```
starwars %>%
filter(gender == "masculine") %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE))
```
```
## # A tibble: 3 × 2
## sex mean_height
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male 179.
## 3 none 140
```
Again, the `%>%` makes the above lines of code very easy to read. Without it, one would need to
write:
```
summarise(group_by(filter(starwars, gender == "masculine"), sex), mean(height, na.rm = TRUE))
```
```
## # A tibble: 3 × 2
## sex `mean(height, na.rm = TRUE)`
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male 179.
## 3 none 140
```
I think you agree with me that this is not very readable. One way to make it more readable would
be to save intermediary variables:
```
filtered_data <- filter(starwars, gender == "masculine")
grouped_data <- group_by(filtered_data, sex)
summarise(grouped_data, mean(height))
```
```
## # A tibble: 3 × 2
## sex `mean(height)`
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male NA
## 3 none NA
```
But this can get very tedious. Once you’re used to `%>%`, you won’t go back to not using it.
Before continuing, and to make things clearer: `filter()`, `group_by()` and `summarise()` are
functions that are included in `{dplyr}`. `%>%` is actually a function from `{magrittr}`, but this
package gets loaded on the fly when you load `{dplyr}`, so you do not need to worry about it.
The results of all these operations that use `{dplyr}` functions are actually other datasets, or
`tibble`s. This means that you can save them in a variable, or write them to disk, and then work
with them like any other dataset.
```
mean_height <- starwars %>%
group_by(sex) %>%
summarise(mean(height))
class(mean_height)
```
```
## [1] "tbl_df" "tbl" "data.frame"
```
```
head(mean_height)
```
```
## # A tibble: 5 × 2
## sex `mean(height)`
## <chr> <dbl>
## 1 female NA
## 2 hermaphroditic 175
## 3 male NA
## 4 none NA
## 5 <NA> NA
```
You could then write this data to disk, for instance as a `.csv` file with `rio::export()`. A
minimal sketch (the file name is just an example):
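```
rio::export(mean_height, "mean_height.csv")
```
If you need more than the mean of the height, you can keep adding as many functions as needed
(another advantage over `aggregate()`):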
```
summary_table <- starwars %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n())
summary_table
```
```
## # A tibble: 5 × 4
## sex mean_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 hermaphroditic 175 NA 1
## 3 male 179. 1297. 60
## 4 none 131. 2416. 6
## 5 <NA> 181. 8.33 4
```
I’ve added more functions, namely `var()`, to get the variance of height, and `n()`, which
is a function from `{dplyr}`, not base R, to get the number of observations. This is quite useful,
because we see that there is a group with only one individual. Let’s focus on the
sexes for which we have more than 1 individual. Since we saved all the previous operations (which
produce a `tibble`) in a variable, we can keep going from there:
```
summary_table2 <- summary_table %>%
filter(n_obs > 1)
summary_table2
```
```
## # A tibble: 4 × 4
## sex mean_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 male 179. 1297. 60
## 3 none 131. 2416. 6
## 4 <NA> 181. 8.33 4
```
As mentioned before, there are a lot of `NA`s; this is because by default, `mean()` and `var()`
return `NA` if even a single observation is `NA`. This is good, because it forces you to look at
the data to see what is going on. If you got a number back even though there were `NA`s, you could
very easily miss these missing values. It is better for functions to fail early and often than the
opposite. This is why we keep using `na.rm = TRUE` for `mean()` and `var()`.
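To see this behaviour on a minimal example:
```
mean(c(1, 2, NA))
```
```
## [1] NA
```
```
mean(c(1, 2, NA), na.rm = TRUE)
```
```
## [1] 1.5
```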
Now let’s actually take a look at the rows where `sex` is `NA`:
```
starwars %>%
filter(is.na(sex))
```
```
## # A tibble: 4 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Ric Olié 183 NA brown fair blue NA <NA> <NA> Naboo
## 2 Quarsh Pana… 183 NA black dark brown 62 <NA> <NA> Naboo
## 3 Sly Moore 178 48 none pale white NA <NA> <NA> Umbara
## 4 Captain Pha… NA NA unknown unknown unknown NA <NA> <NA> <NA>
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
There are only 4 rows where `sex` is `NA`. Let’s ignore them:
```
starwars %>%
filter(!is.na(sex)) %>%
group_by(sex) %>%
summarise(ave_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n()) %>%
filter(n_obs > 1)
```
```
## # A tibble: 3 × 4
## sex ave_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 male 179. 1297. 60
## 3 none 131. 2416. 6
```
And why not compute the same table, but first add another stratifying variable?
```
starwars %>%
filter(!is.na(sex)) %>%
group_by(sex, eye_color) %>%
summarise(ave_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n()) %>%
filter(n_obs > 1)
```
```
## `summarise()` has grouped output by 'sex'. You can override using the `.groups`
## argument.
```
```
## # A tibble: 12 × 5
## # Groups: sex [3]
## sex eye_color ave_height var_height n_obs
## <chr> <chr> <dbl> <dbl> <int>
## 1 female black 196. 612. 2
## 2 female blue 167 118. 6
## 3 female brown 160 42 5
## 4 female hazel 178 NA 2
## 5 male black 182 1197 7
## 6 male blue 190. 434. 12
## 7 male brown 167. 1663. 15
## 8 male orange 181. 1306. 7
## 9 male red 190. 0.5 2
## 10 male unknown 136 6498 2
## 11 male yellow 180. 2196. 9
## 12 none red 131 3571 3
```
Ok, that’s it for a first taste. We have already discovered some very useful `{dplyr}` functions:
`filter()`, `group_by()` and `summarise()`.
Now, let’s look at these functions in more detail.
### 4\.3\.2 Filter the rows of a dataset with `filter()`
We’re going to use the `Gasoline` dataset from the `plm` package, so install that first:
```
install.packages("plm")
```
Then load the required data:
```
data(Gasoline, package = "plm")
```
and load dplyr:
```
library(dplyr)
```
This dataset gives the consumption of gasoline for 18 countries from 1960 to 1978\. When you load
the data like this, it is a standard `data.frame`. `{dplyr}` functions can be used on standard
`data.frame` objects, but also on `tibble`s. `tibble`s are just like data frames, but with a better
print method (and other niceties). I’ll discuss the `{tibble}` package later, but for now, let’s
convert the data to a `tibble` and change its name, and also transform the `country` column to
lower case:
```
gasoline <- as_tibble(Gasoline)
gasoline <- gasoline %>%
mutate(country = tolower(country))
```
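A quick sanity check that we are now indeed working with a `tibble` (the class vector should
contain `tbl_df`):
```
class(gasoline)
## "tbl_df" "tbl" "data.frame"
```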
`filter()` is pretty straightforward. What if you would like to subset the data to focus on the
year 1969? Simple:
```
filter(gasoline, year == 1969)
```
```
## # A tibble: 18 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 belgium 1969 3.85 -5.86 -0.355 -8.52
## 3 canada 1969 4.86 -5.56 -1.04 -8.10
## 4 denmark 1969 4.17 -5.72 -0.407 -8.47
## 5 france 1969 3.77 -5.84 -0.315 -8.37
## 6 germany 1969 3.90 -5.83 -0.589 -8.44
## 7 greece 1969 4.89 -6.59 -0.180 -10.7
## 8 ireland 1969 4.21 -6.38 -0.272 -8.95
## 9 italy 1969 3.74 -6.28 -0.248 -8.67
## 10 japan 1969 4.52 -6.16 -0.417 -9.61
## 11 netherla 1969 3.99 -5.88 -0.417 -8.63
## 12 norway 1969 4.09 -5.74 -0.338 -8.69
## 13 spain 1969 3.99 -5.60 0.669 -9.72
## 14 sweden 1969 3.99 -7.77 -2.73 -8.20
## 15 switzerl 1969 4.21 -5.91 -0.918 -8.47
## 16 turkey 1969 5.72 -7.39 -0.298 -12.5
## 17 u.k. 1969 3.95 -6.03 -0.383 -8.47
## 18 u.s.a. 1969 4.84 -5.41 -1.22 -7.79
```
Let’s use `%>%`, since we’re familiar with it now:
```
gasoline %>%
filter(year == 1969)
```
```
## # A tibble: 18 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 belgium 1969 3.85 -5.86 -0.355 -8.52
## 3 canada 1969 4.86 -5.56 -1.04 -8.10
## 4 denmark 1969 4.17 -5.72 -0.407 -8.47
## 5 france 1969 3.77 -5.84 -0.315 -8.37
## 6 germany 1969 3.90 -5.83 -0.589 -8.44
## 7 greece 1969 4.89 -6.59 -0.180 -10.7
## 8 ireland 1969 4.21 -6.38 -0.272 -8.95
## 9 italy 1969 3.74 -6.28 -0.248 -8.67
## 10 japan 1969 4.52 -6.16 -0.417 -9.61
## 11 netherla 1969 3.99 -5.88 -0.417 -8.63
## 12 norway 1969 4.09 -5.74 -0.338 -8.69
## 13 spain 1969 3.99 -5.60 0.669 -9.72
## 14 sweden 1969 3.99 -7.77 -2.73 -8.20
## 15 switzerl 1969 4.21 -5.91 -0.918 -8.47
## 16 turkey 1969 5.72 -7.39 -0.298 -12.5
## 17 u.k. 1969 3.95 -6.03 -0.383 -8.47
## 18 u.s.a. 1969 4.84 -5.41 -1.22 -7.79
```
You can also filter more than just one year, by using the `%in%` operator:
```
gasoline %>%
filter(year %in% seq(1969, 1973))
```
```
## # A tibble: 90 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1970 4.08 -6.08 -0.597 -8.73
## 3 austria 1971 4.11 -6.04 -0.654 -8.64
## 4 austria 1972 4.13 -5.98 -0.596 -8.54
## 5 austria 1973 4.20 -5.90 -0.594 -8.49
## 6 belgium 1969 3.85 -5.86 -0.355 -8.52
## 7 belgium 1970 3.87 -5.80 -0.378 -8.45
## 8 belgium 1971 3.87 -5.76 -0.399 -8.41
## 9 belgium 1972 3.91 -5.71 -0.311 -8.36
## 10 belgium 1973 3.90 -5.64 -0.373 -8.31
## # … with 80 more rows
```
It is also possible to use `between()`, a helper function:
```
gasoline %>%
filter(between(year, 1969, 1973))
```
```
## # A tibble: 90 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1970 4.08 -6.08 -0.597 -8.73
## 3 austria 1971 4.11 -6.04 -0.654 -8.64
## 4 austria 1972 4.13 -5.98 -0.596 -8.54
## 5 austria 1973 4.20 -5.90 -0.594 -8.49
## 6 belgium 1969 3.85 -5.86 -0.355 -8.52
## 7 belgium 1970 3.87 -5.80 -0.378 -8.45
## 8 belgium 1971 3.87 -5.76 -0.399 -8.41
## 9 belgium 1972 3.91 -5.71 -0.311 -8.36
## 10 belgium 1973 3.90 -5.64 -0.373 -8.31
## # … with 80 more rows
```
To select non\-consecutive years:
```
gasoline %>%
filter(year %in% c(1969, 1973, 1977))
```
```
## # A tibble: 54 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1973 4.20 -5.90 -0.594 -8.49
## 3 austria 1977 3.93 -5.83 -0.422 -8.25
## 4 belgium 1969 3.85 -5.86 -0.355 -8.52
## 5 belgium 1973 3.90 -5.64 -0.373 -8.31
## 6 belgium 1977 3.85 -5.56 -0.432 -8.14
## 7 canada 1969 4.86 -5.56 -1.04 -8.10
## 8 canada 1973 4.90 -5.41 -1.13 -7.94
## 9 canada 1977 4.81 -5.34 -1.07 -7.77
## 10 denmark 1969 4.17 -5.72 -0.407 -8.47
## # … with 44 more rows
```
`%in%` tests if an object is part of a set.
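For example:
```
c(1969, 1970) %in% c(1969, 1973, 1977)
```
```
## [1]  TRUE FALSE
```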
### 4\.3\.3 Select columns with `select()`
While `filter()` allows you to keep or discard rows of data, `select()` allows you to keep or
discard entire columns. To keep columns:
```
gasoline %>%
select(country, year, lrpmg)
```
```
## # A tibble: 342 × 3
## country year lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
To discard them:
```
gasoline %>%
select(-country, -year, -lrpmg)
```
```
## # A tibble: 342 × 3
## lgaspcar lincomep lcarpcap
## <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -9.77
## 2 4.10 -6.43 -9.61
## 3 4.07 -6.41 -9.46
## 4 4.06 -6.37 -9.34
## 5 4.04 -6.32 -9.24
## 6 4.03 -6.29 -9.12
## 7 4.05 -6.25 -9.02
## 8 4.05 -6.23 -8.93
## 9 4.05 -6.21 -8.85
## 10 4.05 -6.15 -8.79
## # … with 332 more rows
```
To rename them:
```
gasoline %>%
select(country, date = year, lrpmg)
```
```
## # A tibble: 342 × 3
## country date lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
There’s also `rename()`:
```
gasoline %>%
rename(date = year)
```
```
## # A tibble: 342 × 6
## country date lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
`rename()` does not do any kind of selection, but just renames.
You can also use `select()` to re\-order columns:
```
gasoline %>%
select(year, country, lrpmg, everything())
```
```
## # A tibble: 342 × 6
## year country lrpmg lgaspcar lincomep lcarpcap
## <int> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 1960 austria -0.335 4.17 -6.47 -9.77
## 2 1961 austria -0.351 4.10 -6.43 -9.61
## 3 1962 austria -0.380 4.07 -6.41 -9.46
## 4 1963 austria -0.414 4.06 -6.37 -9.34
## 5 1964 austria -0.445 4.04 -6.32 -9.24
## 6 1965 austria -0.497 4.03 -6.29 -9.12
## 7 1966 austria -0.467 4.05 -6.25 -9.02
## 8 1967 austria -0.506 4.05 -6.23 -8.93
## 9 1968 austria -0.522 4.05 -6.21 -8.85
## 10 1969 austria -0.559 4.05 -6.15 -8.79
## # … with 332 more rows
```
`everything()` is a helper function, and there’s also `starts_with()` and `ends_with()`. For
example, what if we are only interested in columns whose names start with “l”?
```
gasoline %>%
select(starts_with("l"))
```
```
## # A tibble: 342 × 4
## lgaspcar lincomep lrpmg lcarpcap
## <dbl> <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -0.335 -9.77
## 2 4.10 -6.43 -0.351 -9.61
## 3 4.07 -6.41 -0.380 -9.46
## 4 4.06 -6.37 -0.414 -9.34
## 5 4.04 -6.32 -0.445 -9.24
## 6 4.03 -6.29 -0.497 -9.12
## 7 4.05 -6.25 -0.467 -9.02
## 8 4.05 -6.23 -0.506 -8.93
## 9 4.05 -6.21 -0.522 -8.85
## 10 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
`ends_with()` works in a similar fashion. There is also `contains()`:
```
gasoline %>%
select(country, year, contains("car"))
```
```
## # A tibble: 342 × 4
## country year lgaspcar lcarpcap
## <chr> <int> <dbl> <dbl>
## 1 austria 1960 4.17 -9.77
## 2 austria 1961 4.10 -9.61
## 3 austria 1962 4.07 -9.46
## 4 austria 1963 4.06 -9.34
## 5 austria 1964 4.04 -9.24
## 6 austria 1965 4.03 -9.12
## 7 austria 1966 4.05 -9.02
## 8 austria 1967 4.05 -8.93
## 9 austria 1968 4.05 -8.85
## 10 austria 1969 4.05 -8.79
## # … with 332 more rows
```
You can read more about these helper functions [here](https://tidyselect.r-lib.org/reference/language.html), but we’re going to look more into
them in a coming section.
Another verb, similar to `select()`, is `pull()`. Let’s compare the two:
```
gasoline %>%
select(lrpmg)
```
```
## # A tibble: 342 × 1
## lrpmg
## <dbl>
## 1 -0.335
## 2 -0.351
## 3 -0.380
## 4 -0.414
## 5 -0.445
## 6 -0.497
## 7 -0.467
## 8 -0.506
## 9 -0.522
## 10 -0.559
## # … with 332 more rows
```
```
gasoline %>%
pull(lrpmg) %>%
  head() # using head() because there are 342 elements in total
```
```
## [1] -0.3345476 -0.3513276 -0.3795177 -0.4142514 -0.4453354 -0.4970607
```
`pull()`, unlike `select()`, does not return a `tibble`, but only the column you want, as a
vector.
### 4\.3\.4 Group the observations of your dataset with `group_by()`
`group_by()` is a very useful verb; as the name implies, it allows you to create groups and then,
for example, compute descriptive statistics by groups. For example, let’s group our data by
country:
```
gasoline %>%
group_by(country)
```
```
## # A tibble: 342 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
It looks like nothing much happened, but if you look at the second line of the output you can read
the following:
```
## # Groups: country [18]
```
this means that the data is grouped, and every computation you will do now will take these groups
into account. It is also possible to group by more than one variable:
```
gasoline %>%
group_by(country, year)
```
```
## # A tibble: 342 × 6
## # Groups: country, year [342]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
and so on. You can then also ungroup:
```
gasoline %>%
group_by(country, year) %>%
ungroup()
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Once your data is grouped, the operations that will follow will be executed inside each group.
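For instance, in a grouped `mutate()`, the `mean()` below is computed within each country rather
than over the whole dataset (a quick sketch; the column name `demeaned_gaspcar` is made up for
the example):
```
gasoline %>%
  group_by(country) %>%
  mutate(demeaned_gaspcar = lgaspcar - mean(lgaspcar)) %>% # group-wise mean
  ungroup()
```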
### 4\.3\.5 Get summary statistics with `summarise()`
Ok, now that we have learned the basic verbs, we can start to do more interesting stuff. For
example, one might want to compute the average gasoline consumption in each country, for
the whole period:
```
gasoline %>%
group_by(country) %>%
summarise(mean(lgaspcar))
```
```
## # A tibble: 18 × 2
## country `mean(lgaspcar)`
## <chr> <dbl>
## 1 austria 4.06
## 2 belgium 3.92
## 3 canada 4.86
## 4 denmark 4.19
## 5 france 3.82
## 6 germany 3.89
## 7 greece 4.88
## 8 ireland 4.23
## 9 italy 3.73
## 10 japan 4.70
## 11 netherla 4.08
## 12 norway 4.11
## 13 spain 4.06
## 14 sweden 4.01
## 15 switzerl 4.24
## 16 turkey 5.77
## 17 u.k. 3.98
## 18 u.s.a. 4.82
```
`mean()` was given as an argument to `summarise()`, which is a `{dplyr}` verb. What we get is
another `tibble`, that contains the variable we used to group, as well as the average per country.
We can also rename this column:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar))
```
```
## # A tibble: 18 × 2
## country mean_gaspcar
## <chr> <dbl>
## 1 austria 4.06
## 2 belgium 3.92
## 3 canada 4.86
## 4 denmark 4.19
## 5 france 3.82
## 6 germany 3.89
## 7 greece 4.88
## 8 ireland 4.23
## 9 italy 3.73
## 10 japan 4.70
## 11 netherla 4.08
## 12 norway 4.11
## 13 spain 4.06
## 14 sweden 4.01
## 15 switzerl 4.24
## 16 turkey 5.77
## 17 u.k. 3.98
## 18 u.s.a. 4.82
```
and because the output is a `tibble`, we can continue to use `{dplyr}` verbs on it:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar)) %>%
filter(country == "france")
```
```
## # A tibble: 1 × 2
## country mean_gaspcar
## <chr> <dbl>
## 1 france 3.82
```
`summarise()` is a very useful verb. For example, we can compute several descriptive statistics at once:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar),
sd_gaspcar = sd(lgaspcar),
max_gaspcar = max(lgaspcar),
min_gaspcar = min(lgaspcar))
```
```
## # A tibble: 18 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92
## 2 belgium 3.92 0.103 4.16 3.82
## 3 canada 4.86 0.0262 4.90 4.81
## 4 denmark 4.19 0.158 4.50 4.00
## 5 france 3.82 0.0499 3.91 3.75
## 6 germany 3.89 0.0239 3.93 3.85
## 7 greece 4.88 0.255 5.38 4.48
## 8 ireland 4.23 0.0437 4.33 4.16
## 9 italy 3.73 0.220 4.05 3.38
## 10 japan 4.70 0.684 6.00 3.95
## 11 netherla 4.08 0.286 4.65 3.71
## 12 norway 4.11 0.123 4.44 3.96
## 13 spain 4.06 0.317 4.75 3.62
## 14 sweden 4.01 0.0364 4.07 3.91
## 15 switzerl 4.24 0.102 4.44 4.05
## 16 turkey 5.77 0.329 6.16 5.14
## 17 u.k. 3.98 0.0479 4.10 3.91
## 18 u.s.a. 4.82 0.0219 4.86 4.79
```
Because the output is a `tibble`, you can save it in a variable of course:
```
desc_gasoline <- gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar),
sd_gaspcar = sd(lgaspcar),
max_gaspcar = max(lgaspcar),
min_gaspcar = min(lgaspcar))
```
And then you can answer questions such as, *which country has the maximum average gasoline
consumption?*:
```
desc_gasoline %>%
filter(max(mean_gaspcar) == mean_gaspcar)
```
```
## # A tibble: 1 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 turkey 5.77 0.329 6.16 5.14
```
Turns out it’s Turkey. What about the minimum consumption?
```
desc_gasoline %>%
filter(min(mean_gaspcar) == mean_gaspcar)
```
```
## # A tibble: 1 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 italy 3.73 0.220 4.05 3.38
```
Because the output of `{dplyr}` verbs is a `tibble`, it is possible to continue working with it.
This is a shortcoming of the base `summary()` function: the object it returns is not very easy to
manipulate.
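For comparison, here is what base R gives you (a quick check):
```
class(summary(gasoline$lgaspcar))
## "summaryDefault" "table": not a tibble, so harder to keep manipulating
```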
### 4\.3\.6 Adding columns with `mutate()` and `transmute()`
`mutate()` adds a column to the `tibble`, which can contain any transformation of any other
variable:
```
gasoline %>%
group_by(country) %>%
mutate(n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap `n()`
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
Using `mutate()` I’ve added a column that counts how many times the country appears in the `tibble`,
using `n()`, another `{dplyr}` function. There’s also `count()` and `tally()`, which we are going to
see further down. It is also possible to rename the column on the fly:
```
gasoline %>%
group_by(country) %>%
mutate(count = n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap count
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
It is possible to do any arbitrary operation:
```
gasoline %>%
group_by(country) %>%
mutate(spam = exp(lgaspcar + lincomep))
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap spam
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 0.100
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 0.0978
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 0.0969
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 0.0991
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 0.102
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 0.104
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 0.110
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 0.113
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 0.115
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 0.122
## # … with 332 more rows
```
`transmute()` is the same as `mutate()`, but only returns the created variable:
```
gasoline %>%
group_by(country) %>%
transmute(spam = exp(lgaspcar + lincomep))
```
```
## # A tibble: 342 × 2
## # Groups: country [18]
## country spam
## <chr> <dbl>
## 1 austria 0.100
## 2 austria 0.0978
## 3 austria 0.0969
## 4 austria 0.0991
## 5 austria 0.102
## 6 austria 0.104
## 7 austria 0.110
## 8 austria 0.113
## 9 austria 0.115
## 10 austria 0.122
## # … with 332 more rows
```
### 4\.3\.7 Joining `tibble`s with `full_join()`, `left_join()`, `right_join()` and all the others
I will end this section on `{dplyr}` with the very useful `*_join()` verbs. Let’s first load
another dataset from the `plm` package, `SumHes`, and convert it to a `tibble`
and rename it:
```
data(SumHes, package = "plm")
pwt <- SumHes %>%
as_tibble() %>%
mutate(country = tolower(country))
```
Let’s take a quick look at the data:
```
glimpse(pwt)
```
```
## Rows: 3,250
## Columns: 7
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 19…
## $ country <chr> "algeria", "algeria", "algeria", "algeria", "algeria", "algeri…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ pop <int> 10800, 11016, 11236, 11460, 11690, 11923, 12267, 12622, 12986,…
## $ gdp <int> 1723, 1599, 1275, 1517, 1589, 1584, 1548, 1600, 1758, 1835, 18…
## $ sr <dbl> 19.9, 21.1, 15.0, 13.9, 10.6, 11.0, 8.3, 11.3, 15.1, 18.2, 19.…
```
We can merge both `gasoline` and `pwt` by country and year, as these two variables are common to
both datasets. There are more countries and years in the `pwt` dataset, so when merging both, and
depending on which function you use, you will either have `NA`’s for the variables where there is
no match, or rows that will be dropped. Let’s start with `full_join()`:
```
gas_pwt_full <- gasoline %>%
full_join(pwt, by = c("country", "year"))
```
Let’s see which countries and years are included:
```
gas_pwt_full %>%
count(country, year)
```
```
## # A tibble: 3,307 × 3
## country year n
## <chr> <int> <int>
## 1 algeria 1960 1
## 2 algeria 1961 1
## 3 algeria 1962 1
## 4 algeria 1963 1
## 5 algeria 1964 1
## 6 algeria 1965 1
## 7 algeria 1966 1
## 8 algeria 1967 1
## 9 algeria 1968 1
## 10 algeria 1969 1
## # … with 3,297 more rows
```
As you see, every country and year was included, but what happened for, say, the U.S.S.R.? This country
is in `pwt` but not in `gasoline` at all:
```
gas_pwt_full %>%
filter(country == "u.s.s.r.")
```
```
## # A tibble: 26 × 11
## country year lgaspcar lincomep lrpmg lcarp…¹ opec com pop gdp sr
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <fct> <fct> <int> <int> <dbl>
## 1 u.s.s.r. 1960 NA NA NA NA no yes 214400 2397 37.9
## 2 u.s.s.r. 1961 NA NA NA NA no yes 217896 2542 39.4
## 3 u.s.s.r. 1962 NA NA NA NA no yes 221449 2656 38.4
## 4 u.s.s.r. 1963 NA NA NA NA no yes 225060 2681 38.4
## 5 u.s.s.r. 1964 NA NA NA NA no yes 227571 2854 39.5
## 6 u.s.s.r. 1965 NA NA NA NA no yes 230109 3049 39.9
## 7 u.s.s.r. 1966 NA NA NA NA no yes 232676 3247 39.9
## 8 u.s.s.r. 1967 NA NA NA NA no yes 235272 3454 40.2
## 9 u.s.s.r. 1968 NA NA NA NA no yes 237896 3730 40.6
## 10 u.s.s.r. 1969 NA NA NA NA no yes 240550 3808 37.9
## # … with 16 more rows, and abbreviated variable name ¹lcarpcap
```
As you probably guessed, the variables from `gasoline` that are not included in `pwt` are filled with
`NA`s. One could remove all these lines and only keep countries for which these variables are not
`NA` everywhere with `filter()`, but there is a simpler solution:
```
gas_pwt_inner <- gasoline %>%
inner_join(pwt, by = c("country", "year"))
```
Let’s use `tabyl()` from the `janitor` package, which is a very nice alternative to the `table()`
function from base R:
```
library(janitor)
gas_pwt_inner %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
Only countries with values in both datasets were returned: 15 of the 18 countries from `gasoline`.
Germany, the Netherlands and Switzerland were dropped, because their names do not match across the
two datasets (Germany is called “germany west” in `pwt` but “germany” in `gasoline`, and `gasoline`
truncates the other two to “netherla” and “switzerl”). I left the names as they are, to provide an
example of keys that do not match. If you needed these countries to match, you could harmonise the
names before joining, for instance with `recode()` (a sketch; the `pwt` spellings for the
Netherlands and Switzerland are assumptions):
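```
pwt_harmonised <- pwt %>%
  mutate(country = recode(country,
                          "germany west" = "germany",    # spelling per the text above
                          "netherlands"  = "netherla",   # assumed spelling in pwt
                          "switzerland"  = "switzerl"))  # assumed spelling in pwt
```
Let’s also look at the variables: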
```
glimpse(gas_pwt_inner)
```
```
## Rows: 285
## Columns: 11
## $ country <chr> "austria", "austria", "austria", "austria", "austria", "austr…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 4.173244, 4.100989, 4.073177, 4.059509, 4.037689, 4.033983, 4…
## $ lincomep <dbl> -6.474277, -6.426006, -6.407308, -6.370679, -6.322247, -6.294…
## $ lrpmg <dbl> -0.3345476, -0.3513276, -0.3795177, -0.4142514, -0.4453354, -…
## $ lcarpcap <dbl> -9.766840, -9.608622, -9.457257, -9.343155, -9.237739, -9.123…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ pop <int> 7048, 7087, 7130, 7172, 7215, 7255, 7308, 7338, 7362, 7384, 7…
## $ gdp <int> 5143, 5388, 5481, 5688, 5978, 6144, 6437, 6596, 6847, 7162, 7…
## $ sr <dbl> 24.3, 24.5, 23.3, 22.9, 25.2, 25.2, 26.7, 25.6, 25.7, 26.1, 2…
```
The variables from both datasets are in the joined data.
Contrast this to `semi_join()`:
```
gas_pwt_semi <- gasoline %>%
semi_join(pwt, by = c("country", "year"))
glimpse(gas_pwt_semi)
```
```
## Rows: 285
## Columns: 6
## $ country <chr> "austria", "austria", "austria", "austria", "austria", "austr…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 4.173244, 4.100989, 4.073177, 4.059509, 4.037689, 4.033983, 4…
## $ lincomep <dbl> -6.474277, -6.426006, -6.407308, -6.370679, -6.322247, -6.294…
## $ lrpmg <dbl> -0.3345476, -0.3513276, -0.3795177, -0.4142514, -0.4453354, -…
## $ lcarpcap <dbl> -9.766840, -9.608622, -9.457257, -9.343155, -9.237739, -9.123…
```
```
gas_pwt_semi %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
Only columns of `gasoline` are returned, and only rows of `gasoline` that were matched with rows
from `pwt`. `semi_join()` is not a commutative operation:
```
pwt_gas_semi <- pwt %>%
semi_join(gasoline, by = c("country", "year"))
glimpse(pwt_gas_semi)
```
```
## Rows: 285
## Columns: 7
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 19…
## $ country <chr> "canada", "canada", "canada", "canada", "canada", "canada", "c…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ pop <int> 17910, 18270, 18614, 18963, 19326, 19678, 20049, 20411, 20744,…
## $ gdp <int> 7258, 7261, 7605, 7876, 8244, 8664, 9093, 9231, 9582, 9975, 10…
## $ sr <dbl> 22.7, 21.5, 22.1, 21.9, 22.9, 24.8, 25.4, 23.1, 22.6, 23.4, 21…
```
```
gas_pwt_semi %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
The rows are the same, but not the columns.
`left_join()` and `right_join()` return all the rows from either the dataset that is on the
“left” (the first argument of the function) or on the “right” (the second argument of the
function), but all columns from both datasets. So depending on which countries you’re interested in,
you’re going to use either one of these functions:
```
gas_pwt_left <- gasoline %>%
left_join(pwt, by = c("country", "year"))
gas_pwt_left %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.05555556
## belgium 19 0.05555556
## canada 19 0.05555556
## denmark 19 0.05555556
## france 19 0.05555556
## germany 19 0.05555556
## greece 19 0.05555556
## ireland 19 0.05555556
## italy 19 0.05555556
## japan 19 0.05555556
## netherla 19 0.05555556
## norway 19 0.05555556
## spain 19 0.05555556
## sweden 19 0.05555556
## switzerl 19 0.05555556
## turkey 19 0.05555556
## u.k. 19 0.05555556
## u.s.a. 19 0.05555556
```
```
gas_pwt_right <- gasoline %>%
right_join(pwt, by = c("country", "year"))
gas_pwt_right %>%
tabyl(country) %>%
head()
```
```
## country n percent
## algeria 26 0.008
## angola 26 0.008
## argentina 26 0.008
## australia 26 0.008
## austria 26 0.008
## bangladesh 26 0.008
```
The last merge function is `anti_join()`:
```
gas_pwt_anti <- gasoline %>%
anti_join(pwt, by = c("country", "year"))
glimpse(gas_pwt_anti)
```
```
## Rows: 57
## Columns: 6
## $ country <chr> "germany", "germany", "germany", "germany", "germany", "germa…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 3.916953, 3.885345, 3.871484, 3.848782, 3.868993, 3.861049, 3…
## $ lincomep <dbl> -6.159837, -6.120923, -6.094258, -6.068361, -6.013442, -5.966…
## $ lrpmg <dbl> -0.1859108, -0.2309538, -0.3438417, -0.3746467, -0.3996526, -…
## $ lcarpcap <dbl> -9.342481, -9.183841, -9.037280, -8.913630, -8.811013, -8.711…
```
```
gas_pwt_anti %>%
tabyl(country)
```
```
## country n percent
## germany 19 0.3333333
## netherla 19 0.3333333
## switzerl 19 0.3333333
```
`gas_pwt_anti` has the columns of the `gasoline` dataset, as well as only the countries from
`gasoline` whose names do not match the spellings in `pwt`: “germany”, “netherla” and “switzerl”.
That was it for the basic `{dplyr}` verbs. Next, we’re going to learn about `{tidyr}`.
4\.4 Reshaping and sprucing up data with `{tidyr}`
--------------------------------------------------
Note: this section is going to be a lot harder than anything you’ve seen until now. Reshaping
data is tricky, and to really grok it, you need time, and you need to run each line, and see what
happens. Take your time, and don’t be discouraged.
Another important package from the `{tidyverse}` that goes hand in hand with `{dplyr}` is `{tidyr}`.
`{tidyr}` is the package you need when it’s time to reshape data.
I will start by presenting `pivot_wider()` and `pivot_longer()`.
### 4\.4\.1 `pivot_wider()` and `pivot_longer()`
Let’s first create a fake dataset:
```
library(tidyr)
```
```
survey_data <- tribble(
~id, ~variable, ~value,
1, "var1", 1,
1, "var2", 0.2,
NA, "var3", 0.3,
2, "var1", 1.4,
2, "var2", 1.9,
2, "var3", 4.1,
3, "var1", 0.1,
3, "var2", 2.8,
3, "var3", 8.9,
4, "var1", 1.7,
NA, "var2", 1.9,
4, "var3", 7.6
)
head(survey_data)
```
```
## # A tibble: 6 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
```
I used the `tribble()` function from the `{tibble}` package to create this fake dataset.
I’ll discuss this package later; for now, let’s focus on `{tidyr}`.
Let’s suppose that we need the data to be in the wide format, which means `var1`, `var2` and `var3`
need to be their own columns. To do this, we need to use the `pivot_wider()` function. Why *wide*?
Because the reshaped dataset will be wider, meaning it gains columns and loses rows.
```
survey_data %>%
pivot_wider(id_cols = id,
names_from = variable,
values_from = value)
```
```
## # A tibble: 5 × 4
## id var1 var2 var3
## <dbl> <dbl> <dbl> <dbl>
## 1 1 1 0.2 NA
## 2 NA NA 1.9 0.3
## 3 2 1.4 1.9 4.1
## 4 3 0.1 2.8 8.9
## 5 4 1.7 NA 7.6
```
Let’s go through `pivot_wider()`’s arguments: the first is `id_cols =` which requires the variable
that uniquely identifies the rows to be supplied. `names_from =` is where you input the variable that will
generate the names of the new columns. In our case, the `variable` column has three values; `var1`,
`var2` and `var3`, and these are now the names of the new columns. Finally, `values_from =` is where
you can specify the column containing the values that will fill the data frame.
I find the argument names `names_from =` and `values_from =` quite explicit.
As you can see, there are some missing values. Let’s suppose that we know that these missing values
are true 0’s. `pivot_wider()` has an argument called `values_fill =` that makes it easy to replace
the missing values:
```
survey_data %>%
pivot_wider(id_cols = id,
names_from = variable,
values_from = value,
values_fill = list(value = 0))
```
```
## # A tibble: 5 × 4
## id var1 var2 var3
## <dbl> <dbl> <dbl> <dbl>
## 1 1 1 0.2 0
## 2 NA 0 1.9 0.3
## 3 2 1.4 1.9 4.1
## 4 3 0.1 2.8 8.9
## 5 4 1.7 0 7.6
```
A list of variables and their respective values to replace NA’s with must be supplied to `values_fill`.
Let’s now use another dataset, which you can get from
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets/unemployment/all)
(downloaded from: [http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId\=12950\&IF\_Language\=eng\&MainTheme\=2\&FldrName\=3\&RFPath\=91](http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId=12950&IF_Language=eng&MainTheme=2&FldrName=3&RFPath=91)). This data set gives the unemployment rate for each Luxembourgish
canton from 2001 to 2015\. We will come back to this data later on to learn how to plot it. For now,
let’s use it to learn more about `{tidyr}`.
```
unemp_lux_data <- rio::import(
"https://raw.githubusercontent.com/b-rodrigues/modern_R/master/datasets/unemployment/all/unemployment_lux_all.csv"
)
head(unemp_lux_data)
```
```
## division year active_population of_which_non_wage_earners
## 1 Beaufort 2001 688 85
## 2 Beaufort 2002 742 85
## 3 Beaufort 2003 773 85
## 4 Beaufort 2004 828 80
## 5 Beaufort 2005 866 96
## 6 Beaufort 2006 893 87
## of_which_wage_earners total_employed_population unemployed
## 1 568 653 35
## 2 631 716 26
## 3 648 733 40
## 4 706 786 42
## 5 719 815 51
## 6 746 833 60
## unemployment_rate_in_percent
## 1 5.09
## 2 3.50
## 3 5.17
## 4 5.07
## 5 5.89
## 6 6.72
```
Now, let’s suppose that for our purposes, it would make more sense to have the data in a wide format,
where columns are “division times year” and the value is the unemployment rate. This can be easily done
by providing more than one column to `names_from =`.
```
unemp_lux_data2 <- unemp_lux_data %>%
filter(year %in% seq(2013, 2017),
str_detect(division, ".*ange$"),
!str_detect(division, ".*Canton.*")) %>%
select(division, year, unemployment_rate_in_percent) %>%
rowid_to_column()
unemp_lux_data2 %>%
pivot_wider(names_from = c(division, year),
values_from = unemployment_rate_in_percent)
```
```
## # A tibble: 48 × 49
## rowid Bertr…¹ Bertr…² Bertr…³ Diffe…⁴ Diffe…⁵ Diffe…⁶ Dudel…⁷ Dudel…⁸ Dudel…⁹
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 5.69 NA NA NA NA NA NA NA NA
## 2 2 NA 5.65 NA NA NA NA NA NA NA
## 3 3 NA NA 5.35 NA NA NA NA NA NA
## 4 4 NA NA NA 13.2 NA NA NA NA NA
## 5 5 NA NA NA NA 12.6 NA NA NA NA
## 6 6 NA NA NA NA NA 11.4 NA NA NA
## 7 7 NA NA NA NA NA NA 9.35 NA NA
## 8 8 NA NA NA NA NA NA NA 9.37 NA
## 9 9 NA NA NA NA NA NA NA NA 8.53
## 10 10 NA NA NA NA NA NA NA NA NA
## # … with 38 more rows, 39 more variables: Frisange_2013 <dbl>,
## # Frisange_2014 <dbl>, Frisange_2015 <dbl>, Hesperange_2013 <dbl>,
## # Hesperange_2014 <dbl>, Hesperange_2015 <dbl>, Leudelange_2013 <dbl>,
## # Leudelange_2014 <dbl>, Leudelange_2015 <dbl>, Mondercange_2013 <dbl>,
## # Mondercange_2014 <dbl>, Mondercange_2015 <dbl>, Pétange_2013 <dbl>,
## # Pétange_2014 <dbl>, Pétange_2015 <dbl>, Rumelange_2013 <dbl>,
## # Rumelange_2014 <dbl>, Rumelange_2015 <dbl>, Schifflange_2013 <dbl>, …
```
In the `filter()` statement, I only kept data from 2013 to 2017, “division”s ending with the string
“ange” (“division” can be a canton or a commune, for example “Canton Redange”, a canton, or
“Hesperange” a commune), and removed the cantons as I’m only interested in communes. If you don’t
understand this `filter()` statement, don’t fret; this is not important for what follows. I then
only kept the columns I’m interested in and pivoted the data to a wide format. Also, I needed to
add a unique identifier to the data frame. For this, I used `rowid_to_column()` function, from the
`{tibble}` package, which adds a new column to the data frame with an id, going from 1 to the
number of rows in the data frame. If I did not add this identifier, the statement would still work:
```
unemp_lux_data3 <- unemp_lux_data %>%
filter(year %in% seq(2013, 2017), str_detect(division, ".*ange$"), !str_detect(division, ".*Canton.*")) %>%
select(division, year, unemployment_rate_in_percent)
unemp_lux_data3 %>%
pivot_wider(names_from = c(division, year), values_from = unemployment_rate_in_percent)
```
```
## # A tibble: 1 × 48
## Bertrange_2013 Bertr…¹ Bertr…² Diffe…³ Diffe…⁴ Diffe…⁵ Dudel…⁶ Dudel…⁷ Dudel…⁸
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.69 5.65 5.35 13.2 12.6 11.4 9.35 9.37 8.53
## # … with 39 more variables: Frisange_2013 <dbl>, Frisange_2014 <dbl>,
## # Frisange_2015 <dbl>, Hesperange_2013 <dbl>, Hesperange_2014 <dbl>,
## # Hesperange_2015 <dbl>, Leudelange_2013 <dbl>, Leudelange_2014 <dbl>,
## # Leudelange_2015 <dbl>, Mondercange_2013 <dbl>, Mondercange_2014 <dbl>,
## # Mondercange_2015 <dbl>, Pétange_2013 <dbl>, Pétange_2014 <dbl>,
## # Pétange_2015 <dbl>, Rumelange_2013 <dbl>, Rumelange_2014 <dbl>,
## # Rumelange_2015 <dbl>, Schifflange_2013 <dbl>, Schifflange_2014 <dbl>, …
```
and actually looks even better, but only because there are no repeated values; there is only one
unemployment rate for each “commune times year”. I will come back to this later on, with another
example that might be clearer. These last two code blocks are intense; make sure you go through
each line step by step and understand what is going on.
You might have noticed that because there is no data for the years 2016 and 2017, these columns do
not appear in the data. But suppose that we need to have these columns, so that a colleague from
another department can fill in the values. This is possible by providing a data frame with the
detailed specifications of the result data frame. This optional data frame must have at least two
columns: `.name`, which contains the column names you want, and `.value`, which contains the name
of the column holding the values. Also, the function that uses this spec is `pivot_wider_spec()`,
and not `pivot_wider()`.
```
unemp_spec <- unemp_lux_data %>%
tidyr::expand(division,
year = c(year, 2016, 2017),
.value = "unemployment_rate_in_percent") %>%
unite(".name", division, year, remove = FALSE)
unemp_spec
```
Here, I use another function, `tidyr::expand()`, which returns every combination (Cartesian product)
of the supplied variables from a dataset.
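A minimal illustration on a toy `tibble`:
```
tidyr::expand(tibble(x = 1:2, y = c("a", "b")), x, y)
# 4 rows: every combination of x and y
```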
To make it work, we still need to create a column that uniquely identifies each row in the data:
```
unemp_lux_data4 <- unemp_lux_data %>%
select(division, year, unemployment_rate_in_percent) %>%
rowid_to_column() %>%
pivot_wider_spec(spec = unemp_spec)
unemp_lux_data4
```
```
## # A tibble: 1,770 × 2,007
## rowid Beauf…¹ Beauf…² Beauf…³ Beauf…⁴ Beauf…⁵ Beauf…⁶ Beauf…⁷ Beauf…⁸ Beauf…⁹
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 5.09 NA NA NA NA NA NA NA NA
## 2 2 NA 3.5 NA NA NA NA NA NA NA
## 3 3 NA NA 5.17 NA NA NA NA NA NA
## 4 4 NA NA NA 5.07 NA NA NA NA NA
## 5 5 NA NA NA NA 5.89 NA NA NA NA
## 6 6 NA NA NA NA NA 6.72 NA NA NA
## 7 7 NA NA NA NA NA NA 4.3 NA NA
## 8 8 NA NA NA NA NA NA NA 7.08 NA
## 9 9 NA NA NA NA NA NA NA NA 8.52
## 10 10 NA NA NA NA NA NA NA NA NA
## # … with 1,760 more rows, 1,997 more variables: Beaufort_2010 <dbl>,
## # Beaufort_2011 <dbl>, Beaufort_2012 <dbl>, Beaufort_2013 <dbl>,
## # Beaufort_2014 <dbl>, Beaufort_2015 <dbl>, Beaufort_2016 <dbl>,
## # Beaufort_2017 <dbl>, Bech_2001 <dbl>, Bech_2002 <dbl>, Bech_2003 <dbl>,
## # Bech_2004 <dbl>, Bech_2005 <dbl>, Bech_2006 <dbl>, Bech_2007 <dbl>,
## # Bech_2008 <dbl>, Bech_2009 <dbl>, Bech_2010 <dbl>, Bech_2011 <dbl>,
## # Bech_2012 <dbl>, Bech_2013 <dbl>, Bech_2014 <dbl>, Bech_2015 <dbl>, …
```
You can notice that now we have columns for 2016 and 2017 too. Let’s clean the data a little bit more:
```
unemp_lux_data4 %>%
select(-rowid) %>%
fill(matches(".*"), .direction = "down") %>%
slice(n())
```
```
## # A tibble: 1 × 2,006
## Beaufort_2001 Beaufo…¹ Beauf…² Beauf…³ Beauf…⁴ Beauf…⁵ Beauf…⁶ Beauf…⁷ Beauf…⁸
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.09 3.5 5.17 5.07 5.89 6.72 4.3 7.08 8.52
## # … with 1,997 more variables: Beaufort_2010 <dbl>, Beaufort_2011 <dbl>,
## # Beaufort_2012 <dbl>, Beaufort_2013 <dbl>, Beaufort_2014 <dbl>,
## # Beaufort_2015 <dbl>, Beaufort_2016 <dbl>, Beaufort_2017 <dbl>,
## # Bech_2001 <dbl>, Bech_2002 <dbl>, Bech_2003 <dbl>, Bech_2004 <dbl>,
## # Bech_2005 <dbl>, Bech_2006 <dbl>, Bech_2007 <dbl>, Bech_2008 <dbl>,
## # Bech_2009 <dbl>, Bech_2010 <dbl>, Bech_2011 <dbl>, Bech_2012 <dbl>,
## # Bech_2013 <dbl>, Bech_2014 <dbl>, Bech_2015 <dbl>, Bech_2016 <dbl>, …
```
We will learn about `fill()`, another `{tidyr}` function, a bit later in this chapter, but its basic
purpose is to fill rows with whatever value comes before or after the missing values. `slice(n())`
then only keeps the last row of the data frame, which is the row that contains all the values (except
for 2016 and 2017, which have missing values, as we wanted).
Here is another example of the importance of having an identifier column when using a spec:
```
data(mtcars)
mtcars_spec <- mtcars %>%
tidyr::expand(am, cyl, .value = "mpg") %>%
unite(".name", am, cyl, remove = FALSE)
mtcars_spec
```
We can now transform the data:
```
mtcars %>%
pivot_wider_spec(spec = mtcars_spec)
```
```
## # A tibble: 32 × 14
## disp hp drat wt qsec vs gear carb `0_4` `0_6` `0_8` `1_4` `1_6`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 160 110 3.9 2.62 16.5 0 4 4 NA NA NA NA 21
## 2 160 110 3.9 2.88 17.0 0 4 4 NA NA NA NA 21
## 3 108 93 3.85 2.32 18.6 1 4 1 NA NA NA 22.8 NA
## 4 258 110 3.08 3.22 19.4 1 3 1 NA 21.4 NA NA NA
## 5 360 175 3.15 3.44 17.0 0 3 2 NA NA 18.7 NA NA
## 6 225 105 2.76 3.46 20.2 1 3 1 NA 18.1 NA NA NA
## 7 360 245 3.21 3.57 15.8 0 3 4 NA NA 14.3 NA NA
## 8 147. 62 3.69 3.19 20 1 4 2 24.4 NA NA NA NA
## 9 141. 95 3.92 3.15 22.9 1 4 2 22.8 NA NA NA NA
## 10 168. 123 3.92 3.44 18.3 1 4 4 NA 19.2 NA NA NA
## # … with 22 more rows, and 1 more variable: `1_8` <dbl>
```
As you can see, there are several values of “mpg” for some combinations of “am” times “cyl”. If
we remove the other columns, each row will not be uniquely identified anymore. This results in a
warning message, and a tibble that contains list\-columns:
```
mtcars %>%
select(am, cyl, mpg) %>%
pivot_wider_spec(spec = mtcars_spec)
```
```
## Warning: Values from `mpg` are not uniquely identified; output will contain list-cols.
## * Use `values_fn = list` to suppress this warning.
## * Use `values_fn = {summary_fun}` to summarise duplicates.
## * Use the following dplyr code to identify duplicates.
## {data} %>%
## dplyr::group_by(am, cyl) %>%
## dplyr::summarise(n = dplyr::n(), .groups = "drop") %>%
## dplyr::filter(n > 1L)
```
```
## # A tibble: 1 × 6
## `0_4` `0_6` `0_8` `1_4` `1_6` `1_8`
## <list> <list> <list> <list> <list> <list>
## 1 <dbl [3]> <dbl [4]> <dbl [12]> <dbl [8]> <dbl [3]> <dbl [2]>
```
We are going to learn about list\-columns in the next section. List\-columns are very powerful, and
mastering them will be important. But generally speaking, when reshaping data, if you get list\-columns
back, it often means that something went wrong, so you have to be careful with this.
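Following the advice printed in the warning, you could instead summarise the duplicates before
widening, for instance with the mean (just an illustration of `pivot_wider()`’s `values_fn =`
argument):
```
mtcars %>%
  select(am, cyl, mpg) %>%
  pivot_wider(names_from = c(am, cyl),
              values_from = mpg,
              values_fn = mean) # one cell per am-cyl combination: the average mpg
```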
`pivot_longer()` is used when you need to go from a wide to a long dataset, meaning, a dataset
where there are some columns that should not be columns, but rather, the levels of a factor
variable. Let’s suppose that the “am” column is split into two columns, `0` for automatic and `1`
for manual transmissions, and that the values filling these columns are miles per gallon, “mpg”:
```
mtcars_wide_am <- mtcars %>%
pivot_wider(names_from = am, values_from = mpg)
mtcars_wide_am %>%
select(`0`, `1`, everything())
```
```
## # A tibble: 32 × 11
## `0` `1` cyl disp hp drat wt qsec vs gear carb
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 NA 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 NA 21 6 160 110 3.9 2.88 17.0 0 4 4
## 3 NA 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 4 21.4 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 5 18.7 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 6 18.1 NA 6 225 105 2.76 3.46 20.2 1 3 1
## 7 14.3 NA 8 360 245 3.21 3.57 15.8 0 3 4
## 8 24.4 NA 4 147. 62 3.69 3.19 20 1 4 2
## 9 22.8 NA 4 141. 95 3.92 3.15 22.9 1 4 2
## 10 19.2 NA 6 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
As you can see, the “0” and “1” columns should not be their own columns, unless there is a very
specific and good reason they should… but rather, they should be the levels of another column (in
our case, “am”).
We can go back to a long dataset like so:
```
mtcars_wide_am %>%
pivot_longer(cols = c(`1`, `0`), names_to = "am", values_to = "mpg") %>%
select(am, mpg, everything())
```
```
## # A tibble: 64 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 0 NA 6 160 110 3.9 2.62 16.5 0 4 4
## 3 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 4 0 NA 6 160 110 3.9 2.88 17.0 0 4 4
## 5 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 6 0 NA 4 108 93 3.85 2.32 18.6 1 4 1
## 7 1 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 8 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 9 1 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 10 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## # … with 54 more rows
```
In the cols argument, you need to list all the variables that need to be transformed. Only `1` and
`0` must be pivoted, so I list them. Just for illustration purposes, imagine that we would need
to pivot 50 columns. It would be faster to list the columns that do not need to be pivoted. This
can be achieved by listing the columns that must be excluded with `-` in front, and maybe using
`matches()` with a regular expression:
```
mtcars_wide_am %>%
pivot_longer(cols = -matches("^[[:alpha:]]"),
names_to = "am",
values_to = "mpg") %>%
select(am, mpg, everything())
```
```
## # A tibble: 64 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 0 NA 6 160 110 3.9 2.62 16.5 0 4 4
## 3 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 4 0 NA 6 160 110 3.9 2.88 17.0 0 4 4
## 5 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 6 0 NA 4 108 93 3.85 2.32 18.6 1 4 1
## 7 1 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 8 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 9 1 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 10 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## # … with 54 more rows
```
Every column that starts with a letter is ok, so there is no need to pivot them. I use the `matches()`
function with a regular expression so that I don’t have to type the names of all the columns. `select()`
is used to re\-order the columns, only for viewing purposes.
`names_to =` takes a string as argument, which will be the name of the column containing the
levels `0` and `1`, and `values_to =` also takes a string as argument, which will be the name of
the column containing the values. Finally, you can see that there are a lot of `NA`s in the
output. These can be removed easily with the `values_drop_na =` argument:
```
mtcars_wide_am %>%
pivot_longer(cols = c(`1`, `0`), names_to = "am", values_to = "mpg", values_drop_na = TRUE) %>%
select(am, mpg, everything())
```
```
## # A tibble: 32 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 3 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 4 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 5 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## 6 0 18.1 6 225 105 2.76 3.46 20.2 1 3 1
## 7 0 14.3 8 360 245 3.21 3.57 15.8 0 3 4
## 8 0 24.4 4 147. 62 3.69 3.19 20 1 4 2
## 9 0 22.8 4 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 19.2 6 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Now for a more advanced example, let’s suppose that we are dealing with the following wide dataset:
```
mtcars_wide <- mtcars %>%
pivot_wider_spec(spec = mtcars_spec)
mtcars_wide
```
```
## # A tibble: 32 × 14
## disp hp drat wt qsec vs gear carb `0_4` `0_6` `0_8` `1_4` `1_6`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 160 110 3.9 2.62 16.5 0 4 4 NA NA NA NA 21
## 2 160 110 3.9 2.88 17.0 0 4 4 NA NA NA NA 21
## 3 108 93 3.85 2.32 18.6 1 4 1 NA NA NA 22.8 NA
## 4 258 110 3.08 3.22 19.4 1 3 1 NA 21.4 NA NA NA
## 5 360 175 3.15 3.44 17.0 0 3 2 NA NA 18.7 NA NA
## 6 225 105 2.76 3.46 20.2 1 3 1 NA 18.1 NA NA NA
## 7 360 245 3.21 3.57 15.8 0 3 4 NA NA 14.3 NA NA
## 8 147. 62 3.69 3.19 20 1 4 2 24.4 NA NA NA NA
## 9 141. 95 3.92 3.15 22.9 1 4 2 22.8 NA NA NA NA
## 10 168. 123 3.92 3.44 18.3 1 4 4 NA 19.2 NA NA NA
## # … with 22 more rows, and 1 more variable: `1_8` <dbl>
```
The difficulty here is that we have columns with two levels of information. For instance, the
column “0\_4” contains the miles per gallon values for automatic cars (`0`) with `4` cylinders.
The first step is to pivot these columns:
```
mtcars_wide %>%
pivot_longer(cols = matches("0|1"),
names_to = "am_cyl",
values_to = "mpg",
values_drop_na = TRUE) %>%
select(am_cyl, mpg, everything())
```
```
## # A tibble: 32 × 10
## am_cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1_6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1_6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1_4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0_6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0_8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0_6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0_8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0_4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0_4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0_6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Now we only need to separate the “am\_cyl” column into two new columns, “am” and “cyl”:
```
mtcars_wide %>%
pivot_longer(cols = matches("0|1"),
names_to = "am_cyl",
values_to = "mpg",
values_drop_na = TRUE) %>%
separate(am_cyl, into = c("am", "cyl"), sep = "_") %>%
select(am, cyl, mpg, everything())
```
```
## # A tibble: 32 × 11
## am cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1 6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1 4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0 6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0 8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0 6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0 8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0 4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0 4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
It is also possible to construct a specification data frame, just like for `pivot_wider_spec()`.
This time, I’m using the `build_longer_spec()` function that makes it easy to build specifications:
```
mtcars_spec_long <- mtcars_wide %>%
build_longer_spec(matches("0|1"),
values_to = "mpg") %>%
separate(name, c("am", "cyl"), sep = "_")
mtcars_spec_long
```
```
## # A tibble: 6 × 4
## .name .value am cyl
## <chr> <chr> <chr> <chr>
## 1 0_4 mpg 0 4
## 2 0_6 mpg 0 6
## 3 0_8 mpg 0 8
## 4 1_4 mpg 1 4
## 5 1_6 mpg 1 6
## 6 1_8 mpg 1 8
```
This spec can now be passed to `pivot_longer_spec()`:
```
mtcars_wide %>%
pivot_longer_spec(spec = mtcars_spec_long,
values_drop_na = TRUE) %>%
select(am, cyl, mpg, everything())
```
```
## # A tibble: 32 × 11
## am cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1 6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1 4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0 6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0 8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0 6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0 8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0 4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0 4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Defining specifications gives a lot of flexibility, and in some complicated cases it is the way to go.
### 4\.4\.2 `fill()` and `full_seq()`
`fill()` is pretty useful to… fill in missing values. For instance, in `survey_data`, some “id”s
are missing:
```
survey_data
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
It seems pretty obvious that the first `NA` is supposed to be `1` and the second one is supposed
to be `4`. With `fill()`, this is pretty easy to achieve:
```
survey_data %>%
fill(.direction = "down", id)
```
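`fill()` can fill in other directions too; besides the default `"down"`, the `.direction` argument
also accepts `"up"`, `"downup"` and `"updown"`. A minimal sketch:
```
# fill upwards instead: each NA takes the value of the next non-missing id
survey_data %>%
  fill(.direction = "up", id)

# "downup": fill down first, then up; useful if the very first value is missing
survey_data %>%
  fill(.direction = "downup", id)
```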
`full_seq()` is related: it generates the complete sequence between two values:
```
full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1)
```
```
## [1] "2018-08-01" "2018-08-02" "2018-08-03"
```
We can add this as the date column to our survey data:
```
survey_data %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
```
## # A tibble: 12 × 4
## id variable value date
## <dbl> <chr> <dbl> <date>
## 1 1 var1 1 2018-08-01
## 2 1 var2 0.2 2018-08-02
## 3 NA var3 0.3 2018-08-03
## 4 2 var1 1.4 2018-08-01
## 5 2 var2 1.9 2018-08-02
## 6 2 var3 4.1 2018-08-03
## 7 3 var1 0.1 2018-08-01
## 8 3 var2 2.8 2018-08-02
## 9 3 var3 8.9 2018-08-03
## 10 4 var1 1.7 2018-08-01
## 11 NA var2 1.9 2018-08-02
## 12 4 var3 7.6 2018-08-03
```
I use the base `rep()` function to repeat the date 4 times and then, using `mutate()`, I add
it to the data frame.
Putting all these operations together:
```
survey_data %>%
fill(.direction = "down", id) %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
```
## # A tibble: 12 × 4
## id variable value date
## <dbl> <chr> <dbl> <date>
## 1 1 var1 1 2018-08-01
## 2 1 var2 0.2 2018-08-02
## 3 1 var3 0.3 2018-08-03
## 4 2 var1 1.4 2018-08-01
## 5 2 var2 1.9 2018-08-02
## 6 2 var3 4.1 2018-08-03
## 7 3 var1 0.1 2018-08-01
## 8 3 var2 2.8 2018-08-02
## 9 3 var3 8.9 2018-08-03
## 10 4 var1 1.7 2018-08-01
## 11 4 var2 1.9 2018-08-02
## 12 4 var3 7.6 2018-08-03
```
You should be careful when imputing missing values though. The method described above is called
*Last Observation Carried Forward* (LOCF), and sometimes it makes sense, like here, but sometimes it doesn’t, and
doing this will introduce bias into your analysis. Discussing how to handle missing values in your analysis
is outside the scope of this book, but there are many resources available. You may want to check
out the vignettes of the `{mice}` [package](https://amices.org/mice/articles/overview.html), which
list many resources to get you started.
### 4\.4\.3 Put order in your columns with `separate()`, `unite()`, and in your rows with `separate_rows()`
Sometimes, data can be in a format that makes working with it needlessly painful. For example, you
get this:
```
survey_data_not_tidy
```
```
## # A tibble: 12 × 3
## id variable_date value
## <dbl> <chr> <dbl>
## 1 1 var1/2018-08-01 1
## 2 1 var2/2018-08-02 0.2
## 3 1 var3/2018-08-03 0.3
## 4 2 var1/2018-08-01 1.4
## 5 2 var2/2018-08-02 1.9
## 6 2 var3/2018-08-03 4.1
## 7 3 var1/2018-08-01 0.1
## 8 3 var2/2018-08-02 2.8
## 9 3 var3/2018-08-03 8.9
## 10 4 var1/2018-08-01 1.7
## 11 4 var2/2018-08-02 1.9
## 12 4 var3/2018-08-03 7.6
```
Dealing with this is simple, thanks to `separate()`:
```
survey_data_not_tidy %>%
separate(variable_date, into = c("variable", "date"), sep = "/")
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
The `variable_date` column gets separated into two columns, `variable` and `date`. One also needs
to specify the separator, in this case “/”.
`unite()` is the reverse operation, which can be useful when you are confronted with this situation:
```
survey_data2
```
```
## # A tibble: 12 × 6
## id variable year month day value
## <dbl> <chr> <chr> <chr> <chr> <dbl>
## 1 1 var1 2018 08 01 1
## 2 1 var2 2018 08 02 0.2
## 3 1 var3 2018 08 03 0.3
## 4 2 var1 2018 08 01 1.4
## 5 2 var2 2018 08 02 1.9
## 6 2 var3 2018 08 03 4.1
## 7 3 var1 2018 08 01 0.1
## 8 3 var2 2018 08 02 2.8
## 9 3 var3 2018 08 03 8.9
## 10 4 var1 2018 08 01 1.7
## 11 4 var2 2018 08 02 1.9
## 12 4 var3 2018 08 03 7.6
```
In some situations, it is better to have the date as a single column:
```
survey_data2 %>%
unite(date, year, month, day, sep = "-")
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
Another awful situation is the following:
```
survey_data_from_hell
```
```
## id variable value
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1, var2, var3 1.4, 1.9, 4.1
## 5 3 var1, var2 0.1, 2.8
## 6 3 var3 8.9
## 7 4 var1 1.7
## 8 NA var2 1.9
## 9 4 var3 7.6
```
`separate_rows()` saves the day:
```
survey_data_from_hell %>%
separate_rows(variable, value)
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <chr>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
So to summarise… you can go from this:
```
survey_data_from_hell
```
```
## id variable value
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1, var2, var3 1.4, 1.9, 4.1
## 5 3 var1, var2 0.1, 2.8
## 6 3 var3 8.9
## 7 4 var1 1.7
## 8 NA var2 1.9
## 9 4 var3 7.6
```
to this:
```
survey_data_clean
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
quite easily (the `convert = TRUE` argument converts the `value` column back to numeric after the split):
```
survey_data_from_hell %>%
separate_rows(variable, value, convert = TRUE) %>%
fill(.direction = "down", id) %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
4\.5 Working on many columns with `if_any()`, `if_all()` and `across()`
-----------------------------------------------------------------------
### 4\.5\.1 Filtering rows where several columns verify a condition
Let’s go back to the `gasoline` data from the `{Ecdat}` package.
When using `filter()`, conditions have to be written out one column at a time. For example, you can
filter rows where a given column equals “France”. But suppose that we have a condition that we want
to apply to a lot of columns at once. For example, keep only the rows where at least one column of type
`numeric` satisfies the condition *value \> \-8*. The next line does
that:
```
gasoline %>%
filter(if_any(where(is.numeric), \(x)(`>`(x, -8))))
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
The above code uses the `if_any()` function, included in `{dplyr}`. It also uses
`where()`, which must wrap predicate functions like `is.numeric()` or `is.character()`.
You can think of `if_any()` as a function that helps you select the columns to which to apply the
condition. The anonymous function used here is just a terse way of writing `\(x) x > -8`.
You can read the code above like this:
*Start with the gasoline data, then keep the rows where at least one numeric column is greater
than \-8*
or similar. `if_any()`, `if_all()` and `across()` make operations like these very easy to achieve.
Sometimes, you’d want to filter using columns whose names end with a particular letter, for instance
`"p"`. This can again be achieved using another helper, `ends_with()`, instead of `where()`:
```
gasoline %>%
filter(if_any(ends_with("p"), \(x)(`>`(x, -8))))
```
```
## # A tibble: 340 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 330 more rows
```
We already know about `ends_with()` and `starts_with()`. So the above line means “for the columns
whose names end with a ‘p’, only keep the rows where, for at least one of the selected columns, the values are
strictly greater than `-8`”.
`if_all()` works exactly the same way, but think of the `if` in `if_all()` as having the conditions
separated by `and`, while those of `if_any()` are separated by `or`. So for example, the
code above, where `if_any()` is replaced by `if_all()`, results in a much smaller data frame:
```
gasoline %>%
filter(if_all(ends_with("p"), \(x)(`>`(x, -8))))
```
```
## # A tibble: 30 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 canada 1972 4.89 -5.44 -1.10 -7.99
## 2 canada 1973 4.90 -5.41 -1.13 -7.94
## 3 canada 1974 4.89 -5.42 -1.12 -7.90
## 4 canada 1975 4.89 -5.38 -1.19 -7.87
## 5 canada 1976 4.84 -5.36 -1.06 -7.81
## 6 canada 1977 4.81 -5.34 -1.07 -7.77
## 7 canada 1978 4.86 -5.31 -1.07 -7.79
## 8 germany 1978 3.88 -5.56 -0.628 -7.95
## 9 sweden 1975 3.97 -7.68 -2.77 -7.99
## 10 sweden 1976 3.98 -7.67 -2.82 -7.96
## # … with 20 more rows
```
because here, we only keep rows where ALL of the columns that end with “p” are simultaneously
greater than \-8\.
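To make the `or`/`and` distinction concrete: since the columns ending with “p” are `lincomep` and
`lcarpcap`, the two calls above are equivalent to writing the conditions out by hand:
```
# if_any(): keep rows where at least one of the two columns exceeds -8
gasoline %>%
  filter(lincomep > -8 | lcarpcap > -8)

# if_all(): keep rows where both columns exceed -8 simultaneously
gasoline %>%
  filter(lincomep > -8 & lcarpcap > -8)
```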
### 4\.5\.2 Selecting several columns at once
In a previous section we already played around a little bit with `select()` and some helpers,
`everything()`, `starts_with()` and `ends_with()`. But there are many ways that you can use
helper functions to select several columns easily:
```
gasoline %>%
select(where(is.numeric))
```
```
## # A tibble: 342 × 5
## year lgaspcar lincomep lrpmg lcarpcap
## <int> <dbl> <dbl> <dbl> <dbl>
## 1 1960 4.17 -6.47 -0.335 -9.77
## 2 1961 4.10 -6.43 -0.351 -9.61
## 3 1962 4.07 -6.41 -0.380 -9.46
## 4 1963 4.06 -6.37 -0.414 -9.34
## 5 1964 4.04 -6.32 -0.445 -9.24
## 6 1965 4.03 -6.29 -0.497 -9.12
## 7 1966 4.05 -6.25 -0.467 -9.02
## 8 1967 4.05 -6.23 -0.506 -8.93
## 9 1968 4.05 -6.21 -0.522 -8.85
## 10 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Selecting by column position is also possible:
```
gasoline %>%
select(c(1, 2, 5))
```
```
## # A tibble: 342 × 3
## country year lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
As is selecting columns starting or ending with a certain string of characters, as discussed previously:
```
gasoline %>%
select(starts_with("l"))
```
```
## # A tibble: 342 × 4
## lgaspcar lincomep lrpmg lcarpcap
## <dbl> <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -0.335 -9.77
## 2 4.10 -6.43 -0.351 -9.61
## 3 4.07 -6.41 -0.380 -9.46
## 4 4.06 -6.37 -0.414 -9.34
## 5 4.04 -6.32 -0.445 -9.24
## 6 4.03 -6.29 -0.497 -9.12
## 7 4.05 -6.25 -0.467 -9.02
## 8 4.05 -6.23 -0.506 -8.93
## 9 4.05 -6.21 -0.522 -8.85
## 10 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Another very neat trick is selecting columns that may or may not exist in your data frame. For this quick example
let’s use the `mtcars` dataset:
```
sort(colnames(mtcars))
```
```
## [1] "am" "carb" "cyl" "disp" "drat" "gear" "hp" "mpg" "qsec" "vs"
## [11] "wt"
```
Let’s create a vector with some column names:
```
cols_to_select <- c("mpg", "cyl", "am", "nonsense")
```
The following selects only the columns that exist
in the data frame and silently ignores the one that does not:
```
mtcars %>%
select(any_of(cols_to_select))
```
```
## mpg cyl am
## Mazda RX4 21.0 6 1
## Mazda RX4 Wag 21.0 6 1
## Datsun 710 22.8 4 1
## Hornet 4 Drive 21.4 6 0
## Hornet Sportabout 18.7 8 0
## Valiant 18.1 6 0
## Duster 360 14.3 8 0
## Merc 240D 24.4 4 0
## Merc 230 22.8 4 0
## Merc 280 19.2 6 0
## Merc 280C 17.8 6 0
## Merc 450SE 16.4 8 0
## Merc 450SL 17.3 8 0
## Merc 450SLC 15.2 8 0
## Cadillac Fleetwood 10.4 8 0
## Lincoln Continental 10.4 8 0
## Chrysler Imperial 14.7 8 0
## Fiat 128 32.4 4 1
## Honda Civic 30.4 4 1
## Toyota Corolla 33.9 4 1
## Toyota Corona 21.5 4 0
## Dodge Challenger 15.5 8 0
## AMC Javelin 15.2 8 0
## Camaro Z28 13.3 8 0
## Pontiac Firebird 19.2 8 0
## Fiat X1-9 27.3 4 1
## Porsche 914-2 26.0 4 1
## Lotus Europa 30.4 4 1
## Ford Pantera L 15.8 8 1
## Ferrari Dino 19.7 6 1
## Maserati Bora 15.0 8 1
## Volvo 142E 21.4 4 1
```
and finally, if you want it to fail, don’t use any helper:
```
mtcars %>%
select(cols_to_select)
```
```
Error: Can't subset columns that don't exist.
The column `nonsense` doesn't exist.
```
or use `all_of()`:
```
mtcars %>%
select(all_of(cols_to_select))
```
```
✖ Column `nonsense` doesn't exist.
```
Bulk\-renaming can be achieved using `rename_with()`:
```
gasoline %>%
  rename_with(toupper, where(is.numeric))
```
```
## # A tibble: 342 × 6
## country YEAR LGASPCAR LINCOMEP LRPMG LCARPCAP
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
you can also pass custom functions to `rename_with()`:
```
gasoline %>%
rename_with(\(x)(paste0("new_", x)))
```
```
## # A tibble: 342 × 6
## new_country new_year new_lgaspcar new_lincomep new_lrpmg new_lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
The reason I’m talking about renaming in a section about selecting is that you can
also rename with `select()`:
```
gasoline %>%
select(YEAR = year)
```
```
## # A tibble: 342 × 1
## YEAR
## <int>
## 1 1960
## 2 1961
## 3 1962
## 4 1963
## 5 1964
## 6 1965
## 7 1966
## 8 1967
## 9 1968
## 10 1969
## # … with 332 more rows
```
but of course here, you only keep that one column, and you can’t rename with a function.
### 4\.5\.3 Summarising with `across()`
`across()` is used for summarising data. It allows you to aggregate… *across* several columns. It
is especially useful with `group_by()`. To illustrate how `group_by()` works with `across()` I have
to first modify the `gasoline` data a little bit. As you can see below, the `year` column is of
type `integer`:
```
gasoline %>%
lapply(typeof)
```
```
## $country
## [1] "character"
##
## $year
## [1] "integer"
##
## $lgaspcar
## [1] "double"
##
## $lincomep
## [1] "double"
##
## $lrpmg
## [1] "double"
##
## $lcarpcap
## [1] "double"
```
(we’ll discuss `lapply()` in a later chapter, but just to give you a little taste: `lapply()` applies
a function to each element of a list or of a data frame; in this case, it applied the `typeof()`
function to each column of the `gasoline` data set, returning the type of each column)
Let’s change that to character:
```
gasoline <- gasoline %>%
mutate(year = as.character(year),
country = as.character(country))
```
This now allows me to group by column type, for instance:
```
gasoline %>%
group_by(across(where(is.character))) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
This is faster than having to write:
```
gasoline %>%
group_by(country, year) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
You may think that having to write the names of two variables is not a huge deal, which is true.
But imagine that you have dozens of character columns that you want to group by.
With `across()` and the helper functions, it doesn’t matter if the data frame has 2 columns
you need to group by or 2000\. All that matters is that you can find some commonalities between
all these columns that make it easy to select them. It can be their type, as we have seen
before, or their label:
```
gasoline %>%
group_by(across(contains("y"))) %>%
summarise(mean_licomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_licomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
but it’s also possible to `group_by()` column position:
```
gasoline %>%
group_by(across(c(1, 2))) %>%
summarise(mean_licomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_licomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
Using a sequence is also possible:
```
gasoline %>%
group_by(across(seq(1:2))) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
but be careful, selecting by position is dangerous. If the position of columns changes, your code
will fail. Selecting by type or label is much more robust, especially by label, since types can
change as well (for example a date column can easily be exported as a character column, etc).
### 4\.5\.4 `summarise()` across many columns
Summarising across many columns is incredibly useful, and in my opinion one of the best
arguments in favour of switching to a `{tidyverse}`\-only workflow:
```
gasoline %>%
group_by(country) %>%
summarise(across(starts_with("l"), mean))
```
```
## # A tibble: 18 × 5
## country lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 -6.12 -0.486 -8.85
## 2 belgium 3.92 -5.85 -0.326 -8.63
## 3 canada 4.86 -5.58 -1.05 -8.08
## 4 denmark 4.19 -5.76 -0.358 -8.58
## 5 france 3.82 -5.87 -0.253 -8.45
## 6 germany 3.89 -5.85 -0.517 -8.51
## 7 greece 4.88 -6.61 -0.0339 -10.8
## 8 ireland 4.23 -6.44 -0.348 -9.04
## 9 italy 3.73 -6.35 -0.152 -8.83
## 10 japan 4.70 -6.25 -0.287 -9.95
## 11 netherla 4.08 -5.92 -0.370 -8.82
## 12 norway 4.11 -5.75 -0.278 -8.77
## 13 spain 4.06 -5.63 0.739 -9.90
## 14 sweden 4.01 -7.82 -2.71 -8.25
## 15 switzerl 4.24 -5.93 -0.902 -8.54
## 16 turkey 5.77 -7.34 -0.422 -12.5
## 17 u.k. 3.98 -6.02 -0.459 -8.55
## 18 u.s.a. 4.82 -5.45 -1.21 -7.78
```
But where `summarise()` and `across()` really shine is when you want to apply several functions
to many columns at once:
```
gasoline %>%
group_by(country) %>%
summarise(across(starts_with("l"), tibble::lst(mean, sd, max, min), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² max_l…³ min_l…⁴ mean_…⁵ sd_li…⁶ max_l…⁷ min_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92 -6.12 0.235 -5.76 -6.47
## 2 belgium 3.92 0.103 4.16 3.82 -5.85 0.227 -5.53 -6.22
## 3 canada 4.86 0.0262 4.90 4.81 -5.58 0.193 -5.31 -5.89
## 4 denmark 4.19 0.158 4.50 4.00 -5.76 0.176 -5.48 -6.06
## 5 france 3.82 0.0499 3.91 3.75 -5.87 0.241 -5.53 -6.26
## 6 germany 3.89 0.0239 3.93 3.85 -5.85 0.193 -5.56 -6.16
## 7 greece 4.88 0.255 5.38 4.48 -6.61 0.331 -6.15 -7.16
## 8 ireland 4.23 0.0437 4.33 4.16 -6.44 0.162 -6.19 -6.72
## 9 italy 3.73 0.220 4.05 3.38 -6.35 0.217 -6.08 -6.73
## 10 japan 4.70 0.684 6.00 3.95 -6.25 0.425 -5.71 -6.99
## 11 netherla 4.08 0.286 4.65 3.71 -5.92 0.193 -5.66 -6.22
## 12 norway 4.11 0.123 4.44 3.96 -5.75 0.201 -5.42 -6.09
## 13 spain 4.06 0.317 4.75 3.62 -5.63 0.278 -5.29 -6.17
## 14 sweden 4.01 0.0364 4.07 3.91 -7.82 0.126 -7.67 -8.07
## 15 switzerl 4.24 0.102 4.44 4.05 -5.93 0.124 -5.75 -6.16
## 16 turkey 5.77 0.329 6.16 5.14 -7.34 0.331 -6.89 -7.84
## 17 u.k. 3.98 0.0479 4.10 3.91 -6.02 0.107 -5.84 -6.19
## 18 u.s.a. 4.82 0.0219 4.86 4.79 -5.45 0.148 -5.22 -5.70
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, max_lrpmg <dbl>,
## # min_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # max_lcarpcap <dbl>, min_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³max_lgaspcar, ⁴min_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷max_lincomep, ⁸min_lincomep
```
Here, I first started by grouping by `country`, then I applied the `mean()`, `sd()`, `max()` and
`min()` functions to every column starting with the character `"l"`. `tibble::lst()` allows you to
create a list just like with `list()`, but it names its arguments automatically. So the `mean()` function
gets the name `"mean"`, and so on. Finally, I use the `.names =` argument to create the template for
the new column names. `{fn}_{col}` creates new column names of the form *function name \_ column name*.
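To see the automatic naming at work, compare the names of a `list()` and a `tibble::lst()` built
from the same functions:
```
names(list(mean, sd, max)) # NULL: list() does not name its elements
names(tibble::lst(mean, sd, max)) # "mean" "sd" "max": lst() names them after the expressions
```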
As mentioned before, `across()` works with other helper functions:
```
gasoline %>%
group_by(country) %>%
summarise(across(contains("car"), tibble::lst(mean, sd, max, min), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 9
## country mean_lgasp…¹ sd_lg…² max_l…³ min_l…⁴ mean_…⁵ sd_lc…⁶ max_l…⁷ min_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92 -8.85 0.473 -8.21 -9.77
## 2 belgium 3.92 0.103 4.16 3.82 -8.63 0.417 -8.10 -9.41
## 3 canada 4.86 0.0262 4.90 4.81 -8.08 0.195 -7.77 -8.38
## 4 denmark 4.19 0.158 4.50 4.00 -8.58 0.349 -8.20 -9.33
## 5 france 3.82 0.0499 3.91 3.75 -8.45 0.344 -8.01 -9.15
## 6 germany 3.89 0.0239 3.93 3.85 -8.51 0.406 -7.95 -9.34
## 7 greece 4.88 0.255 5.38 4.48 -10.8 0.839 -9.57 -12.2
## 8 ireland 4.23 0.0437 4.33 4.16 -9.04 0.345 -8.55 -9.70
## 9 italy 3.73 0.220 4.05 3.38 -8.83 0.639 -8.11 -10.1
## 10 japan 4.70 0.684 6.00 3.95 -9.95 1.20 -8.59 -12.2
## 11 netherla 4.08 0.286 4.65 3.71 -8.82 0.617 -8.16 -10.0
## 12 norway 4.11 0.123 4.44 3.96 -8.77 0.438 -8.17 -9.68
## 13 spain 4.06 0.317 4.75 3.62 -9.90 0.960 -8.63 -11.6
## 14 sweden 4.01 0.0364 4.07 3.91 -8.25 0.242 -7.96 -8.74
## 15 switzerl 4.24 0.102 4.44 4.05 -8.54 0.378 -8.03 -9.26
## 16 turkey 5.77 0.329 6.16 5.14 -12.5 0.751 -11.2 -13.5
## 17 u.k. 3.98 0.0479 4.10 3.91 -8.55 0.281 -8.26 -9.12
## 18 u.s.a. 4.82 0.0219 4.86 4.79 -7.78 0.162 -7.54 -8.02
## # … with abbreviated variable names ¹mean_lgaspcar, ²sd_lgaspcar,
## # ³max_lgaspcar, ⁴min_lgaspcar, ⁵mean_lcarpcap, ⁶sd_lcarpcap, ⁷max_lcarpcap,
## # ⁸min_lcarpcap
```
This is very likely the quickest, most elegant way to summarise that many columns.
There’s also a way to *summarise where*:
```
gasoline %>%
group_by(country) %>%
summarise(across(where(is.numeric), tibble::lst(mean, sd, min, max), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² min_l…³ max_l…⁴ mean_…⁵ sd_li…⁶ min_l…⁷ max_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 3.92 4.20 -6.12 0.235 -6.47 -5.76
## 2 belgium 3.92 0.103 3.82 4.16 -5.85 0.227 -6.22 -5.53
## 3 canada 4.86 0.0262 4.81 4.90 -5.58 0.193 -5.89 -5.31
## 4 denmark 4.19 0.158 4.00 4.50 -5.76 0.176 -6.06 -5.48
## 5 france 3.82 0.0499 3.75 3.91 -5.87 0.241 -6.26 -5.53
## 6 germany 3.89 0.0239 3.85 3.93 -5.85 0.193 -6.16 -5.56
## 7 greece 4.88 0.255 4.48 5.38 -6.61 0.331 -7.16 -6.15
## 8 ireland 4.23 0.0437 4.16 4.33 -6.44 0.162 -6.72 -6.19
## 9 italy 3.73 0.220 3.38 4.05 -6.35 0.217 -6.73 -6.08
## 10 japan 4.70 0.684 3.95 6.00 -6.25 0.425 -6.99 -5.71
## 11 netherla 4.08 0.286 3.71 4.65 -5.92 0.193 -6.22 -5.66
## 12 norway 4.11 0.123 3.96 4.44 -5.75 0.201 -6.09 -5.42
## 13 spain 4.06 0.317 3.62 4.75 -5.63 0.278 -6.17 -5.29
## 14 sweden 4.01 0.0364 3.91 4.07 -7.82 0.126 -8.07 -7.67
## 15 switzerl 4.24 0.102 4.05 4.44 -5.93 0.124 -6.16 -5.75
## 16 turkey 5.77 0.329 5.14 6.16 -7.34 0.331 -7.84 -6.89
## 17 u.k. 3.98 0.0479 3.91 4.10 -6.02 0.107 -6.19 -5.84
## 18 u.s.a. 4.82 0.0219 4.79 4.86 -5.45 0.148 -5.70 -5.22
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, min_lrpmg <dbl>,
## # max_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # min_lcarpcap <dbl>, max_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³min_lgaspcar, ⁴max_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷min_lincomep, ⁸max_lincomep
```
This allows you to summarise every column that contains real numbers. The difference between
`is.double()` and `is.numeric()` is that `is.numeric()` returns `TRUE` for integers too, whereas
`is.double()` returns `TRUE` for doubles only (integers are real numbers too, but you know
what I mean).
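A quick check makes the difference concrete:
```
is.numeric(1L) # TRUE: integers count as numeric
is.double(1L) # FALSE: integers are not doubles
is.double(1.5) # TRUE
```
It is also possible to summarise every column at once: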
```
gasoline %>%
select(-year) %>%
group_by(country) %>%
summarise(across(everything(), tibble::lst(mean, sd, min, max), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² min_l…³ max_l…⁴ mean_…⁵ sd_li…⁶ min_l…⁷ max_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 3.92 4.20 -6.12 0.235 -6.47 -5.76
## 2 belgium 3.92 0.103 3.82 4.16 -5.85 0.227 -6.22 -5.53
## 3 canada 4.86 0.0262 4.81 4.90 -5.58 0.193 -5.89 -5.31
## 4 denmark 4.19 0.158 4.00 4.50 -5.76 0.176 -6.06 -5.48
## 5 france 3.82 0.0499 3.75 3.91 -5.87 0.241 -6.26 -5.53
## 6 germany 3.89 0.0239 3.85 3.93 -5.85 0.193 -6.16 -5.56
## 7 greece 4.88 0.255 4.48 5.38 -6.61 0.331 -7.16 -6.15
## 8 ireland 4.23 0.0437 4.16 4.33 -6.44 0.162 -6.72 -6.19
## 9 italy 3.73 0.220 3.38 4.05 -6.35 0.217 -6.73 -6.08
## 10 japan 4.70 0.684 3.95 6.00 -6.25 0.425 -6.99 -5.71
## 11 netherla 4.08 0.286 3.71 4.65 -5.92 0.193 -6.22 -5.66
## 12 norway 4.11 0.123 3.96 4.44 -5.75 0.201 -6.09 -5.42
## 13 spain 4.06 0.317 3.62 4.75 -5.63 0.278 -6.17 -5.29
## 14 sweden 4.01 0.0364 3.91 4.07 -7.82 0.126 -8.07 -7.67
## 15 switzerl 4.24 0.102 4.05 4.44 -5.93 0.124 -6.16 -5.75
## 16 turkey 5.77 0.329 5.14 6.16 -7.34 0.331 -7.84 -6.89
## 17 u.k. 3.98 0.0479 3.91 4.10 -6.02 0.107 -6.19 -5.84
## 18 u.s.a. 4.82 0.0219 4.79 4.86 -5.45 0.148 -5.70 -5.22
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, min_lrpmg <dbl>,
## # max_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # min_lcarpcap <dbl>, max_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³min_lgaspcar, ⁴max_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷min_lincomep, ⁸max_lincomep
```
I removed the `year` variable because it’s not a variable for which we want to have descriptive
statistics.
4\.6 Other useful `{tidyverse}` functions
-----------------------------------------
### 4\.6\.1 `if_else()`, `case_when()` and `recode()`
Some other very useful `{tidyverse}` functions are `if_else()` and `case_when`. These two
functions, combined with `mutate()` make it easy to create a new variable whose values must
respect certain conditions. For instance, we might want to have a dummy that equals `1` if a country
in the European Union (to simplify, say as of 2017\) and `0` if not. First let’s create a list of
countries that are in the EU:
```
eu_countries <- c("austria", "belgium", "bulgaria", "croatia", "republic of cyprus",
"czech republic", "denmark", "estonia", "finland", "france", "germany",
"greece", "hungary", "ireland", "italy", "latvia", "lithuania", "luxembourg",
"malta", "netherla", "poland", "portugal", "romania", "slovakia", "slovenia",
"spain", "sweden", "u.k.")
```
I’ve had to change “netherlands” to “netherla” because that’s how the country is called in the
`gasoline` data. Now let’s create a dummy variable that equals `1` for EU countries, and `0` for the others:
```
gasoline %>%
mutate(country = tolower(country)) %>%
mutate(in_eu = if_else(country %in% eu_countries, 1, 0))
```
```
## # A tibble: 342 × 7
## country year lgaspcar lincomep lrpmg lcarpcap in_eu
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 1
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 1
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 1
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 1
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 1
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 1
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 1
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 1
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 1
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 1
## # … with 332 more rows
```
Instead of `1` and `0`, we can of course use strings (I add `filter(year == 1960)` at the end to
have a better view of what happened):
```
gasoline %>%
mutate(country = tolower(country)) %>%
mutate(in_eu = if_else(country %in% eu_countries, "yes", "no")) %>%
filter(year == 1960)
```
```
## # A tibble: 18 × 7
## country year lgaspcar lincomep lrpmg lcarpcap in_eu
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 yes
## 2 belgium 1960 4.16 -6.22 -0.166 -9.41 yes
## 3 canada 1960 4.86 -5.89 -0.972 -8.38 no
## 4 denmark 1960 4.50 -6.06 -0.196 -9.33 yes
## 5 france 1960 3.91 -6.26 -0.0196 -9.15 yes
## 6 germany 1960 3.92 -6.16 -0.186 -9.34 yes
## 7 greece 1960 5.04 -7.16 -0.0835 -12.2 yes
## 8 ireland 1960 4.27 -6.72 -0.0765 -9.70 yes
## 9 italy 1960 4.05 -6.73 0.165 -10.1 yes
## 10 japan 1960 6.00 -6.99 -0.145 -12.2 no
## 11 netherla 1960 4.65 -6.22 -0.201 -10.0 yes
## 12 norway 1960 4.44 -6.09 -0.140 -9.68 no
## 13 spain 1960 4.75 -6.17 1.13 -11.6 yes
## 14 sweden 1960 4.06 -8.07 -2.52 -8.74 yes
## 15 switzerl 1960 4.40 -6.16 -0.823 -9.26 no
## 16 turkey 1960 6.13 -7.80 -0.253 -13.5 no
## 17 u.k. 1960 4.10 -6.19 -0.391 -9.12 yes
## 18 u.s.a. 1960 4.82 -5.70 -1.12 -8.02 no
```
I think that `if_else()` is fairly straightforward, especially if you know `ifelse()` already. You
might be wondering what the difference between these two is. `if_else()` is stricter than
`ifelse()` and does not do type conversion. Compare the next two lines:
```
ifelse(1 == 1, "0", 1)
```
```
## [1] "0"
```
```
if_else(1 == 1, "0", 1)
```
```
Error: `false` must be type string, not double
```
Type conversion, especially without a warning, is very dangerous. `if_else()`’s behaviour, which
consists in failing as soon as possible, avoids a lot of pain and suffering, especially when
programming non\-interactively.
`if_else()` also accepts an optional argument that allows you to specify what should be returned
in case of `NA`:
```
if_else(1 <= NA, 0, 1, 999)
```
```
## [1] 999
```
```
# Or
if_else(1 <= NA, 0, 1, NA_real_)
```
```
## [1] NA
```
`case_when()` can be seen as a generalization of `if_else()`. Whenever you would need multiple
nested `if_else()`s, that’s when you know you should use `case_when()` (I’m adding the filter at the end
for the same reason as before, to see the output better):
```
gasoline %>%
mutate(country = tolower(country)) %>%
mutate(region = case_when(
country %in% c("france", "italy", "turkey", "greece", "spain") ~ "mediterranean",
country %in% c("germany", "austria", "switzerl", "belgium", "netherla") ~ "central europe",
country %in% c("canada", "u.s.a.", "u.k.", "ireland") ~ "anglosphere",
country %in% c("denmark", "norway", "sweden") ~ "nordic",
country %in% c("japan") ~ "asia")) %>%
filter(year == 1960)
```
```
## # A tibble: 18 × 7
## country year lgaspcar lincomep lrpmg lcarpcap region
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 central europe
## 2 belgium 1960 4.16 -6.22 -0.166 -9.41 central europe
## 3 canada 1960 4.86 -5.89 -0.972 -8.38 anglosphere
## 4 denmark 1960 4.50 -6.06 -0.196 -9.33 nordic
## 5 france 1960 3.91 -6.26 -0.0196 -9.15 mediterranean
## 6 germany 1960 3.92 -6.16 -0.186 -9.34 central europe
## 7 greece 1960 5.04 -7.16 -0.0835 -12.2 mediterranean
## 8 ireland 1960 4.27 -6.72 -0.0765 -9.70 anglosphere
## 9 italy 1960 4.05 -6.73 0.165 -10.1 mediterranean
## 10 japan 1960 6.00 -6.99 -0.145 -12.2 asia
## 11 netherla 1960 4.65 -6.22 -0.201 -10.0 central europe
## 12 norway 1960 4.44 -6.09 -0.140 -9.68 nordic
## 13 spain 1960 4.75 -6.17 1.13 -11.6 mediterranean
## 14 sweden 1960 4.06 -8.07 -2.52 -8.74 nordic
## 15 switzerl 1960 4.40 -6.16 -0.823 -9.26 central europe
## 16 turkey 1960 6.13 -7.80 -0.253 -13.5 mediterranean
## 17 u.k. 1960 4.10 -6.19 -0.391 -9.12 anglosphere
## 18 u.s.a. 1960 4.82 -5.70 -1.12 -8.02 anglosphere
```
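One detail worth knowing: rows that match none of the conditions get `NA` (here this cannot
happen, since every country is listed). If you want a default value instead, add a catch\-all
condition `TRUE` as the last case; a minimal sketch:
```
gasoline %>%
  mutate(country = tolower(country)) %>%
  mutate(region = case_when(
    country %in% c("japan") ~ "asia",
    TRUE ~ "rest of the world")) %>% # TRUE matches everything not caught above
  filter(year == 1960)
```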
If all you want is to recode values, you can use `recode()`. For example, the Netherlands is
written as “NETHERLA” in the `gasoline` data, which is quite ugly. Same for Switzerland:
```
gasoline <- gasoline %>%
mutate(country = tolower(country)) %>%
mutate(country = recode(country, "netherla" = "netherlands", "switzerl" = "switzerland"))
```
I saved the data with these changes as they will become useful in the future. Let’s take a look at
the data:
```
gasoline %>%
filter(country %in% c("netherlands", "switzerland"), year == 1960)
```
```
## # A tibble: 2 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 netherlands 1960 4.65 -6.22 -0.201 -10.0
## 2 switzerland 1960 4.40 -6.16 -0.823 -9.26
```
### 4\.6\.2 `lead()` and `lag()`
`lead()` and `lag()` are especially useful in econometrics. When I was doing my masters, in 4 B.d.
(*Before dplyr*) lagging variables in panel data was quite tricky. Now, with `{dplyr}` it’s really
very easy:
```
gasoline %>%
group_by(country) %>%
mutate(lag_lgaspcar = lag(lgaspcar)) %>%
mutate(lead_lgaspcar = lead(lgaspcar)) %>%
filter(year %in% seq(1960, 1963))
```
```
## # A tibble: 72 × 8
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap lag_lgaspcar lead_lgaspcar
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 NA 4.10
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 4.17 4.07
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 4.10 4.06
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 4.07 4.04
## 5 belgium 1960 4.16 -6.22 -0.166 -9.41 NA 4.12
## 6 belgium 1961 4.12 -6.18 -0.172 -9.30 4.16 4.08
## 7 belgium 1962 4.08 -6.13 -0.222 -9.22 4.12 4.00
## 8 belgium 1963 4.00 -6.09 -0.250 -9.11 4.08 3.99
## 9 canada 1960 4.86 -5.89 -0.972 -8.38 NA 4.83
## 10 canada 1961 4.83 -5.88 -0.972 -8.35 4.86 4.85
## # … with 62 more rows
```
To lag every variable, remember that you can use `mutate_if()`:
```
gasoline %>%
group_by(country) %>%
mutate_if(is.double, lag) %>%
filter(year %in% seq(1960, 1963))
```
```
## `mutate_if()` ignored the following grouping variables:
## • Column `country`
```
```
## # A tibble: 72 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 belgium 1960 4.16 -6.22 -0.166 -9.41
## 6 belgium 1961 4.12 -6.18 -0.172 -9.30
## 7 belgium 1962 4.08 -6.13 -0.222 -9.22
## 8 belgium 1963 4.00 -6.09 -0.250 -9.11
## 9 canada 1960 4.86 -5.89 -0.972 -8.38
## 10 canada 1961 4.83 -5.88 -0.972 -8.35
## # … with 62 more rows
```
you can replace `lag()` with `lead()`, but just keep in mind that the columns get transformed in
place.
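Note that `mutate_if()` is superseded in recent versions of `{dplyr}`; the same operation can be
written with `across()` and `where()`. As a sketch, the `.names` argument also lets you keep the
original columns alongside the lagged ones:
```
gasoline %>%
  group_by(country) %>%
  mutate(across(where(is.double), lag, .names = "lag_{.col}")) %>% # adds lag_* columns
  filter(year %in% seq(1960, 1963))
```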
### 4\.6\.3 `ntile()`
The last helper function I will discuss is `ntile()`. There are some others, so do read `mutate()`’s
documentation with `help(mutate)`!
If you need quantiles, you need `ntile()`. Let’s see how it works:
```
gasoline %>%
mutate(quintile = ntile(lgaspcar, 5)) %>%
mutate(decile = ntile(lgaspcar, 10)) %>%
select(country, year, lgaspcar, quintile, decile)
```
```
## # A tibble: 342 × 5
## country year lgaspcar quintile decile
## <chr> <dbl> <dbl> <int> <int>
## 1 austria 1960 4.17 3 6
## 2 austria 1961 4.10 3 6
## 3 austria 1962 4.07 3 5
## 4 austria 1963 4.06 3 5
## 5 austria 1964 4.04 3 5
## 6 austria 1965 4.03 3 5
## 7 austria 1966 4.05 3 5
## 8 austria 1967 4.05 3 5
## 9 austria 1968 4.05 3 5
## 10 austria 1969 4.05 3 5
## # … with 332 more rows
```
`quintile` and `decile` do not hold the values but the quantile the value lies in. If you want to
have a column that contains the median for instance, you can use good ol’ `quantile()`:
```
gasoline %>%
group_by(country) %>%
mutate(median = quantile(lgaspcar, 0.5)) %>% # quantile(x, 0.5) is equivalent to median(x)
filter(year == 1960) %>%
select(country, year, median)
```
```
## # A tibble: 18 × 3
## # Groups: country [18]
## country year median
## <chr> <dbl> <dbl>
## 1 austria 1960 4.05
## 2 belgium 1960 3.88
## 3 canada 1960 4.86
## 4 denmark 1960 4.16
## 5 france 1960 3.81
## 6 germany 1960 3.89
## 7 greece 1960 4.89
## 8 ireland 1960 4.22
## 9 italy 1960 3.74
## 10 japan 1960 4.52
## 11 netherlands 1960 3.99
## 12 norway 1960 4.08
## 13 spain 1960 3.99
## 14 sweden 1960 4.00
## 15 switzerland 1960 4.26
## 16 turkey 1960 5.72
## 17 u.k. 1960 3.98
## 18 u.s.a. 1960 4.81
```
### 4\.6\.4 `arrange()`
`arrange()` re\-orders the whole `tibble` according to values of the supplied variable:
```
gasoline %>%
arrange(lgaspcar)
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 italy 1977 3.38 -6.10 0.164 -8.15
## 2 italy 1978 3.39 -6.08 0.0348 -8.11
## 3 italy 1976 3.43 -6.12 0.103 -8.17
## 4 italy 1974 3.50 -6.13 -0.223 -8.26
## 5 italy 1975 3.52 -6.17 -0.0327 -8.22
## 6 spain 1978 3.62 -5.29 0.621 -8.63
## 7 italy 1972 3.63 -6.21 -0.215 -8.38
## 8 italy 1971 3.65 -6.22 -0.148 -8.47
## 9 spain 1977 3.65 -5.30 0.526 -8.73
## 10 italy 1973 3.65 -6.16 -0.325 -8.32
## # … with 332 more rows
```
If you want to re\-order the `tibble` in descending order of the variable:
```
gasoline %>%
arrange(desc(lgaspcar))
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 turkey 1966 6.16 -7.51 -0.356 -13.0
## 2 turkey 1960 6.13 -7.80 -0.253 -13.5
## 3 turkey 1961 6.11 -7.79 -0.343 -13.4
## 4 turkey 1962 6.08 -7.84 -0.408 -13.2
## 5 turkey 1968 6.08 -7.42 -0.365 -12.8
## 6 turkey 1963 6.08 -7.63 -0.225 -13.3
## 7 turkey 1964 6.06 -7.63 -0.252 -13.2
## 8 turkey 1967 6.04 -7.46 -0.335 -12.8
## 9 japan 1960 6.00 -6.99 -0.145 -12.2
## 10 turkey 1965 5.82 -7.62 -0.293 -12.9
## # … with 332 more rows
```
`arrange()`’s documentation alerts the user that re\-ordering by group is only possible by explicitly
specifying an option:
```
gasoline %>%
filter(year %in% seq(1960, 1963)) %>%
group_by(country) %>%
arrange(desc(lgaspcar), .by_group = TRUE)
```
```
## # A tibble: 72 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 belgium 1960 4.16 -6.22 -0.166 -9.41
## 6 belgium 1961 4.12 -6.18 -0.172 -9.30
## 7 belgium 1962 4.08 -6.13 -0.222 -9.22
## 8 belgium 1963 4.00 -6.09 -0.250 -9.11
## 9 canada 1960 4.86 -5.89 -0.972 -8.38
## 10 canada 1962 4.85 -5.84 -0.979 -8.32
## # … with 62 more rows
```
This is especially useful for plotting. We’ll see this in Chapter 6\.
### 4\.6\.5 `tally()` and `count()`
`tally()` and `count()` count the number of observations in your data. I believe `count()` is the
more useful of the two, as it directly counts the number of observations within groups that you supply:
```
gasoline %>%
count(country)
```
```
## # A tibble: 18 × 2
## country n
## <chr> <int>
## 1 austria 19
## 2 belgium 19
## 3 canada 19
## 4 denmark 19
## 5 france 19
## 6 germany 19
## 7 greece 19
## 8 ireland 19
## 9 italy 19
## 10 japan 19
## 11 netherlands 19
## 12 norway 19
## 13 spain 19
## 14 sweden 19
## 15 switzerland 19
## 16 turkey 19
## 17 u.k. 19
## 18 u.s.a. 19
```
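`tally()`, on the other hand, counts the rows of the current grouping, so it needs an explicit
`group_by()` to produce the same table:
```
gasoline %>%
  group_by(country) %>%
  tally() # same result as count(country); without group_by() it returns the total row count
```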
There’s also `add_count()` which adds the column to the data:
```
gasoline %>%
add_count(country)
```
```
## # A tibble: 342 × 7
## country year lgaspcar lincomep lrpmg lcarpcap n
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
`add_count()` is a shortcut for the following code:
```
gasoline %>%
group_by(country) %>%
mutate(n = n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap n
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
where `n()` is a `{dplyr}` function that can only be used within `summarise()`, `mutate()` and
`filter()`.
4\.7 Special packages for special kinds of data: `{forcats}`, `{lubridate}`, and `{stringr}`
--------------------------------------------------------------------------------------------
### 4\.7\.1 🐱🐱🐱🐱
Factor variables are very useful but not very easy to manipulate. `forcats` contains functions
that make working on factor variables painless. In my opinion, the following four functions, `fct_recode()`, `fct_relevel()`, `fct_reorder()` and `fct_relabel()`, are the ones you must
know, so that’s what I’ll be showing.
Remember in chapter 3 when I very quickly explained what `factor` variables were? In this section,
we are going to work a little bit with this type of variable. `factor`s are very useful, and the
`forcats` package includes some handy functions to work with them. First, let’s load the `forcats` package:
```
library(forcats)
```
as an example, we are going to work with the `gss_cat` dataset that is included in `forcats`. Let’s
load the data:
```
data(gss_cat)
head(gss_cat)
```
```
## # A tibble: 6 × 9
## year marital age race rincome partyid relig denom tvhours
## <int> <fct> <int> <fct> <fct> <fct> <fct> <fct> <int>
## 1 2000 Never married 26 White $8000 to 9999 Ind,near r… Prot… Sout… 12
## 2 2000 Divorced 48 White $8000 to 9999 Not str re… Prot… Bapt… NA
## 3 2000 Widowed 67 White Not applicable Independent Prot… No d… 2
## 4 2000 Never married 39 White Not applicable Ind,near r… Orth… Not … 4
## 5 2000 Divorced 25 White Not applicable Not str de… None Not … 1
## 6 2000 Married 25 White $20000 - 24999 Strong dem… Prot… Sout… NA
```
as you can see, `marital`, `race`, `rincome` and `partyid` are all factor variables. Let’s take a closer
look at `marital`:
```
str(gss_cat$marital)
```
```
## Factor w/ 6 levels "No answer","Never married",..: 2 4 5 2 4 6 2 4 6 6 ...
```
and let’s see `rincome`:
```
str(gss_cat$rincome)
```
```
## Factor w/ 16 levels "No answer","Don't know",..: 8 8 16 16 16 5 4 9 4 4 ...
```
`factor` variables have different levels and the `forcats` package includes functions that allow
you to recode, collapse and do all sorts of things on these levels. For example, using
`forcats::fct_recode()` you can recode levels:
```
gss_cat <- gss_cat %>%
mutate(marital = fct_recode(marital,
refuse = "No answer",
never_married = "Never married",
divorced = "Separated",
divorced = "Divorced",
widowed = "Widowed",
married = "Married"))
gss_cat %>%
tabyl(marital)
```
```
## marital n percent
## refuse 17 0.0007913234
## never_married 5416 0.2521063166
## divorced 4126 0.1920588372
## widowed 1807 0.0841130196
## married 10117 0.4709305032
```
Using `fct_recode()`, I was able to recode the levels and collapse `Separated` and `Divorced` to
a single category called `divorced`. As you can see, `refuse` and `widowed` are less than 10%, so
maybe you’d want to lump these categories together:
```
gss_cat <- gss_cat %>%
mutate(marital = fct_lump(marital, prop = 0.10, other_level = "other"))
gss_cat %>%
tabyl(marital)
```
```
## marital n percent
## never_married 5416 0.25210632
## divorced 4126 0.19205884
## married 10117 0.47093050
## other 1824 0.08490434
```
`fct_reorder()` is especially useful for plotting. We will explore plotting in the next chapter,
but to show you why `fct_reorder()` is so useful, I will create a barplot, first without
using `fct_reorder()` to re\-order the factors, then with reordering. Do not worry if you don’t
understand all the code for now:
```
gss_cat %>%
tabyl(marital) %>%
ggplot() +
geom_col(aes(y = n, x = marital)) +
coord_flip()
```
It would be much better if the categories were ordered by frequency. This is easy to do with
`fct_reorder()`:
```
gss_cat %>%
tabyl(marital) %>%
mutate(marital = fct_reorder(marital, n, .desc = FALSE)) %>%
ggplot() +
geom_col(aes(y = n, x = marital)) +
coord_flip()
```
Much better! In Chapter 6, we are going to learn about `{ggplot2}`.
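I mentioned `fct_relevel()` above without showing it; it manually moves levels to the front of the
level order (useful, for instance, for setting a reference category in models). A minimal sketch:
```
gss_cat %>%
  mutate(marital = fct_relevel(marital, "married")) %>% # "married" becomes the first level
  pull(marital) %>%
  levels()
```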
The last family of functions I’d like to mention are the `fct_lump*()` functions. These make it possible
to lump several levels of a factor into a new *other* level:
```
gss_cat %>%
mutate(
# Description of the different functions taken from help(fct_lump)
denom_lowfreq = fct_lump_lowfreq(denom), # lumps together the least frequent levels, ensuring that "other" is still the smallest level.
denom_min = fct_lump_min(denom, min = 10), # lumps levels that appear fewer than min times.
denom_n = fct_lump_n(denom, n = 3), # lumps all levels except for the n most frequent (or least frequent if n < 0)
    denom_prop = fct_lump_prop(denom, prop = 0.10) # lumps levels that appear fewer than prop * n times.
)
```
```
## # A tibble: 21,483 × 13
## year marital age race rincome partyid relig denom tvhours denom…¹ denom…²
## <int> <fct> <int> <fct> <fct> <fct> <fct> <fct> <int> <fct> <fct>
## 1 2000 never_… 26 White $8000 … Ind,ne… Prot… Sout… 12 Southe… Southe…
## 2 2000 divorc… 48 White $8000 … Not st… Prot… Bapt… NA Baptis… Baptis…
## 3 2000 other 67 White Not ap… Indepe… Prot… No d… 2 No den… No den…
## 4 2000 never_… 39 White Not ap… Ind,ne… Orth… Not … 4 Not ap… Not ap…
## 5 2000 divorc… 25 White Not ap… Not st… None Not … 1 Not ap… Not ap…
## 6 2000 married 25 White $20000… Strong… Prot… Sout… NA Southe… Southe…
## 7 2000 never_… 36 White $25000… Not st… Chri… Not … 3 Not ap… Not ap…
## 8 2000 divorc… 44 White $7000 … Ind,ne… Prot… Luth… NA Luther… Luther…
## 9 2000 married 44 White $25000… Not st… Prot… Other 0 Other Other
## 10 2000 married 47 White $25000… Strong… Prot… Sout… 3 Southe… Southe…
## # … with 21,473 more rows, 2 more variables: denom_n <fct>, denom_prop <fct>,
## # and abbreviated variable names ¹denom_lowfreq, ²denom_min
```
There are many others, so I’d advise you to go through the package’s function [reference](https://forcats.tidyverse.org/reference/index.html).
### 4\.7\.2 Get your dates right with `{lubridate}`
`{lubridate}` is yet another tidyverse package, that makes dealing with dates or durations (and intervals) as
painless as possible. I do not use every function contained in the package daily, and as such will
only focus on some of the functions. However, if you have to deal with dates often,
you might want to explore the package thouroughly.
#### 4\.7\.2\.1 Defining dates, the tidy way
Let’s load a new dataset, called *independence*, from the GitHub repo of the book:
```
independence_path <- tempfile(fileext = ".rds")
download.file(url = "https://github.com/b-rodrigues/modern_R/blob/master/datasets/independence.rds?raw=true",
destfile = independence_path)
independence <- readRDS(independence_path)
```
This dataset was scraped from the following Wikipedia [page](https://en.wikipedia.org/wiki/Decolonisation_of_Africa#Timeline).
It shows when African countries gained independence and from which colonial powers. In Chapter 10, I
will show you how to scrape Wikipedia pages using R. For now, let’s take a look at the contents
of the dataset:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ indep…² first…³ indep…⁴
## <chr> <chr> <chr> <chr> <chr> <chr>
## 1 Liberia Liberia United… 26 Jul… Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal O… United… 31 May… Louis … South …
## 3 Egypt Sultanate of Egypt United… 28 Feb… Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 10 Feb… Haile … -
## 5 Libya British Military Administration… United… 24 Dec… Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1 Janu… Ismail… -
## 7 Tunisia French Protectorate of Tunisia France 20 Mar… Muhamm… -
## 8 Morocco French Protectorate in Morocco … France… 2 Marc… Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 6 Marc… Kwame … Gold C…
## 10 Guinea French West Africa France 2 Octo… Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
as you can see, the date of independence is in a format that might make it difficult to answer questions
such as *Which African countries gained independence before 1960?* for two reasons. First of all,
the date uses the name of the month instead of the number of the month, and second of all, the
independence date column is of type *character* and not “date”. So our first task is to correctly define the column
as being of type date, while making sure that R understands that *January* is supposed to be “01”, and so
on. There are several helpful functions included in `{lubridate}` to convert columns to dates. For instance,
if the column you want to convert is of the form “2012\-11\-21”, then you would use the function `ymd()`,
for “year\-month\-day”. If, however, the column is “2012\-21\-11”, then you would use `ydm()`. There are
a few of these helper functions, and they can handle a lot of different formats for dates. In our case,
having the name of the month instead of the number might seem quite problematic, but it turns out
that this is a case that `{lubridate}` handles painlessly:
```
library(lubridate)
```
```
##
## Attaching package: 'lubridate'
```
```
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
```
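Before converting the column, here is a quick illustration of how these helpers parse different
formats, including month names:
```
ymd("2012-11-21") # parses to the Date 2012-11-21
dmy("26 July 1847") # month names work too: parses to the Date 1847-07-26
```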
```
independence <- independence %>%
mutate(independence_date = dmy(independence_date))
```
```
## Warning: 5 failed to parse.
```
Some dates failed to parse, for instance for Morocco. This is because these countries have several
independence dates; this means that the string to convert looks like:
```
"2 March 1956
7 April 1956
10 April 1958
4 January 1969"
```
which obviously cannot be converted by `{lubridate}` without further manipulation. I ignore these cases for
simplicity’s sake.
#### 4\.7\.2\.2 Data manipulation with dates
Let’s take a look at the data now:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ independ…² first…³ indep…⁴
## <chr> <chr> <chr> <date> <chr> <chr>
## 1 Liberia Liberia United… 1847-07-26 Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal… United… 1910-05-31 Louis … South …
## 3 Egypt Sultanate of Egypt United… 1922-02-28 Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 1947-02-10 Haile … -
## 5 Libya British Military Administrat… United… 1951-12-24 Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1956-01-01 Ismail… -
## 7 Tunisia French Protectorate of Tunis… France 1956-03-20 Muhamm… -
## 8 Morocco French Protectorate in Moroc… France… NA Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 1957-03-06 Kwame … Gold C…
## 10 Guinea French West Africa France 1958-10-02 Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
As you can see, we now have a date column in the right format. We can now answer questions such as
*Which countries gained independence before the end of 1960?* quite easily, by using the functions `year()`,
`month()` and `day()`. Let’s see which countries gained independence in 1960 or earlier:
```
independence %>%
filter(year(independence_date) <= 1960) %>%
pull(country)
```
```
## [1] "Liberia" "South Africa"
## [3] "Egypt" "Eritrea"
## [5] "Libya" "Sudan"
## [7] "Tunisia" "Ghana"
## [9] "Guinea" "Cameroon"
## [11] "Togo" "Mali"
## [13] "Madagascar" "Democratic Republic of the Congo"
## [15] "Benin" "Niger"
## [17] "Burkina Faso" "Ivory Coast"
## [19] "Chad" "Central African Republic"
## [21] "Republic of the Congo" "Gabon"
## [23] "Mauritania"
```
You guessed it: `year()` extracts the year of the date column and converts it to a *numeric* so that we can work
with it. The same goes for `month()` and `day()`. Let’s see whether any countries gained their independence on
Christmas Eve:
```
independence %>%
filter(month(independence_date) == 12,
day(independence_date) == 24) %>%
pull(country)
```
```
## [1] "Libya"
```
Seems like Libya was the only one! You can also operate on dates. For instance, let’s compute the time elapsed between
two dates, using the `interval()` function:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since)
```
```
## # A tibble: 54 × 2
## country independent_since
## <chr> <Interval>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC
## 8 Morocco NA--NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC
## # … with 44 more rows
```
The `independent_since` column now contains an *interval* object that we can convert to years:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since) %>%
mutate(years_independent = as.numeric(independent_since, "years"))
```
```
## # A tibble: 54 × 3
## country independent_since years_independent
## <chr> <Interval> <dbl>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC 175.
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC 112.
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC 101.
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC 75.7
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC 70.8
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC 66.8
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC 66.6
## 8 Morocco NA--NA NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC 65.6
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC 64.1
## # … with 44 more rows
```
We can now see, for each colonial power, how long its most recently independent colony has been independent.
Because the data is not tidy (in some cases, an African country was colonized by two powers,
see Libya), I will only focus on 4 European colonial powers: Belgium, France, Portugal and the United Kingdom:
```
independence %>%
filter(colonial_power %in% c("Belgium", "France", "Portugal", "United Kingdom")) %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
mutate(years_independent = as.numeric(independent_since, "years")) %>%
group_by(colonial_power) %>%
summarise(last_colony_independent_for = min(years_independent, na.rm = TRUE))
```
```
## # A tibble: 4 × 2
## colonial_power last_colony_independent_for
## <chr> <dbl>
## 1 Belgium 60.3
## 2 France 45.3
## 3 Portugal 47.0
## 4 United Kingdom 46.3
```
#### 4\.7\.2\.3 Arithmetic with dates
Adding days to or subtracting days from dates is quite easy:
```
ymd("2018-12-31") + 16
```
```
## [1] "2019-01-16"
```
It is also possible to be more explicit and use `days()`:
```
ymd("2018-12-31") + days(16)
```
```
## [1] "2019-01-16"
```
To add years, you can use `years()`:
```
ymd("2018-12-31") + years(1)
```
```
## [1] "2019-12-31"
```
But you have to be careful with leap years:
```
ymd("2016-02-29") + years(1)
```
```
## [1] NA
```
Because 2017 is not a leap year, the above computation returns `NA`. The same goes for months with
a different number of days:
```
ymd("2018-12-31") + months(2)
```
```
## [1] NA
```
The way to solve these issues is to use the special `%m+%` infix operator:
```
ymd("2016-02-29") %m+% years(1)
```
```
## [1] "2017-02-28"
```
and for months:
```
ymd("2018-12-31") %m+% months(2)
```
```
## [1] "2019-02-28"
```
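The analogous `%m-%` operator handles subtraction with the same rollback behaviour; going back one
month from the end of March lands on the last day of February:
```
ymd("2016-03-31") %m-% months(1)
```
```
## [1] "2016-02-29"
```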
`{lubridate}` contains many more functions. If you often work with dates, durations or intervals,
`{lubridate}` is a package that you have to add to your toolbox.
### 4\.7\.3 Manipulate strings with `{stringr}`
`{stringr}` contains functions to manipulate strings. In Chapter 10, I will teach you about regular
expressions, but the functions contained in `{stringr}` allow you to already do a lot of work on
strings, without needing to be a regular expression expert.
I will discuss the most common string operations: detecting, locating, matching, searching and
replacing, and extracting/removing strings.
To introduce these operations, let us use an ALTO file of an issue of *The Winchester News* from
October 31, 1910, which you can find on this
[link](https://gist.githubusercontent.com/b-rodrigues/5139560e7d0f2ecebe5da1df3629e015/raw/e3031d894ffb97217ddbad1ade1b307c9937d2c8/gistfile1.txt) (to see
what the newspaper looked like,
[click here](https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/)). I re\-hosted
the file on a public gist for archiving purposes. While working on the book, the original site went
down several times…
ALTO is an XML schema for the description of text OCR and layout information of pages for digitized
material, such as newspapers (source: [ALTO Wikipedia page](https://en.wikipedia.org/wiki/ALTO_(XML))).
For more details, you can read my
[blogpost](https://www.brodrigues.co/blog/2019-01-13-newspapers_mets_alto/)
on the matter, but for our current purposes, it is enough to know that the file contains the text
of newspaper articles. The file looks like this:
```
<TextLine HEIGHT="138.0" WIDTH="2434.0" HPOS="4056.0" VPOS="5814.0">
<String STYLEREFS="ID7" HEIGHT="108.0" WIDTH="393.0" HPOS="4056.0" VPOS="5838.0" CONTENT="timore" WC="0.82539684">
<ALTERNATIVE>timole</ALTERNATIVE>
<ALTERNATIVE>tlnldre</ALTERNATIVE>
<ALTERNATIVE>timor</ALTERNATIVE>
<ALTERNATIVE>insole</ALTERNATIVE>
<ALTERNATIVE>landed</ALTERNATIVE>
</String>
<SP WIDTH="74.0" HPOS="4449.0" VPOS="5838.0"/>
<String STYLEREFS="ID7" HEIGHT="105.0" WIDTH="432.0" HPOS="4524.0" VPOS="5847.0" CONTENT="market" WC="0.95238096"/>
<SP WIDTH="116.0" HPOS="4956.0" VPOS="5847.0"/>
<String STYLEREFS="ID7" HEIGHT="69.0" WIDTH="138.0" HPOS="5073.0" VPOS="5883.0" CONTENT="as" WC="0.96825397"/>
<SP WIDTH="74.0" HPOS="5211.0" VPOS="5883.0"/>
<String STYLEREFS="ID7" HEIGHT="69.0" WIDTH="285.0" HPOS="5286.0" VPOS="5877.0" CONTENT="were" WC="1.0">
<ALTERNATIVE>verc</ALTERNATIVE>
<ALTERNATIVE>veer</ALTERNATIVE>
</String>
<SP WIDTH="68.0" HPOS="5571.0" VPOS="5877.0"/>
<String STYLEREFS="ID7" HEIGHT="111.0" WIDTH="147.0" HPOS="5640.0" VPOS="5838.0" CONTENT="all" WC="1.0"/>
<SP WIDTH="83.0" HPOS="5787.0" VPOS="5838.0"/>
<String STYLEREFS="ID7" HEIGHT="111.0" WIDTH="183.0" HPOS="5871.0" VPOS="5835.0" CONTENT="the" WC="0.95238096">
<ALTERNATIVE>tll</ALTERNATIVE>
<ALTERNATIVE>Cu</ALTERNATIVE>
<ALTERNATIVE>tall</ALTERNATIVE>
</String>
<SP WIDTH="75.0" HPOS="6054.0" VPOS="5835.0"/>
<String STYLEREFS="ID3" HEIGHT="132.0" WIDTH="351.0" HPOS="6129.0" VPOS="5814.0" CONTENT="cattle" WC="0.95238096"/>
</TextLine>
```
We are interested in the strings after `CONTENT=`. We are going to use functions from the `{stringr}`
package to get the strings after `CONTENT=`. In Chapter 10, we are going to explore this file
again, but using complex regular expressions to get all the content in one go.
#### 4\.7\.3\.1 Getting text data into RStudio
First of all, let us read in the file:
```
winchester <- read_lines("https://gist.githubusercontent.com/b-rodrigues/5139560e7d0f2ecebe5da1df3629e015/raw/e3031d894ffb97217ddbad1ade1b307c9937d2c8/gistfile1.txt")
```
Even though the file is an XML file, I still read it in using `read_lines()` and not `read_xml()`
from the `{xml2}` package. This is for the purposes of the current exercise, and also because I
always have trouble with XML files; I prefer to treat them as simple text files and use regular
expressions to get what I need.
Now that the ALTO file is read in and saved in the `winchester` variable, you might want to print
the whole thing in the console. Before that, take a look at the structure:
```
str(winchester)
```
```
## chr [1:43] "" ...
```
So the `winchester` variable is an atomic vector of characters with 43 elements. First, we need to
understand what these elements are. Let’s start with the first one:
```
winchester[1]
```
```
## [1] ""
```
Ok, so it seems like the first element is empty. What about the second one?
```
winchester[2]
```
```
## [1] "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"><base href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\"><style>body{margin-left:0;margin-right:0;margin-top:0}#bN015htcoyT__google-cache-hdr{background:#f5f5f5;font:13px arial,sans-serif;text-align:left;color:#202020;border:0;margin:0;border-bottom:1px solid #cecece;line-height:16px;padding:16px 28px 24px 28px}#bN015htcoyT__google-cache-hdr *{display:inline;font:inherit;text-align:inherit;color:inherit;line-height:inherit;background:none;border:0;margin:0;padding:0;letter-spacing:0}#bN015htcoyT__google-cache-hdr a{text-decoration:none;color:#1a0dab}#bN015htcoyT__google-cache-hdr a:hover{text-decoration:underline}#bN015htcoyT__google-cache-hdr a:visited{color:#609}#bN015htcoyT__google-cache-hdr div{display:block;margin-top:4px}#bN015htcoyT__google-cache-hdr b{font-weight:bold;display:inline-block;direction:ltr}</style><div id=\"bN015htcoyT__google-cache-hdr\"><div><span>This is Google's cache of <a href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\">https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml</a>.</span> <span>It is a snapshot of the page as it appeared on 21 Jan 2019 05:18:18 GMT.</span> <span>The <a href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\">current page</a> could have changed in the meantime.</span> <a href=\"http://support.google.com/websearch/bin/answer.py?hl=en&p=cached&answer=1687222\"><span>Learn more</span>.</a></div><div><span style=\"display:inline-block;margin-top:8px;margin-right:104px;white-space:nowrap\"><span style=\"margin-right:28px\"><span style=\"font-weight:bold\">Full version</span></span><span style=\"margin-right:28px\"><a href=\"http://webcache.googleusercontent.com/search?q=cache:2BVPV8QGj3oJ:https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml&hl=en&gl=lu&strip=1&vwsrc=0\"><span>Text-only version</span></a></span><span style=\"margin-right:28px\"><a href=\"http://webcache.googleusercontent.com/search?q=cache:2BVPV8QGj3oJ:https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml&hl=en&gl=lu&strip=0&vwsrc=1\"><span>View source</span></a></span></span></div><span style=\"display:inline-block;margin-top:8px;color:#717171\"><span>Tip: To quickly find your search term on this page, press <b>Ctrl+F</b> or <b>⌘-F</b> (Mac) and use the find bar.</span></span></div><div style=\"position:relative;\"><?xml version=\"1.0\" encoding=\"UTF-8\"?>"
```
This one is part of the header of the file, still not the content we are after. So where is the
content? The file is very large, so if you print it in the console, it will take quite some time,
and you will not really be able to make out anything. The best way is to detect the string
`CONTENT` and work from there.
#### 4\.7\.3\.2 Detecting, getting the position and locating strings
When confronted with an atomic vector of strings, you might want to know inside which elements you
can find certain strings. For example, to know which elements of `winchester` contain the string
`CONTENT`, use `str_detect()`:
```
winchester %>%
str_detect("CONTENT")
```
```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [25] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [37] FALSE FALSE FALSE FALSE FALSE FALSE TRUE
```
This returns a logical atomic vector of the same length as `winchester`: the result is `FALSE` if the
string `CONTENT` is nowhere to be found in an element, and `TRUE` otherwise. Here it is easy to
see that the last element contains the string `CONTENT`. But what if, instead of having 43 elements,
the vector had 24192 elements, hundreds of which contained the string `CONTENT`? It would be easier
to have the indices of the vector where one can find the word `CONTENT`. This is possible
with `str_which()`:
```
winchester %>%
str_which("CONTENT")
```
```
## [1] 43
```
Here, the result is 43, meaning that the 43rd element of `winchester` contains the string `CONTENT`
somewhere. If we need more precision, we can use `str_locate()` and `str_locate_all()`. To explain
how both these functions work, let’s create a very small example:
```
ancient_philosophers <- c("aristotle", "plato", "epictetus", "seneca the younger", "epicurus", "marcus aurelius")
```
Now suppose I am interested in philosophers whose name ends in `us`. Let us use `str_locate()` first:
```
ancient_philosophers %>%
str_locate("us")
```
```
## start end
## [1,] NA NA
## [2,] NA NA
## [3,] 8 9
## [4,] NA NA
## [5,] 7 8
## [6,] 5 6
```
You can interpret the result as follows: each row corresponds to one element of the vector, so the
3rd, 5th and 6th philosophers have `us` somewhere in their name. The result also has two columns,
`start` and `end`, which give the position of the match: the string `us` starts at position 8 of the
3rd element of the vector and ends at position 9\. The same goes for the other philosophers.
However, consider Marcus Aurelius. He has two names, both ending with `us`, yet `str_locate()` only
shows the position of the `us` in `Marcus`.
To get both `us` strings, you need to use `str_locate_all()`:
```
ancient_philosophers %>%
str_locate_all("us")
```
```
## [[1]]
## start end
##
## [[2]]
## start end
##
## [[3]]
## start end
## [1,] 8 9
##
## [[4]]
## start end
##
## [[5]]
## start end
## [1,] 7 8
##
## [[6]]
## start end
## [1,] 5 6
## [2,] 14 15
```
Now we get the positions of both `us` strings in Marcus Aurelius. Doing this on the `winchester` vector
would give us the positions of the `CONTENT` strings, but this is not really important right now. What
matters is that you know how `str_locate()` and `str_locate_all()` work.
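As an aside, we said we were interested in names *ending* in “us”; with a pinch of regex (much more
on this in Chapter 10), the `$` anchor makes that explicit:
```
ancient_philosophers %>%
str_detect("us$")
```
```
## [1] FALSE FALSE  TRUE FALSE  TRUE  TRUE
```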
So now that we know that what interests us sits in the 43rd element of `winchester`, let’s take a closer
look at it:
```
winchester[43]
```
As you can see, it’s a mess:
```
<TextLine HEIGHT=\"126.0\" WIDTH=\"1731.0\" HPOS=\"17160.0\" VPOS=\"21252.0\"><String HEIGHT=\"114.0\" WIDTH=\"354.0\" HPOS=\"17160.0\" VPOS=\"21264.0\" CONTENT=\"0tV\" WC=\"0.8095238\"/><SP WIDTH=\"131.0\" HPOS=\"17514.0\" VPOS=\"21264.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"111.0\" WIDTH=\"474.0\" HPOS=\"17646.0\" VPOS=\"21258.0\" CONTENT=\"BATES\" WC=\"1.0\"/><SP WIDTH=\"140.0\" HPOS=\"18120.0\" VPOS=\"21258.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"114.0\" WIDTH=\"630.0\" HPOS=\"18261.0\" VPOS=\"21252.0\" CONTENT=\"President\" WC=\"1.0\"><ALTERNATIVE>Prcideht</ALTERNATIVE><ALTERNATIVE>Pride</ALTERNATIVE></String></TextLine><TextLine HEIGHT=\"153.0\" WIDTH=\"1689.0\" HPOS=\"17145.0\" VPOS=\"21417.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"105.0\" WIDTH=\"258.0\" HPOS=\"17145.0\" VPOS=\"21439.0\" CONTENT=\"WM\" WC=\"0.82539684\"><TextLine HEIGHT=\"120.0\" WIDTH=\"2211.0\" HPOS=\"16788.0\" VPOS=\"21870.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"102.0\" HPOS=\"16788.0\" VPOS=\"21894.0\" CONTENT=\"It\" WC=\"1.0\"/><SP WIDTH=\"72.0\" HPOS=\"16890.0\" VPOS=\"21894.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"93.0\" HPOS=\"16962.0\" VPOS=\"21885.0\" CONTENT=\"is\" WC=\"1.0\"/><SP WIDTH=\"80.0\" HPOS=\"17055.0\" VPOS=\"21885.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"102.0\" WIDTH=\"417.0\" HPOS=\"17136.0\" VPOS=\"21879.0\" CONTENT=\"seldom\" WC=\"1.0\"/><SP WIDTH=\"80.0\" HPOS=\"17553.0\" VPOS=\"21879.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"267.0\" HPOS=\"17634.0\" VPOS=\"21873.0\" CONTENT=\"hard\" WC=\"1.0\"/><SP WIDTH=\"81.0\" HPOS=\"17901.0\" VPOS=\"21873.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"87.0\" WIDTH=\"111.0\" HPOS=\"17982.0\" VPOS=\"21879.0\" CONTENT=\"to\" WC=\"1.0\"/><SP WIDTH=\"81.0\" HPOS=\"18093.0\" VPOS=\"21879.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"219.0\" HPOS=\"18174.0\" VPOS=\"21870.0\" CONTENT=\"find\" WC=\"1.0\"/><SP WIDTH=\"77.0\" HPOS=\"18393.0\" VPOS=\"21870.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"69.0\" WIDTH=\"66.0\" HPOS=\"18471.0\" VPOS=\"21894.0\" CONTENT=\"a\" WC=\"1.0\"/><SP WIDTH=\"77.0\" HPOS=\"18537.0\" VPOS=\"21894.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"78.0\" WIDTH=\"384.0\" HPOS=\"18615.0\" VPOS=\"21888.0\" CONTENT=\"succes\" WC=\"0.82539684\"><ALTERNATIVE>success</ALTERNATIVE></String></TextLine><TextLine HEIGHT=\"126.0\" WIDTH=\"2316.0\" HPOS=\"16662.0\" VPOS=\"22008.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"75.0\" WIDTH=\"183.0\" HPOS=\"16662.0\" VPOS=\"22059.0\" CONTENT=\"sor\" WC=\"1.0\"><ALTERNATIVE>soar</ALTERNATIVE></String><SP WIDTH=\"72.0\" HPOS=\"16845.0\" VPOS=\"22059.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"90.0\" WIDTH=\"168.0\" HPOS=\"16917.0\" VPOS=\"22035.0\" CONTENT=\"for\" WC=\"1.0\"/><SP WIDTH=\"72.0\" HPOS=\"17085.0\" VPOS=\"22035.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"69.0\" WIDTH=\"267.0\" HPOS=\"17157.0\" VPOS=\"22050.0\" CONTENT=\"even\" WC=\"1.0\"><ALTERNATIVE>cen</ALTERNATIVE><ALTERNATIVE>cent</ALTERNATIVE></String><SP WIDTH=\"77.0\" HPOS=\"17434.0\" VPOS=\"22050.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"66.0\" WIDTH=\"63.0\" HPOS=\"17502.0\" VPOS=\"22044.0\"
```
The file was imported without any newlines. So we need to insert them ourselves, by splitting the
string in a clever way.
#### 4\.7\.3\.3 Splitting strings
There are two functions included in `{stringr}` to split strings, `str_split()` and `str_split_fixed()`.
Let’s go back to our ancient philosophers. Two of them, Seneca the Younger and Marcus Aurelius, have
something in common besides being Roman Stoic philosophers: their names are composed of several
words. If we want to split their names at the space character, we can use `str_split()` like this:
```
ancient_philosophers %>%
str_split(" ")
```
```
## [[1]]
## [1] "aristotle"
##
## [[2]]
## [1] "plato"
##
## [[3]]
## [1] "epictetus"
##
## [[4]]
## [1] "seneca" "the" "younger"
##
## [[5]]
## [1] "epicurus"
##
## [[6]]
## [1] "marcus" "aurelius"
```
`str_split()` also has a `simplify = TRUE` option:
```
ancient_philosophers %>%
str_split(" ", simplify = TRUE)
```
```
## [,1] [,2] [,3]
## [1,] "aristotle" "" ""
## [2,] "plato" "" ""
## [3,] "epictetus" "" ""
## [4,] "seneca" "the" "younger"
## [5,] "epicurus" "" ""
## [6,] "marcus" "aurelius" ""
```
This time, the returned object is a matrix.
What about `str_split_fixed()`? The difference is that here you can specify the number of pieces
to return. For example, you could consider the name “Aurelius” to be the middle name of Marcus Aurelius,
and “the younger” to be the middle name of Seneca the Younger. This means that you would want
to split the name only at the first space character, and not at all of them. This is easily achieved
with `str_split_fixed()`:
```
ancient_philosophers %>%
str_split_fixed(" ", 2)
```
```
## [,1] [,2]
## [1,] "aristotle" ""
## [2,] "plato" ""
## [3,] "epictetus" ""
## [4,] "seneca" "the younger"
## [5,] "epicurus" ""
## [6,] "marcus" "aurelius"
```
This gives the expected result.
So how does this help in our case? Well, if you look at the ALTO file excerpt at the beginning
of this section, you will notice that every line ends with the “\>” character. So let’s split at
that character!
```
winchester_text <- winchester[43] %>%
str_split(">")
```
Let’s take a closer look at `winchester_text`:
```
str(winchester_text)
```
```
## List of 1
## $ : chr [1:19706] "</processingStepSettings" "<processingSoftware" "<softwareCreator" "iArchives</softwareCreator" ...
```
So this is a list of length one, and the first, and only, element of that list is an atomic vector
with 19706 elements. Since this is a list of only one element, we can simplify it by saving the
atomic vector in a variable:
```
winchester_text <- winchester_text[[1]]
```
Let’s now look at some lines:
```
winchester_text[1232:1245]
```
```
## [1] "<SP WIDTH=\"66.0\" HPOS=\"5763.0\" VPOS=\"9696.0\"/"
## [2] "<String STYLEREFS=\"ID7\" HEIGHT=\"108.0\" WIDTH=\"612.0\" HPOS=\"5829.0\" VPOS=\"9693.0\" CONTENT=\"Louisville\" WC=\"1.0\""
## [3] "<ALTERNATIVE"
## [4] "Loniile</ALTERNATIVE"
## [5] "<ALTERNATIVE"
## [6] "Lenities</ALTERNATIVE"
## [7] "</String"
## [8] "</TextLine"
## [9] "<TextLine HEIGHT=\"150.0\" WIDTH=\"2520.0\" HPOS=\"4032.0\" VPOS=\"9849.0\""
## [10] "<String STYLEREFS=\"ID7\" HEIGHT=\"108.0\" WIDTH=\"510.0\" HPOS=\"4032.0\" VPOS=\"9861.0\" CONTENT=\"Tobacco\" WC=\"1.0\"/"
## [11] "<SP WIDTH=\"113.0\" HPOS=\"4542.0\" VPOS=\"9861.0\"/"
## [12] "<String STYLEREFS=\"ID7\" HEIGHT=\"105.0\" WIDTH=\"696.0\" HPOS=\"4656.0\" VPOS=\"9861.0\" CONTENT=\"Warehouse\" WC=\"1.0\""
## [13] "<ALTERNATIVE"
## [14] "WHrchons</ALTERNATIVE"
```
This now looks easier to handle. We can narrow it down to only the lines that contain the string
we are interested in, “CONTENT”. First, let’s get the indices:
```
content_winchester_index <- winchester_text %>%
str_which("CONTENT")
```
How many lines contain the string “CONTENT”?
```
length(content_winchester_index)
```
```
## [1] 4462
```
As you can see, this reduces the amount of data we have to work with. Let us save this in a new
variable:
```
content_winchester <- winchester_text[content_winchester_index]
```
#### 4\.7\.3\.4 Matching strings
Matching strings is useful, but only in combination with regular expressions. As stated at the
beginning of this section, we are going to learn about regular expressions in Chapter 10, but in
order to make this section useful, we are going to learn the easiest, but perhaps the most useful
regular expression: `.*`.
Let’s go back to our ancient philosophers, and use `str_match()` and see what happens. Let’s match
the “us” string:
```
ancient_philosophers %>%
str_match("us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "us"
## [4,] NA
## [5,] "us"
## [6,] "us"
```
Not very useful, but what about the regular expression `.*`? How could it help?
```
ancient_philosophers %>%
str_match(".*us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "epictetus"
## [4,] NA
## [5,] "epicurus"
## [6,] "marcus aurelius"
```
That’s already very interesting! So how does `.*` work? To understand, let’s first start by using
`.` alone:
```
ancient_philosophers %>%
str_match(".us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "tus"
## [4,] NA
## [5,] "rus"
## [6,] "cus"
```
The `.` matched whatever single character comes just before the “us”. What if we use two `.` instead?
```
ancient_philosophers %>%
str_match("..us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "etus"
## [4,] NA
## [5,] "urus"
## [6,] "rcus"
```
This time, we get the two characters that immediately precede “us”. Instead of continuing like this,
we can use `*`, which matches the preceding element (here `.`) zero or more times. So by combining
`.` and `*`, we can match any character repeatedly, until there is nothing more to match. Note that
there is also `+`, which works similarly to `*`, but requires at least one match.
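To see the difference, compare both quantifiers on a string where nothing precedes “us” (a quick sketch):
```
c("us", "marcus") %>%
str_match(".+us")
```
```
##      [,1]    
## [1,] NA      
## [2,] "marcus"
```
With `.*us`, the bare “us” would still match, because `*` is happy with zero characters.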
There is also a `str_match_all()`:
```
ancient_philosophers %>%
str_match_all(".*us")
```
```
## [[1]]
## [,1]
##
## [[2]]
## [,1]
##
## [[3]]
## [,1]
## [1,] "epictetus"
##
## [[4]]
## [,1]
##
## [[5]]
## [,1]
## [1,] "epicurus"
##
## [[6]]
## [,1]
## [1,] "marcus aurelius"
```
In this particular case it does not change the end result, but keep it in mind for cases like this one:
```
c("haha", "huhu") %>%
str_match("ha")
```
```
## [,1]
## [1,] "ha"
## [2,] NA
```
and:
```
c("haha", "huhu") %>%
str_match_all("ha")
```
```
## [[1]]
## [,1]
## [1,] "ha"
## [2,] "ha"
##
## [[2]]
## [,1]
```
What if we want to match names containing the letter “t”? Easy:
```
ancient_philosophers %>%
str_match(".*t.*")
```
```
## [,1]
## [1,] "aristotle"
## [2,] "plato"
## [3,] "epictetus"
## [4,] "seneca the younger"
## [5,] NA
## [6,] NA
```
So how does this help us with our historical newspaper? Let’s try to get the strings that come
after “CONTENT”:
```
winchester_content <- winchester_text %>%
str_match("CONTENT.*")
```
Let’s use our faithful `str()` function to take a look:
```
winchester_content %>%
str()
```
```
## chr [1:19706, 1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA ...
```
Hmm, there are a lot of `NA` values! This is because many of the lines from the file did not contain the
string “CONTENT”, so no match was possible. Let us remove all these `NA`s. Because the
result is a matrix, we cannot use the `filter()` function from `{dplyr}` directly, so we need to convert it
to a tibble first:
```
winchester_content <- winchester_content %>%
as_tibble() %>%
filter(!is.na(V1))
```
```
## Warning: The `x` argument of `as_tibble.matrix()` must have unique column names if `.name_repair` is omitted as of tibble 2.0.0.
## Using compatibility `.name_repair`.
```
Because matrix columns do not have names, when a matrix gets converted into a tibble, the first column
automatically gets called `V1` (as the warning explains, a compatibility `.name_repair` is used).
This is why I filter on this column. Let’s take a look at the data:
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## V1
## <chr>
## 1 "CONTENT=\"J\" WC=\"0.8095238\"/"
## 2 "CONTENT=\"a\" WC=\"0.8095238\"/"
## 3 "CONTENT=\"Ira\" WC=\"0.95238096\"/"
## 4 "CONTENT=\"mj\" WC=\"0.8095238\"/"
## 5 "CONTENT=\"iI\" WC=\"0.8095238\"/"
## 6 "CONTENT=\"tE1r\" WC=\"0.8095238\"/"
```
#### 4\.7\.3\.5 Searching and replacing strings
We are getting close to the final result. We still need to do some cleaning however. Since our data
is inside a nice tibble, we might as well stick with it. So let’s first rename the column and
change all the strings to lowercase:
```
winchester_content <- winchester_content %>%
mutate(content = tolower(V1)) %>%
select(-V1)
```
Let’s take a look at the result:
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "content=\"j\" wc=\"0.8095238\"/"
## 2 "content=\"a\" wc=\"0.8095238\"/"
## 3 "content=\"ira\" wc=\"0.95238096\"/"
## 4 "content=\"mj\" wc=\"0.8095238\"/"
## 5 "content=\"ii\" wc=\"0.8095238\"/"
## 6 "content=\"te1r\" wc=\"0.8095238\"/"
```
The second part of the string, “wc\=….” is not really interesting. Let’s search and replace this
with an empty string, using `str_replace()`:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "wc.*", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "content=\"j\" "
## 2 "content=\"a\" "
## 3 "content=\"ira\" "
## 4 "content=\"mj\" "
## 5 "content=\"ii\" "
## 6 "content=\"te1r\" "
```
We used the regular expression from before to match “wc” and every character that follows.
The same approach can be used to remove “content\=”:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "content=", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "\"j\" "
## 2 "\"a\" "
## 3 "\"ira\" "
## 4 "\"mj\" "
## 5 "\"ii\" "
## 6 "\"te1r\" "
```
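As an aside, replacing a pattern with the empty string is so common that `{stringr}` offers a
shortcut, `str_remove()`; the step above could equivalently be written like this (not run):
```
# str_remove(x, pattern) is shorthand for str_replace(x, pattern, "")
winchester_content %>%
mutate(content = str_remove(content, "content="))
```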
We are almost done, but some cleaning is still necessary.
#### 4\.7\.3\.6 Extracting or removing strings
Now, because I know the ALTO spec, I know how to find words that are split across two lines:
```
winchester_content %>%
filter(str_detect(content, "hyppart"))
```
```
## # A tibble: 64 × 1
## content
## <chr>
## 1 "\"aver\" subs_type=\"hyppart1\" subs_content=\"average\" "
## 2 "\"age\" subs_type=\"hyppart2\" subs_content=\"average\" "
## 3 "\"considera\" subs_type=\"hyppart1\" subs_content=\"consideration\" "
## 4 "\"tion\" subs_type=\"hyppart2\" subs_content=\"consideration\" "
## 5 "\"re\" subs_type=\"hyppart1\" subs_content=\"resigned\" "
## 6 "\"signed\" subs_type=\"hyppart2\" subs_content=\"resigned\" "
## 7 "\"install\" subs_type=\"hyppart1\" subs_content=\"installed\" "
## 8 "\"ed\" subs_type=\"hyppart2\" subs_content=\"installed\" "
## 9 "\"be\" subs_type=\"hyppart1\" subs_content=\"before\" "
## 10 "\"fore\" subs_type=\"hyppart2\" subs_content=\"before\" "
## # … with 54 more rows
```
For instance, the word “average” was split over two lines, the first part of the word, “aver” on the
first line, and the second part of the word, “age”, on the second line. We want to keep what comes
after “subs\_content”. Let’s extract these full words using `str_extract_all()`. However, because only
some words were split between two lines, we first need to detect where the string “hyppart1” is
located, and only then extract what comes after “subs\_content”. Thus, we combine
`str_detect()` to first detect the string, and then `str_extract_all()` to extract what comes after
“subs\_content”:
```
winchester_content <- winchester_content %>%
mutate(content = if_else(str_detect(content, "hyppart1"),
str_extract_all(content, "content=.*", simplify = TRUE),
content))
```
Let’s take a look at the result:
```
winchester_content %>%
filter(str_detect(content, "content"))
```
```
## # A tibble: 64 × 1
## content
## <chr>
## 1 "content=\"average\" "
## 2 "\"age\" subs_type=\"hyppart2\" subs_content=\"average\" "
## 3 "content=\"consideration\" "
## 4 "\"tion\" subs_type=\"hyppart2\" subs_content=\"consideration\" "
## 5 "content=\"resigned\" "
## 6 "\"signed\" subs_type=\"hyppart2\" subs_content=\"resigned\" "
## 7 "content=\"installed\" "
## 8 "\"ed\" subs_type=\"hyppart2\" subs_content=\"installed\" "
## 9 "content=\"before\" "
## 10 "\"fore\" subs_type=\"hyppart2\" subs_content=\"before\" "
## # … with 54 more rows
```
We still need to get rid of the string “content\=” and then of all the strings that contain “hyppart2”,
which are not needed now:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "content=", "")) %>%
mutate(content = if_else(str_detect(content, "hyppart2"), NA_character_, content))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "\"j\" "
## 2 "\"a\" "
## 3 "\"ira\" "
## 4 "\"mj\" "
## 5 "\"ii\" "
## 6 "\"te1r\" "
```
Almost done! We only need to remove the `"` characters:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace_all(content, "\"", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "j "
## 2 "a "
## 3 "ira "
## 4 "mj "
## 5 "ii "
## 6 "te1r "
```
Let’s remove the leading and trailing space characters with `str_trim()`:
```
winchester_content <- winchester_content %>%
mutate(content = str_trim(content))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 j
## 2 a
## 3 ira
## 4 mj
## 5 ii
## 6 te1r
```
To finish off this section, let’s remove stop words (words that do not add any meaning to a sentence,
such as “as”, “and”…) and words of three characters or fewer. You can find a dataset
with stopwords inside the `{stopwords}` package:
```
library(stopwords)
data(data_stopwords_stopwordsiso)
eng_stopwords <- tibble("content" = data_stopwords_stopwordsiso$en)
winchester_content <- winchester_content %>%
anti_join(eng_stopwords) %>%
filter(nchar(content) > 3)
```
```
## Joining, by = "content"
```
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 te1r
## 2 jilas
## 3 edition
## 4 winchester
## 5 news
## 6 injuries
```
That’s it for this section! You now know how to work with strings, but in Chapter 10 we are going
one step further by learning about regular expressions, which offer much more power.
### 4\.7\.4 Tidy data frames with `{tibble}`
We have already seen and used several functions from the `{tibble}` package. Let’s now go through
some more useful functions.
#### 4\.7\.4\.1 Creating tibbles
`tribble()` makes it easy to create a tibble row by row, manually:
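```
tribble(~combustion, ~doors,
        "oil",      3,
        "diesel",   5,
        "oil",      5,
        "electric", 5)
```
```
## # A tibble: 4 × 2
##   combustion doors
##   <chr>      <dbl>
## 1 oil            3
## 2 diesel         5
## 3 oil            5
## 4 electric       5
```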
It is also possible to create a tibble from a named list:
```
as_tibble(list("combustion" = c("oil", "diesel", "oil", "electric"),
"doors" = c(3, 5, 5, 5)))
```
```
## # A tibble: 4 × 2
## combustion doors
## <chr> <dbl>
## 1 oil 3
## 2 diesel 5
## 3 oil 5
## 4 electric 5
```
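`enframe()` converts a named list (or vector) into a two\-column tibble; elements of different lengths simply end up in a list\-column: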
```
enframe(list("combustion" = c(1,2), "doors" = c(1,2,4), "cylinders" = c(1,8,9,10)))
```
```
## # A tibble: 3 × 2
## name value
## <chr> <list>
## 1 combustion <dbl [2]>
## 2 doors <dbl [3]>
## 3 cylinders <dbl [4]>
```
4\.8 List\-columns
------------------
To learn about list\-columns, let’s first focus on a single character of the `starwars` dataset:
```
data(starwars)
```
```
starwars %>%
filter(name == "Luke Skywalker") %>%
glimpse()
```
```
## Rows: 1
## Columns: 14
## $ name <chr> "Luke Skywalker"
## $ height <int> 172
## $ mass <dbl> 77
## $ hair_color <chr> "blond"
## $ skin_color <chr> "fair"
## $ eye_color <chr> "blue"
## $ birth_year <dbl> 19
## $ sex <chr> "male"
## $ gender <chr> "masculine"
## $ homeworld <chr> "Tatooine"
## $ species <chr> "Human"
## $ films <list> <"The Empire Strikes Back", "Revenge of the Sith", "Return …
## $ vehicles <list> <"Snowspeeder", "Imperial Speeder Bike">
## $ starships <list> <"X-wing", "Imperial shuttle">
```
We see that the columns `films`, `vehicles` and `starships` (at the bottom) are all lists, and in
the case of `films`, it lists all the films where Luke Skywalker has appeared. What if you want to
take a closer look at films where Luke Skywalker appeared?
```
starwars %>%
filter(name == "Luke Skywalker") %>%
pull(films)
```
```
## [[1]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
## [5] "The Force Awakens"
```
`pull()` is a `{dplyr}` function that extracts (pulls) the column you’re interested in. It is quite
useful when you want to inspect a column. Instead of just looking at Luke Skywalker’s films,
let’s pull the complete `films` column:
```
starwars %>%
head() %>% # let's just look at the first six rows
pull(films)
```
```
## [[1]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
## [5] "The Force Awakens"
##
## [[2]]
## [1] "The Empire Strikes Back" "Attack of the Clones"
## [3] "The Phantom Menace" "Revenge of the Sith"
## [5] "Return of the Jedi" "A New Hope"
##
## [[3]]
## [1] "The Empire Strikes Back" "Attack of the Clones"
## [3] "The Phantom Menace" "Revenge of the Sith"
## [5] "Return of the Jedi" "A New Hope"
## [7] "The Force Awakens"
##
## [[4]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
##
## [[5]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
## [5] "The Force Awakens"
##
## [[6]]
## [1] "Attack of the Clones" "Revenge of the Sith" "A New Hope"
```
Let’s stop here a moment. As you see, the `films` column contains several items in it. How is it
possible that a single *cell* contains more than one film? This is because what is actually
contained in the cell is not several separate character values, but a single atomic vector
that happens to have several elements. It is still only one vector. *Zooming* in on the data
frame helps to understand:
In the picture above we see three columns. The first two, `name` and `sex`, are what you’re used
to seeing: just one element defining the character’s `name` and `sex` respectively. The last one
also contains only one element for each character; it just so happens to be a complete
vector of characters. Because what is inside the *cells* of a list\-column can be very different
things (as lists can contain anything), you have to think a bit in order to extract
insights from such columns. List\-columns may seem arcane, but they are extremely powerful
once you master them.
As an example, suppose we want to create a numerical variable which counts the number of movies
in which the characters have appeared. For this we need to compute the length of the list, or count
the number of elements this list has. Let’s try with `length()`, a base R function:
```
starwars %>%
filter(name == "Luke Skywalker") %>%
pull(films) %>%
length()
```
```
## [1] 1
```
This might be surprising, but remember that a list with only one element has a length of 1:
```
length(
list(words) # this creates a list with one element: the character vector of 980 words
)
```
```
## [1] 1
```
Even though `words` contains 980 words, if we put this very long vector inside the
first element of a list, `length(list(words))` computes the length of the list, which is 1\. Let’s
see what happens if we create a more complex list:
```
numbers <- seq(1, 5)
length(
list(words, # the first element is the character vector of 980 words
numbers) # numbers contains numbers 1 through 5
)
```
```
## [1] 2
```
`list(words, numbers)` is now a list of two elements, `words` and `numbers`. If we want to compute
the length of `words` and `numbers`, we need to learn about another powerful concept called
*higher\-order functions*. We are going to learn about this in greater detail in Chapter 8\. For now,
let’s use the fact that our list `films` is contained inside a data frame, and use a convenience
function included in `{dplyr}` to handle situations like this:
```
starwars <- starwars %>%
rowwise() %>% # <- Apply the next steps for each row individually
mutate(n_films = length(films))
```
`dplyr::rowwise()` is useful when working with list\-columns because the instructions that follow
are run once per row, and thus on the single list element contained in each cell. The picture below illustrates this:
Let’s take a look at the characters and the number of films they have appeared in:
```
starwars %>%
select(name, films, n_films)
```
```
## # A tibble: 87 × 3
## # Rowwise:
## name films n_films
## <chr> <list> <int>
## 1 Luke Skywalker <chr [5]> 5
## 2 C-3PO <chr [6]> 6
## 3 R2-D2 <chr [7]> 7
## 4 Darth Vader <chr [4]> 4
## 5 Leia Organa <chr [5]> 5
## 6 Owen Lars <chr [3]> 3
## 7 Beru Whitesun lars <chr [3]> 3
## 8 R5-D4 <chr [1]> 1
## 9 Biggs Darklighter <chr [1]> 1
## 10 Obi-Wan Kenobi <chr [6]> 6
## # … with 77 more rows
```
Now we can, for example, create a variable that groups characters according to whether they appeared in
exactly one movie, or more:
```
starwars <- starwars %>%
mutate(more_1 = case_when(n_films == 1 ~ "Exactly one movie",
n_films >= 2 ~ "More than 1 movie"))
```
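A quick way to check the resulting groups is base R’s `table()` (output not shown):
```
table(starwars$more_1)
```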
You can also create list\-columns with your own datasets, by using `tidyr::nest()`. Remember the
fake `survey_data` I created to illustrate `pivot_longer()` and `pivot_wider()`? Let’s go back to that dataset
again:
```
survey_data <- tribble(
~id, ~variable, ~value,
1, "var1", 1,
1, "var2", 0.2,
NA, "var3", 0.3,
2, "var1", 1.4,
2, "var2", 1.9,
2, "var3", 4.1,
3, "var1", 0.1,
3, "var2", 2.8,
3, "var3", 8.9,
4, "var1", 1.7,
NA, "var2", 1.9,
4, "var3", 7.6
)
print(survey_data)
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
```
nested_data <- survey_data %>%
group_by(id) %>%
nest()
glimpse(nested_data)
```
```
## Rows: 5
## Columns: 2
## Groups: id [5]
## $ id <dbl> 1, NA, 2, 3, 4
## $ data <list> [<tbl_df[2 x 2]>], [<tbl_df[2 x 2]>], [<tbl_df[3 x 2]>], [<tbl_df…
```
This creates a new tibble, with columns `id` and `data`. `data` is a list\-column that contains
tibbles; each tibble is the `variable` and `value` for each individual:
```
nested_data %>%
filter(id == "1") %>%
pull(data)
```
```
## [[1]]
## # A tibble: 2 × 2
## variable value
## <chr> <dbl>
## 1 var1 1
## 2 var2 0.2
```
As you can see, for individual 1, the column data contains a 2x2 tibble with columns `variable` and
`value`. Because `group_by()` followed by `nest()` is so useful, there is a wrapper around these two functions
called `group_nest()`:
```
survey_data %>%
group_nest(id)
```
```
## # A tibble: 5 × 2
## id data
## <dbl> <list<tibble[,2]>>
## 1 1 [2 × 2]
## 2 2 [3 × 2]
## 3 3 [3 × 2]
## 4 4 [2 × 2]
## 5 NA [2 × 2]
```
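The inverse operation exists as well: `tidyr::unnest()` unpacks a list\-column of tibbles and
recovers the long format we started from (a quick sketch):
```
nested_data %>%
unnest(data)
```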
You might be wondering why this is useful, because this seems to introduce an unnecessary
layer of complexity. The usefulness of list\-columns will become apparent in the next chapters,
where we are going to learn how to repeat actions over, say, individuals. So if you’ve reached
the end of this section and still didn’t really grok list\-columns, go take some fresh air and
come back to this section again later on.
4\.9 Going beyond descriptive statistics and data manipulation
--------------------------------------------------------------
The `{tidyverse}` collection of packages can do much more than data manipulation and
descriptive statistics. You can use the principles we have covered and the functions you now know
to do much more. For instance, you can use a few `{tidyverse}` functions to run Monte Carlo simulations,
for example to estimate \\(\\pi\\).
Draw the unit circle inside the unit square; the ratio of the area of the circle to the area of the
square is \\(\\pi/4\\). If you then shoot \\(N\\) arrows at the square and \\(M\\) of them fall inside
the circle, you have the approximate relationship \\(M \\approx N\*\\pi/4\\). You can thus estimate
\\(\\pi\\) like so: \\(\\pi \\approx 4\*M/N\\).
The more arrows \\(N\\) you throw at the square, the better the approximation of \\(\\pi\\) you’ll have. Let’s
try to do this with a tidy Monte Carlo simulation. First, let’s randomly pick some points inside
the unit square:
```
library(tidyverse)
n <- 5000
set.seed(2019)
points <- tibble("x" = runif(n), "y" = runif(n))
```
Now, to know if a point is inside the unit circle, we need to check whether \\(x^2 \+ y^2 \< 1\\). Let’s
add a new column to the `points` tibble, called `inside` equal to 1 if the point is inside the
unit circle and 0 if not:
```
points <- points %>%
mutate(inside = map2_dbl(.x = x, .y = y, ~ifelse(.x**2 + .y**2 < 1, 1, 0))) %>%
rowid_to_column("N")
```
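By the way, `map2_dbl()` is used here for illustration; since `**` and `<` are vectorized in R, a
plain vectorized expression would produce the same column. A quick sketch:
```
# same result without map2_dbl(), relying on vectorized operators
points %>%
mutate(inside = as.numeric(x**2 + y**2 < 1))
```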
Let’s take a look at `points`:
```
points
```
```
## # A tibble: 5,000 × 4
## N x y inside
## <int> <dbl> <dbl> <dbl>
## 1 1 0.770 0.984 0
## 2 2 0.713 0.0107 1
## 3 3 0.303 0.133 1
## 4 4 0.618 0.0378 1
## 5 5 0.0505 0.677 1
## 6 6 0.0432 0.0846 1
## 7 7 0.820 0.727 0
## 8 8 0.00961 0.0758 1
## 9 9 0.102 0.373 1
## 10 10 0.609 0.676 1
## # … with 4,990 more rows
```
Now, I can compute the estimate of \\(\\pi\\) at each row, by taking the cumulative sum of the 1’s in
the `inside` column and dividing that by the current value of the `N` column:
```
points <- points %>%
mutate(estimate = 4*cumsum(inside)/N)
```
`cumsum(inside)` is the `M` from the formula. Now, we can finish by plotting the result:
```
ggplot(points) +
geom_line(aes(y = estimate, x = N)) +
geom_hline(yintercept = pi)
```
As the number of tries grows, the estimate of \\(\\pi\\) gets better.
In the next chapter, we are going to learn all about `{ggplot2}`, the package I used in the lines
above to create this plot.
Using a data frame as a structure to hold our simulated points and the results makes it very easy
to avoid loops, and thus write code that is more concise and easier to follow.
If you studied a quantitative field at university, you might have done a similar exercise at the
time, very likely by defining a matrix to hold your points and an empty vector to hold whether a
particular point was inside the unit circle. Then you wrote a loop to compute whether
a point was inside the unit circle, saved this result in the previously defined empty vector, and then
computed the estimate of \\(\\pi\\). Again, I take this opportunity to stress that there is nothing
wrong with this approach per se, but R is better suited to a workflow where lists or data frames
are the central objects and where the analyst operates on them with functional programming techniques.
4\.10 Exercises
---------------
### Exercise 1
* Combine `mutate()` with `across()` to exponentiate every column of type `double` of the `gasoline` dataset.
To obtain the `gasoline` dataset, run the following lines:
```
data(Gasoline, package = "plm")
gasoline <- as_tibble(Gasoline)
gasoline <- gasoline %>%
mutate(country = tolower(country))
```
* Exponentiate columns starting with the character `"l"` of the `gasoline` dataset.
* Convert all columns’ classes into the character class.
### Exercise 2
Load the `LaborSupply` dataset from the `{Ecdat}` package and answer the following questions:
* Compute the average annual hours worked by year (plus standard deviation)
* What age group worked the most hours in the year 1982?
* Create a variable, `n_years` that equals the number of years an individual stays in the panel. Is the panel balanced?
* Which are the individuals that do not have any kids during the whole period? Create a variable, `no_kids`, that flags these individuals (1 \= no kids, 0 \= kids)
* Using the `no_kids` variable from before compute the average wage, standard deviation and number of observations in each group for the year 1980 (no kids group vs kids group).
* Create the lagged logarithm of hours worked and wages. Remember that this is a panel.
### Exercise 3
* What does the following code do? Copy and paste it in an R interpreter to find out!
```
LaborSupply %>%
group_by(id) %>%
mutate(across(starts_with("l"), tibble::lst(lag, lead)))
```
* Using `summarise()` and `across()`, compute the mean, standard deviation and number of individuals of `lnhr` and `lnwg` for each individual.
4\.1 A data exploration exercise using *base* R
-----------------------------------------------
Let’s first load the `starwars` data set, included in the `{dplyr}` package:
```
library(dplyr)
data(starwars)
```
Let’s first take a look at the data:
```
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
This data contains information on Star Wars characters. The first task is to find the average
height of the characters:
```
mean(starwars$height)
```
```
## [1] NA
```
As discussed in Chapter 2, `$` allows you to access columns of a `data.frame` object.
Because there are `NA` values in the data, the result is also `NA`. To get the result, you need to
add an option to `mean()`:
```
mean(starwars$height, na.rm = TRUE)
```
```
## [1] 174.358
```
Let’s also take a look at the standard deviation:
```
sd(starwars$height, na.rm = TRUE)
```
```
## [1] 34.77043
```
It might be more informative to compute these two statistics by sex, so for this, we are going
to use `aggregate()`:
```
aggregate(starwars$height,
by = list(sex = starwars$sex),
mean)
```
```
## sex x
## 1 female NA
## 2 hermaphroditic 175
## 3 male NA
## 4 none NA
```
Oh, shoot! Most groups have missing values in them, so we get `NA` back. We need to use `na.rm = TRUE`
just like before. Thankfully, it is possible to pass this option to `mean()` inside `aggregate()` as well:
```
aggregate(starwars$height,
by = list(sex = starwars$sex),
mean, na.rm = TRUE)
```
```
## sex x
## 1 female 169.2667
## 2 hermaphroditic 175.0000
## 3 male 179.1053
## 4 none 131.2000
```
Later in the book, we are also going to see how to define our own functions (with the default options that
are useful to us), and this will also help in this sort of situation.
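For instance, just as a sketch of what is coming, a tiny wrapper with the option baked in would
already make the `aggregate()` call cleaner:
```
# a helper with na.rm = TRUE baked in
mean_narm <- function(x) mean(x, na.rm = TRUE)
aggregate(starwars$height,
by = list(sex = starwars$sex),
mean_narm)
```
```
##              sex        x
## 1         female 169.2667
## 2 hermaphroditic 175.0000
## 3           male 179.1053
## 4           none 131.2000
```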
Even though we can use `na.rm = TRUE`, let’s also use `subset()` to filter out the `NA` values beforehand:
```
starwars_no_nas <- subset(starwars,
!is.na(height))
aggregate(starwars_no_nas$height,
by = list(sex = starwars_no_nas$sex),
mean)
```
```
## sex x
## 1 female 169.2667
## 2 hermaphroditic 175.0000
## 3 male 179.1053
## 4 none 131.2000
```
(`aggregate()` also has a `subset =` option, but I prefer to explicitly subset the data set with `subset()`).
Even if you are not familiar with `aggregate()`, I believe the above lines are quite
self\-explanatory. You need to provide `aggregate()` with 3 things: the variable you want to
summarize (or only the data frame, if you want to summarize all variables), a list of grouping
variables, and the function that will be applied to each subgroup. And by the way, to test for
`NA`, one uses the function `is.na()`, not something like `species == "NA"` or anything like that.
`!is.na()` does the opposite (`!` negates logical values, so `!TRUE` becomes `FALSE` and vice\-versa). To make this concrete:
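```
is.na(c(1, NA, 3))
```
```
## [1] FALSE  TRUE FALSE
```
```
!is.na(c(1, NA, 3))
```
```
## [1]  TRUE FALSE  TRUE
```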
You can easily add another grouping variable:
```
aggregate(starwars_no_nas$height,
by = list(Sex = starwars_no_nas$sex,
`Hair color` = starwars_no_nas$hair_color),
mean)
```
```
## Sex Hair color x
## 1 female auburn 150.0000
## 2 male auburn, grey 180.0000
## 3 male auburn, white 182.0000
## 4 female black 166.3333
## 5 male black 176.2500
## 6 male blond 176.6667
## 7 female blonde 168.0000
## 8 female brown 160.4000
## 9 male brown 182.6667
## 10 male brown, grey 178.0000
## 11 male grey 170.0000
## 12 female none 188.2500
## 13 male none 182.2414
## 14 none none 148.0000
## 15 female white 167.0000
## 16 male white 152.3333
```
or use another function:
```
aggregate(starwars_no_nas$height,
by = list(Sex = starwars_no_nas$sex),
sd)
```
```
## Sex x
## 1 female 15.32256
## 2 hermaphroditic NA
## 3 male 36.01075
## 4 none 49.14977
```
(let’s ignore the `NA`s). It is important to note that `aggregate()` returns a `data.frame` object.
You can only give one function to `aggregate()`, so if you need the mean and the standard deviation of `height`,
you must do it in two steps.
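Strictly speaking, there is a workaround: you can pass `aggregate()` a function that returns several
statistics at once, at the price of getting a matrix column back. A sketch (not run):
```
# FUN returns a named vector, so both statistics come back at once
aggregate(height ~ sex, data = starwars,
FUN = function(x) c(mean = mean(x), sd = sd(x)))
```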
Since R 4\.1, a new infix operator `|>` has been introduced, which is really handy for writing the kind of
code we’ve been looking at in this chapter. `|>` is also called a pipe, or the *base* pipe to distinguish
it from *another* pipe that we’ll discuss in the next section. For now, let’s learn about `|>`.
Consider the following:
```
10 |> sqrt()
```
```
## [1] 3.162278
```
This computes `sqrt(10)`; so what `|>` does, is pass the left hand side (`10`, in the example above) to the
right hand side (`sqrt()`). Using `|>` might seem more complicated and verbose than not using it, but you
will see in a bit why it can be useful. The next function I would like to introduce at this point is `with()`.
`with()` makes it possible to apply functions on `data.frame` columns without having to write `$` all the time.
For example, consider this:
```
mean(starwars$height, na.rm = TRUE)
```
```
## [1] 174.358
```
```
with(starwars,
mean(height, na.rm = TRUE))
```
```
## [1] 174.358
```
The advantage of using `with()` is that we can directly reference `height` without using `$`. Here again, this
is more verbose than simply using `$`… so why bother with it? It turns out that by combining `|>` and `with()`,
we can write very clean and concise code. Let’s go back to a previous example to illustrate this idea:
```
starwars_no_nas <- subset(starwars,
!is.na(height))
aggregate(starwars_no_nas$height,
by = list(sex = starwars_no_nas$sex),
mean)
```
```
## sex x
## 1 female 169.2667
## 2 hermaphroditic 175.0000
## 3 male 179.1053
## 4 none 131.2000
```
First, we created a new dataset where we filtered out rows where `height` is `NA`. This dataset is useless otherwise,
but we need it for the next part, where we actually do what we want (computing the average `height` by `sex`).
Using `|>` and `with()`, we can write this kind of two\-step operation in one go (below I filter on `sex` instead, and also group by `species`):
```
starwars |>
subset(!is.na(sex)) |>
with(aggregate(height,
by = list(Species = species,
Sex = sex),
mean))
```
```
## Species Sex x
## 1 Clawdite female 168.0000
## 2 Human female NA
## 3 Kaminoan female 213.0000
## 4 Mirialan female 168.0000
## 5 Tholothian female 184.0000
## 6 Togruta female 178.0000
## 7 Twi'lek female 178.0000
## 8 Hutt hermaphroditic 175.0000
## 9 Aleena male 79.0000
## 10 Besalisk male 198.0000
## 11 Cerean male 198.0000
## 12 Chagrian male 196.0000
## 13 Dug male 112.0000
## 14 Ewok male 88.0000
## 15 Geonosian male 183.0000
## 16 Gungan male 208.6667
## 17 Human male NA
## 18 Iktotchi male 188.0000
## 19 Kaleesh male 216.0000
## 20 Kaminoan male 229.0000
## 21 Kel Dor male 188.0000
## 22 Mon Calamari male 180.0000
## 23 Muun male 191.0000
## 24 Nautolan male 196.0000
## 25 Neimodian male 191.0000
## 26 Pau'an male 206.0000
## 27 Quermian male 264.0000
## 28 Rodian male 173.0000
## 29 Skakoan male 193.0000
## 30 Sullustan male 160.0000
## 31 Toong male 163.0000
## 32 Toydarian male 137.0000
## 33 Trandoshan male 190.0000
## 34 Twi'lek male 180.0000
## 35 Vulptereen male 94.0000
## 36 Wookiee male 231.0000
## 37 Xexto male 122.0000
## 38 Yoda's species male 66.0000
## 39 Zabrak male 173.0000
## 40 Droid none NA
```
So let’s unpack this. In the first two lines, using `|>`, we pass the `starwars` `data.frame` to `subset()`:
```
starwars |>
subset(!is.na(sex))
```
```
## # A tibble: 83 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywa… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## 7 Beru White… 165 75 brown light blue 47 fema… femin… Tatooi…
## 8 R5-D4 97 32 <NA> white,… red NA none mascu… Tatooi…
## 9 Biggs Dark… 183 84 black light brown 24 male mascu… Tatooi…
## 10 Obi-Wan Ke… 182 77 auburn… fair blue-g… 57 male mascu… Stewjon
## # … with 73 more rows, 4 more variables: species <chr>, films <list>,
## # vehicles <list>, starships <list>, and abbreviated variable names
## # ¹hair_color, ²skin_color, ³eye_color, ⁴birth_year, ⁵homeworld
```
As I explained before, this is exactly the same as `subset(starwars, !is.na(sex))`. Then, we pass the result of
`subset()` to the next function, `with()`. The first argument of `with()` must be a `data.frame`, and this is exactly
what `subset()` returns! So the output of `subset()` is passed down to `with()`, which makes it possible
to reference the columns of the `data.frame` in `aggregate()` directly. If you have a hard time understanding what
is going on, you can use `quote()` to inspect the code. `quote()` returns an expression without evaluating it:
```
quote(log(10))
```
```
## log(10)
```
Why am I bringing this up? Well, since `a |> f()` is exactly equal to `f(a)`, quoting the code above will return
an expression without any `|>`. For instance:
```
quote(10 |> log())
```
```
## log(10)
```
So let’s quote the big block of code from above:
```
quote(
starwars |>
subset(!is.na(sex)) |>
with(aggregate(height,
by = list(Species = species,
Sex = sex),
mean))
)
```
```
## with(subset(starwars, !is.na(sex)), aggregate(height, by = list(Species = species,
## Sex = sex), mean))
```
I think now you see why using `|>` makes code much clearer; the nested expression you would need to write otherwise
is much less readable, unless you define intermediate objects. And without `with()`, this is what you
would need to write:
```
b <- subset(starwars, !is.na(sex))
aggregate(b$height, by = list(Species = b$species, Sex = b$sex), mean)
```
To finish this section, let’s say that you wanted to have the average `height` and `mass` by sex. In this case
you need to specify the columns in `aggregate()` with `cbind()` (let’s use `na.rm = TRUE` again instead of
`subset()`ing the data beforehand):
```
starwars |>
with(aggregate(cbind(height, mass),
by = list(Sex = sex),
FUN = mean, na.rm = TRUE))
```
```
## Sex height mass
## 1 female 169.2667 54.68889
## 2 hermaphroditic 175.0000 1358.00000
## 3 male 179.1053 81.00455
## 4 none 131.2000 69.75000
```
Let’s now continue with some more advanced operations using this fake dataset:
```
survey_data_base <- as.data.frame(
tibble::tribble(
~id, ~var1, ~var2, ~var3,
1, 1, 0.2, 0.3,
2, 1.4, 1.9, 4.1,
3, 0.1, 2.8, 8.9,
4, 1.7, 1.9, 7.6
)
)
```
```
survey_data_base
```
```
## id var1 var2 var3
## 1 1 1.0 0.2 0.3
## 2 2 1.4 1.9 4.1
## 3 3 0.1 2.8 8.9
## 4 4 1.7 1.9 7.6
```
Depending on what you want to do with this data, it may not be in the right shape. For example, it
is not straightforward to compute the average of `var1`, `var2` and `var3` for each `id`: that would
require running `mean()` by row, and R is not really suited to row-based workflows. Well, I’m lying
a little bit here; it turns out that R comes with a `rowMeans()` function. So this would work:
```
survey_data_base |>
transform(mean_id = rowMeans(cbind(var1, var2, var3))) #transform adds a column to a data.frame
```
```
## id var1 var2 var3 mean_id
## 1 1 1.0 0.2 0.3 0.500000
## 2 2 1.4 1.9 4.1 2.466667
## 3 3 0.1 2.8 8.9 3.933333
## 4 4 1.7 1.9 7.6 3.733333
```
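If you need a different row-wise statistic, base R’s `apply()` with `MARGIN = 1` (meaning “apply over
rows”) can compute it. A minimal sketch, assuming we want a row-wise standard deviation (the column
name `sd_id` is my invention):
```
survey_data_base |>
  transform(sd_id = apply(cbind(var1, var2, var3), 1, sd)) # apply sd() to each row
```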
But there is no ready-made `rowSD()` or `rowMax()`, etc., and reaching for `apply()` every time gets old
quickly, so it is much better to reshape the data and put it in a format that gives us maximum
flexibility. To reshape the data, we’ll be using the aptly named `reshape()` command:
```
survey_data_long <- reshape(survey_data_base,
varying = list(2:4), v.names = "variable", direction = "long")
```
We can now easily compute the average of `variable` for each `id`:
```
aggregate(survey_data_long$variable,
by = list(Id = survey_data_long$id),
mean)
```
```
## Id x
## 1 1 0.500000
## 2 2 2.466667
## 3 3 3.933333
## 4 4 3.733333
```
or any other variable:
```
aggregate(survey_data_long$variable,
by = list(Id = survey_data_long$id),
max)
```
```
## Id x
## 1 1 1.0
## 2 2 4.1
## 3 3 8.9
## 4 4 7.6
```
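Once the data is long, any summary function slots into the same pattern; for instance the standard
deviation (a sketch, same shape as the calls above):
```
aggregate(survey_data_long$variable,
          by = list(Id = survey_data_long$id),
          sd) # sd() instead of mean() or max()
```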
As you can see, R comes with very powerful functions right out of the box, ready to use. When I was
studying, unfortunately, my professors had been brought up on FORTRAN loops, so we had to do all of
this using loops (not the reshaping, thankfully), which was not so easy.
Now that we have seen how *base* R works, let’s redo the analysis using `{tidyverse}` verbs.
The `{tidyverse}` provides many more functions, each of them doing only one single thing. You will
shortly see why this is quite important: because each function focuses on a single task and treats
the data frame as the central object, it becomes possible to build really complex workflows,
piece by piece, very easily.
But before deep diving into the `{tidyverse}`, let’s take a moment to discuss another infix
operator, `%>%`.
4\.2 Smoking is bad for you, but pipes are your friend
------------------------------------------------------
The title of this section might sound weird at first, but by the end of it, you’ll get this
(terrible) pun.
You probably know René Magritte’s painting *La trahison des images*: a pipe, captioned “Ceci n’est pas une pipe”.
It turns out there’s an R package from the `tidyverse` that is called `magrittr`. What does this
package do? This package introduced *pipes* to R, way before `|>` in R 4\.1\. Pipes are a concept
from the Unix operating system; if you’re using a GNU\+Linux distribution or macOS, you’re basically
using a *modern* unix (that’s an oversimplification, but I’m an economist by training, and
outrageously oversimplifying things is what we do, deal with it). The *magrittr* pipe is written as
`%>%`. Just like `|>`, `%>%` takes the left-hand side and feeds it as the first argument of the
function on the right-hand side. Try the following:
```
library(magrittr)
```
```
16 %>% sqrt
```
```
## [1] 4
```
You can chain multiple functions, as you can with `|>`:
```
16 %>%
sqrt %>%
log
```
```
## [1] 1.386294
```
But unlike with `|>`, you can omit the `()`. `%>%` also has other features; for example, you can
pipe things to other infix operators, such as `+`. You can use `+` as usual:
```
2 + 12
```
```
## [1] 14
```
Or as a prefix operator:
```
`+`(2, 12)
```
```
## [1] 14
```
You can use this notation with `%>%`:
```
16 %>% sqrt %>% `+`(18)
```
```
## [1] 22
```
This also works using `|>` since R version 4\.2, but only if you use the `_` pipe placeholder:
```
16 |> sqrt() |> `+`(x = _, 18)
```
```
## [1] 22
```
The value `16` got fed to `sqrt()`, and the output of `sqrt(16)` (4) got fed to `+`(18)
(so we got `+`(4, 18) = 22). Without `%>%` you’d write the line just above like this:
```
sqrt(16) + 18
```
```
## [1] 22
```
Just like before, with `|>`, this might seem overly complicated, but using these pipes will
make our code much more readable. I’m sure you’ll be convinced by the end of this chapter.
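One technical difference between the two pipes is worth knowing: `|>` is rewritten at parse time,
while `%>%` is an ordinary function. `quote()`, which we met earlier, makes this visible (a small
sketch; the results are shown as comments):
```
quote(16 |> sqrt())  # returns sqrt(16): the native pipe vanished at parse time
quote(16 %>% sqrt()) # returns 16 %>% sqrt(): %>% is a real function call
```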
`%>%` is not the only pipe operator in `{magrittr}`. There’s also `%T>%`, `%<>%` and `%$%`. All have their
uses, but they are basically shortcuts for common combinations of `%>%` with another function. This
means that you can live without them, and because of this, I will not discuss them in detail.
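That said, here is a one-line taste of `%$%`, just so you have seen it (a sketch; `%$%` exposes the
columns of the left-hand side, much like `with()` does):
```
library(magrittr)
mtcars %$% cor(mpg, wt) # same as with(mtcars, cor(mpg, wt))
```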
4\.3 The `{tidyverse}`’s *enfant prodige*: `{dplyr}`
----------------------------------------------------
The best way to get started with the tidyverse packages is to get to know `{dplyr}`. `{dplyr}`
provides a lot of very useful functions that make it very easy to get descriptive statistics or
add new columns to your data.
### 4\.3\.1 A first taste of data manipulation with `{dplyr}`
This section will walk you through a typical analysis using `{dplyr}` functions. Just go with it; I
will give more details in the next sections.
First, let’s load `{dplyr}` and the included `starwars` dataset. Let’s also take a look at the
first 6 lines of the dataset:
```
library(dplyr)
data(starwars)
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
`data(starwars)` loads the example dataset called `starwars` that is included in the package
`{dplyr}`. As I said earlier, this is just an example; you could have loaded an external dataset,
from a `.csv` file for instance. This does not matter for what comes next.
Like we saw earlier, R includes a lot of functions for descriptive statistics, such as `mean()`,
`sd()`, `cov()`, and many more. What `{dplyr}` brings to the table is a grammar of data
manipulation that makes it very easy to apply descriptive statistics functions, or any other
function, to your data.
Just like before, we are going to compute the average height by `sex`:
```
starwars %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE))
```
```
## # A tibble: 5 × 2
## sex mean_height
## <chr> <dbl>
## 1 female 169.
## 2 hermaphroditic 175
## 3 male 179.
## 4 none 131.
## 5 <NA> 181.
```
The very nice thing about using `%>%` and `{dplyr}` verbs/functions is that the code is really
readable. The above three lines can be translated into English like so:
*Take the starwars dataset, then group by sex, then compute the mean height (for each subgroup) by
omitting missing values.*
`%>%` can be translated by “then”. Without `%>%` you would need to change the code to:
```
summarise(group_by(starwars, sex), mean(height, na.rm = TRUE))
```
```
## # A tibble: 5 × 2
## sex `mean(height, na.rm = TRUE)`
## <chr> <dbl>
## 1 female 169.
## 2 hermaphroditic 175
## 3 male 179.
## 4 none 131.
## 5 <NA> 181.
```
Unlike with the *base* approach, each function does only one thing. In the base approach,
`aggregate()` was also used to define the subgroups. This is not the case with `{dplyr}`: one
function creates the groups (`group_by()`) and another computes the summaries
(`summarise()`). Also, `group_by()` creates a specific subgroup for individuals where `sex` is
missing; this is the last line in the data frame, where `sex` is `NA`. Another nice thing is that
you can name the column containing the average height. I chose to name it `mean_height`.
Now, let’s suppose that we want to filter some data first:
```
starwars %>%
filter(gender == "masculine") %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE))
```
```
## # A tibble: 3 × 2
## sex mean_height
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male 179.
## 3 none 140
```
Again, the `%>%` makes the above lines of code very easy to read. Without it, one would need to
write:
```
summarise(group_by(filter(starwars, gender == "masculine"), sex), mean(height, na.rm = TRUE))
```
```
## # A tibble: 3 × 2
## sex `mean(height, na.rm = TRUE)`
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male 179.
## 3 none 140
```
I think you agree with me that this is not very readable. One way to make it more readable would
be to save intermediary variables:
```
filtered_data <- filter(starwars, gender == "masculine")
grouped_data <- group_by(filter(starwars, gender == "masculine"), sex)
summarise(grouped_data, mean(height))
```
```
## # A tibble: 3 × 2
## sex `mean(height)`
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male NA
## 3 none NA
```
But this can get very tedious. Once you’re used to `%>%`, you won’t go back to not using it.
Before continuing, and to make things clearer: `filter()`, `group_by()` and `summarise()` are
functions that are included in `{dplyr}`. `%>%` actually comes from `{magrittr}`, but `{dplyr}`
re-exports it, so you do not need to load `{magrittr}` yourself.
The results of all these operations that use `{dplyr}` functions are other datasets, or
`tibble`s. This means that you can save them in a variable, or write them to disk, and then work
with them like any other dataset.
```
mean_height <- starwars %>%
group_by(sex) %>%
summarise(mean(height))
class(mean_height)
```
```
## [1] "tbl_df" "tbl" "data.frame"
```
```
head(mean_height)
```
```
## # A tibble: 5 × 2
## sex `mean(height)`
## <chr> <dbl>
## 1 female NA
## 2 hermaphroditic 175
## 3 male NA
## 4 none NA
## 5 <NA> NA
```
You could then write this data to disk using `rio::export()` for instance. If you need more than
the mean of the height, you can keep adding as many functions as needed (another advantage over
`aggregate()`):
```
summary_table <- starwars %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n())
summary_table
```
```
## # A tibble: 5 × 4
## sex mean_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 hermaphroditic 175 NA 1
## 3 male 179. 1297. 60
## 4 none 131. 2416. 6
## 5 <NA> 181. 8.33 4
```
I’ve added more functions, namely `var()`, to get the variance of height, and `n()`, which
is a function from `{dplyr}`, not base R, to get the number of observations. This is quite useful,
because we see that there is a group with only one individual. Let’s focus on the
sexes for which we have more than one individual. Since we saved all the previous operations (which
produce a `tibble`) in a variable, we can keep going from there:
```
summary_table2 <- summary_table %>%
filter(n_obs > 1)
summary_table2
```
```
## # A tibble: 4 × 4
## sex mean_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 male 179. 1297. 60
## 3 none 131. 2416. 6
## 4 <NA> 181. 8.33 4
```
As mentioned before, there are a lot of `NA`s; this is because, by default, `mean()` and `var()`
return `NA` if even one single observation is `NA`. This is good, because it forces you to look at
the data to see what is going on. If you got a number even though there were `NA`s, you could
very easily miss these missing values. It is better for functions to fail early and often than the
opposite. This is why we keep using `na.rm = TRUE` for `mean()` and `var()`.
Now let’s actually take a look at the rows where `sex` is `NA`:
```
starwars %>%
filter(is.na(sex))
```
```
## # A tibble: 4 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Ric Olié 183 NA brown fair blue NA <NA> <NA> Naboo
## 2 Quarsh Pana… 183 NA black dark brown 62 <NA> <NA> Naboo
## 3 Sly Moore 178 48 none pale white NA <NA> <NA> Umbara
## 4 Captain Pha… NA NA unknown unknown unknown NA <NA> <NA> <NA>
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
There are only 4 rows where `sex` is `NA`. Let’s ignore them:
```
starwars %>%
filter(!is.na(sex)) %>%
group_by(sex) %>%
summarise(ave_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n()) %>%
filter(n_obs > 1)
```
```
## # A tibble: 3 × 4
## sex ave_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 male 179. 1297. 60
## 3 none 131. 2416. 6
```
And why not compute the same table, but first add another stratifying variable?
```
starwars %>%
filter(!is.na(sex)) %>%
group_by(sex, eye_color) %>%
summarise(ave_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n()) %>%
filter(n_obs > 1)
```
```
## `summarise()` has grouped output by 'sex'. You can override using the `.groups`
## argument.
```
```
## # A tibble: 12 × 5
## # Groups: sex [3]
## sex eye_color ave_height var_height n_obs
## <chr> <chr> <dbl> <dbl> <int>
## 1 female black 196. 612. 2
## 2 female blue 167 118. 6
## 3 female brown 160 42 5
## 4 female hazel 178 NA 2
## 5 male black 182 1197 7
## 6 male blue 190. 434. 12
## 7 male brown 167. 1663. 15
## 8 male orange 181. 1306. 7
## 9 male red 190. 0.5 2
## 10 male unknown 136 6498 2
## 11 male yellow 180. 2196. 9
## 12 none red 131 3571 3
```
Ok, that’s it for a first taste. We have already discovered some very useful `{dplyr}` functions:
`filter()`, `group_by()` and `summarise()`.
Now, let’s look at these functions in more detail.
### 4\.3\.2 Filter the rows of a dataset with `filter()`
We’re going to use the `Gasoline` dataset from the `plm` package, so install that first:
```
install.packages("plm")
```
Then load the required data:
```
data(Gasoline, package = "plm")
```
and load dplyr:
```
library(dplyr)
```
This dataset gives the consumption of gasoline for 18 countries from 1960 to 1978\. When you load
the data like this, it is a standard `data.frame`. `{dplyr}` functions can be used on standard
`data.frame` objects, but also on `tibble`s. `tibble`s are just like data frames, but with a better
print method (and other niceties). I’ll discuss the `{tibble}` package later, but for now, let’s
convert the data to a `tibble` and change its name, and also transform the `country` column to
lower case:
```
gasoline <- as_tibble(Gasoline)
gasoline <- gasoline %>%
mutate(country = tolower(country))
```
`filter()` is pretty straightforward. What if you would like to subset the data to focus on the
year 1969? Simple:
```
filter(gasoline, year == 1969)
```
```
## # A tibble: 18 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 belgium 1969 3.85 -5.86 -0.355 -8.52
## 3 canada 1969 4.86 -5.56 -1.04 -8.10
## 4 denmark 1969 4.17 -5.72 -0.407 -8.47
## 5 france 1969 3.77 -5.84 -0.315 -8.37
## 6 germany 1969 3.90 -5.83 -0.589 -8.44
## 7 greece 1969 4.89 -6.59 -0.180 -10.7
## 8 ireland 1969 4.21 -6.38 -0.272 -8.95
## 9 italy 1969 3.74 -6.28 -0.248 -8.67
## 10 japan 1969 4.52 -6.16 -0.417 -9.61
## 11 netherla 1969 3.99 -5.88 -0.417 -8.63
## 12 norway 1969 4.09 -5.74 -0.338 -8.69
## 13 spain 1969 3.99 -5.60 0.669 -9.72
## 14 sweden 1969 3.99 -7.77 -2.73 -8.20
## 15 switzerl 1969 4.21 -5.91 -0.918 -8.47
## 16 turkey 1969 5.72 -7.39 -0.298 -12.5
## 17 u.k. 1969 3.95 -6.03 -0.383 -8.47
## 18 u.s.a. 1969 4.84 -5.41 -1.22 -7.79
```
Let’s use `%>%`, since we’re familiar with it now:
```
gasoline %>%
filter(year == 1969)
```
```
## # A tibble: 18 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 belgium 1969 3.85 -5.86 -0.355 -8.52
## 3 canada 1969 4.86 -5.56 -1.04 -8.10
## 4 denmark 1969 4.17 -5.72 -0.407 -8.47
## 5 france 1969 3.77 -5.84 -0.315 -8.37
## 6 germany 1969 3.90 -5.83 -0.589 -8.44
## 7 greece 1969 4.89 -6.59 -0.180 -10.7
## 8 ireland 1969 4.21 -6.38 -0.272 -8.95
## 9 italy 1969 3.74 -6.28 -0.248 -8.67
## 10 japan 1969 4.52 -6.16 -0.417 -9.61
## 11 netherla 1969 3.99 -5.88 -0.417 -8.63
## 12 norway 1969 4.09 -5.74 -0.338 -8.69
## 13 spain 1969 3.99 -5.60 0.669 -9.72
## 14 sweden 1969 3.99 -7.77 -2.73 -8.20
## 15 switzerl 1969 4.21 -5.91 -0.918 -8.47
## 16 turkey 1969 5.72 -7.39 -0.298 -12.5
## 17 u.k. 1969 3.95 -6.03 -0.383 -8.47
## 18 u.s.a. 1969 4.84 -5.41 -1.22 -7.79
```
You can also filter more than just one year, by using the `%in%` operator:
```
gasoline %>%
filter(year %in% seq(1969, 1973))
```
```
## # A tibble: 90 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1970 4.08 -6.08 -0.597 -8.73
## 3 austria 1971 4.11 -6.04 -0.654 -8.64
## 4 austria 1972 4.13 -5.98 -0.596 -8.54
## 5 austria 1973 4.20 -5.90 -0.594 -8.49
## 6 belgium 1969 3.85 -5.86 -0.355 -8.52
## 7 belgium 1970 3.87 -5.80 -0.378 -8.45
## 8 belgium 1971 3.87 -5.76 -0.399 -8.41
## 9 belgium 1972 3.91 -5.71 -0.311 -8.36
## 10 belgium 1973 3.90 -5.64 -0.373 -8.31
## # … with 80 more rows
```
It is also possible to use `between()`, a helper function:
```
gasoline %>%
filter(between(year, 1969, 1973))
```
```
## # A tibble: 90 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1970 4.08 -6.08 -0.597 -8.73
## 3 austria 1971 4.11 -6.04 -0.654 -8.64
## 4 austria 1972 4.13 -5.98 -0.596 -8.54
## 5 austria 1973 4.20 -5.90 -0.594 -8.49
## 6 belgium 1969 3.85 -5.86 -0.355 -8.52
## 7 belgium 1970 3.87 -5.80 -0.378 -8.45
## 8 belgium 1971 3.87 -5.76 -0.399 -8.41
## 9 belgium 1972 3.91 -5.71 -0.311 -8.36
## 10 belgium 1973 3.90 -5.64 -0.373 -8.31
## # … with 80 more rows
```
To select non\-consecutive years:
```
gasoline %>%
filter(year %in% c(1969, 1973, 1977))
```
```
## # A tibble: 54 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1973 4.20 -5.90 -0.594 -8.49
## 3 austria 1977 3.93 -5.83 -0.422 -8.25
## 4 belgium 1969 3.85 -5.86 -0.355 -8.52
## 5 belgium 1973 3.90 -5.64 -0.373 -8.31
## 6 belgium 1977 3.85 -5.56 -0.432 -8.14
## 7 canada 1969 4.86 -5.56 -1.04 -8.10
## 8 canada 1973 4.90 -5.41 -1.13 -7.94
## 9 canada 1977 4.81 -5.34 -1.07 -7.77
## 10 denmark 1969 4.17 -5.72 -0.407 -8.47
## # … with 44 more rows
```
`%in%` tests if an object is part of a set.
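Here is `%in%` on its own, outside of `filter()` (a quick sketch; the result is shown as a comment):
```
c(1969, 1973, 1980) %in% c(1969, 1973, 1977)
# TRUE TRUE FALSE
```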
### 4\.3\.3 Select columns with `select()`
While `filter()` allows you to keep or discard rows of data, `select()` allows you to keep or
discard entire columns. To keep columns:
```
gasoline %>%
select(country, year, lrpmg)
```
```
## # A tibble: 342 × 3
## country year lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
To discard them:
```
gasoline %>%
select(-country, -year, -lrpmg)
```
```
## # A tibble: 342 × 3
## lgaspcar lincomep lcarpcap
## <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -9.77
## 2 4.10 -6.43 -9.61
## 3 4.07 -6.41 -9.46
## 4 4.06 -6.37 -9.34
## 5 4.04 -6.32 -9.24
## 6 4.03 -6.29 -9.12
## 7 4.05 -6.25 -9.02
## 8 4.05 -6.23 -8.93
## 9 4.05 -6.21 -8.85
## 10 4.05 -6.15 -8.79
## # … with 332 more rows
```
To rename them:
```
gasoline %>%
select(country, date = year, lrpmg)
```
```
## # A tibble: 342 × 3
## country date lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
There’s also `rename()`:
```
gasoline %>%
rename(date = year)
```
```
## # A tibble: 342 × 6
## country date lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
`rename()` does not do any kind of selection, but just renames.
You can also use `select()` to re\-order columns:
```
gasoline %>%
select(year, country, lrpmg, everything())
```
```
## # A tibble: 342 × 6
## year country lrpmg lgaspcar lincomep lcarpcap
## <int> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 1960 austria -0.335 4.17 -6.47 -9.77
## 2 1961 austria -0.351 4.10 -6.43 -9.61
## 3 1962 austria -0.380 4.07 -6.41 -9.46
## 4 1963 austria -0.414 4.06 -6.37 -9.34
## 5 1964 austria -0.445 4.04 -6.32 -9.24
## 6 1965 austria -0.497 4.03 -6.29 -9.12
## 7 1966 austria -0.467 4.05 -6.25 -9.02
## 8 1967 austria -0.506 4.05 -6.23 -8.93
## 9 1968 austria -0.522 4.05 -6.21 -8.85
## 10 1969 austria -0.559 4.05 -6.15 -8.79
## # … with 332 more rows
```
`everything()` is a helper function, and there are also `starts_with()` and `ends_with()`. For
example, what if we are only interested in columns whose names start with “l”?
```
gasoline %>%
select(starts_with("l"))
```
```
## # A tibble: 342 × 4
## lgaspcar lincomep lrpmg lcarpcap
## <dbl> <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -0.335 -9.77
## 2 4.10 -6.43 -0.351 -9.61
## 3 4.07 -6.41 -0.380 -9.46
## 4 4.06 -6.37 -0.414 -9.34
## 5 4.04 -6.32 -0.445 -9.24
## 6 4.03 -6.29 -0.497 -9.12
## 7 4.05 -6.25 -0.467 -9.02
## 8 4.05 -6.23 -0.506 -8.93
## 9 4.05 -6.21 -0.522 -8.85
## 10 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
`ends_with()` works in a similar fashion. There is also `contains()`:
```
gasoline %>%
select(country, year, contains("car"))
```
```
## # A tibble: 342 × 4
## country year lgaspcar lcarpcap
## <chr> <int> <dbl> <dbl>
## 1 austria 1960 4.17 -9.77
## 2 austria 1961 4.10 -9.61
## 3 austria 1962 4.07 -9.46
## 4 austria 1963 4.06 -9.34
## 5 austria 1964 4.04 -9.24
## 6 austria 1965 4.03 -9.12
## 7 austria 1966 4.05 -9.02
## 8 austria 1967 4.05 -8.93
## 9 austria 1968 4.05 -8.85
## 10 austria 1969 4.05 -8.79
## # … with 332 more rows
```
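And for completeness, a quick sketch of `ends_with()` on the same data:
```
gasoline %>%
  select(ends_with("p")) # keeps lincomep and lcarpcap
```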
You can read more about these helper functions [here](https://tidyselect.r-lib.org/reference/language.html), but we’re going to look more into
them in a coming section.
Another verb, similar to `select()`, is `pull()`. Let’s compare the two:
```
gasoline %>%
select(lrpmg)
```
```
## # A tibble: 342 × 1
## lrpmg
## <dbl>
## 1 -0.335
## 2 -0.351
## 3 -0.380
## 4 -0.414
## 5 -0.445
## 6 -0.497
## 7 -0.467
## 8 -0.506
## 9 -0.522
## 10 -0.559
## # … with 332 more rows
```
```
gasoline %>%
pull(lrpmg) %>%
head() # using head() because there are 342 elements in total
```
```
## [1] -0.3345476 -0.3513276 -0.3795177 -0.4142514 -0.4453354 -0.4970607
```
`pull()`, unlike `select()`, does not return a `tibble`, but only the column you want, as a
vector.
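Because `pull()` returns a plain vector, it is handy at the end of a pipeline, when you want to hand
the values to a function that expects a vector. A sketch:
```
gasoline %>%
  filter(year == 1969) %>%
  pull(lgaspcar) %>% # extract the column as a vector
  mean()
```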
### 4\.3\.4 Group the observations of your dataset with `group_by()`
`group_by()` is a very useful verb; as the name implies, it allows you to create groups and then,
for example, compute descriptive statistics by groups. For example, let’s group our data by
country:
```
gasoline %>%
group_by(country)
```
```
## # A tibble: 342 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
It looks like nothing much happened, but if you look at the second line of the output you can read
the following:
```
## # Groups: country [18]
```
This means that the data is grouped, and every computation you do from now on will take these groups
into account. It is also possible to group by more than one variable:
```
gasoline %>%
group_by(country, year)
```
```
## # A tibble: 342 × 6
## # Groups: country, year [342]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
and so on. You can then also ungroup:
```
gasoline %>%
group_by(country, year) %>%
ungroup()
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Once your data is grouped, the operations that will follow will be executed inside each group.
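For instance, a grouped `filter()` is evaluated within each group; this sketch keeps each country’s
most recent observation (1978 for every country, since the panel is balanced):
```
gasoline %>%
  group_by(country) %>%
  filter(year == max(year)) %>% # max(year) is computed per country
  ungroup()
```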
### 4\.3\.5 Get summary statistics with `summarise()`
Ok, now that we have learned the basic verbs, we can start to do more interesting stuff. For
example, one might want to compute the average gasoline consumption in each country, for
the whole period:
```
gasoline %>%
group_by(country) %>%
summarise(mean(lgaspcar))
```
```
## # A tibble: 18 × 2
## country `mean(lgaspcar)`
## <chr> <dbl>
## 1 austria 4.06
## 2 belgium 3.92
## 3 canada 4.86
## 4 denmark 4.19
## 5 france 3.82
## 6 germany 3.89
## 7 greece 4.88
## 8 ireland 4.23
## 9 italy 3.73
## 10 japan 4.70
## 11 netherla 4.08
## 12 norway 4.11
## 13 spain 4.06
## 14 sweden 4.01
## 15 switzerl 4.24
## 16 turkey 5.77
## 17 u.k. 3.98
## 18 u.s.a. 4.82
```
`mean()` was given as an argument to `summarise()`, which is a `{dplyr}` verb. What we get is
another `tibble`, that contains the variable we used to group, as well as the average per country.
We can also rename this column:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar))
```
```
## # A tibble: 18 × 2
## country mean_gaspcar
## <chr> <dbl>
## 1 austria 4.06
## 2 belgium 3.92
## 3 canada 4.86
## 4 denmark 4.19
## 5 france 3.82
## 6 germany 3.89
## 7 greece 4.88
## 8 ireland 4.23
## 9 italy 3.73
## 10 japan 4.70
## 11 netherla 4.08
## 12 norway 4.11
## 13 spain 4.06
## 14 sweden 4.01
## 15 switzerl 4.24
## 16 turkey 5.77
## 17 u.k. 3.98
## 18 u.s.a. 4.82
```
and because the output is a `tibble`, we can continue to use `{dplyr}` verbs on it:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar)) %>%
filter(country == "france")
```
```
## # A tibble: 1 × 2
## country mean_gaspcar
## <chr> <dbl>
## 1 france 3.82
```
`summarise()` is a very useful verb. For example, we can compute several descriptive statistics at once:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar),
sd_gaspcar = sd(lgaspcar),
max_gaspcar = max(lgaspcar),
min_gaspcar = min(lgaspcar))
```
```
## # A tibble: 18 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92
## 2 belgium 3.92 0.103 4.16 3.82
## 3 canada 4.86 0.0262 4.90 4.81
## 4 denmark 4.19 0.158 4.50 4.00
## 5 france 3.82 0.0499 3.91 3.75
## 6 germany 3.89 0.0239 3.93 3.85
## 7 greece 4.88 0.255 5.38 4.48
## 8 ireland 4.23 0.0437 4.33 4.16
## 9 italy 3.73 0.220 4.05 3.38
## 10 japan 4.70 0.684 6.00 3.95
## 11 netherla 4.08 0.286 4.65 3.71
## 12 norway 4.11 0.123 4.44 3.96
## 13 spain 4.06 0.317 4.75 3.62
## 14 sweden 4.01 0.0364 4.07 3.91
## 15 switzerl 4.24 0.102 4.44 4.05
## 16 turkey 5.77 0.329 6.16 5.14
## 17 u.k. 3.98 0.0479 4.10 3.91
## 18 u.s.a. 4.82 0.0219 4.86 4.79
```
Because the output is a `tibble`, you can save it in a variable of course:
```
desc_gasoline <- gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar),
sd_gaspcar = sd(lgaspcar),
max_gaspcar = max(lgaspcar),
min_gaspcar = min(lgaspcar))
```
And then you can answer questions such as, *which country has the maximum average gasoline
consumption?*:
```
desc_gasoline %>%
filter(max(mean_gaspcar) == mean_gaspcar)
```
```
## # A tibble: 1 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 turkey 5.77 0.329 6.16 5.14
```
Turns out it’s Turkey. What about the minimum consumption?
```
desc_gasoline %>%
filter(min(mean_gaspcar) == mean_gaspcar)
```
```
## # A tibble: 1 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 italy 3.73 0.220 4.05 3.38
```
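Another way to answer both questions at once is to sort the summary table with `arrange()`, another
`{dplyr}` verb (a sketch):
```
desc_gasoline %>%
  arrange(desc(mean_gaspcar)) # highest average consumption first
```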
Because the output of `{dplyr}` verbs is a tibble, it is always possible to continue working with it.
This is one advantage over the base `summary()` function, whose output is not very easy to
manipulate.
### 4\.3\.6 Adding columns with `mutate()` and `transmute()`
`mutate()` adds a column to the `tibble`, which can contain any transformation of any other
variable:
```
gasoline %>%
group_by(country) %>%
mutate(n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap `n()`
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
Using `mutate()` I’ve added a column that counts how many times the country appears in the `tibble`,
using `n()`, another `{dplyr}` function. There’s also `count()` and `tally()`, which we are going to
see further down. It is also possible to rename the column on the fly:
```
gasoline %>%
group_by(country) %>%
mutate(count = n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap count
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
It is possible to do any arbitrary operation:
```
gasoline %>%
group_by(country) %>%
mutate(spam = exp(lgaspcar + lincomep))
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap spam
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 0.100
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 0.0978
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 0.0969
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 0.0991
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 0.102
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 0.104
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 0.110
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 0.113
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 0.115
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 0.122
## # … with 332 more rows
```
`transmute()` is the same as `mutate()`, but only returns the created variable:
```
gasoline %>%
group_by(country) %>%
transmute(spam = exp(lgaspcar + lincomep))
```
```
## # A tibble: 342 × 2
## # Groups: country [18]
## country spam
## <chr> <dbl>
## 1 austria 0.100
## 2 austria 0.0978
## 3 austria 0.0969
## 4 austria 0.0991
## 5 austria 0.102
## 6 austria 0.104
## 7 austria 0.110
## 8 austria 0.113
## 9 austria 0.115
## 10 austria 0.122
## # … with 332 more rows
```
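Note that grouping really matters for `mutate()` when the transformation involves a group-level
statistic. A sketch, demeaning consumption within each country (the column name `demeaned` is mine):
```
gasoline %>%
  group_by(country) %>%
  mutate(demeaned = lgaspcar - mean(lgaspcar)) %>% # mean(lgaspcar) is the country mean
  ungroup()
```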
### 4\.3\.7 Joining `tibble`s with `full_join()`, `left_join()`, `right_join()` and all the others
I will end this section on `{dplyr}` with some very useful verbs: the `*_join()` family. Let’s first
load another dataset from the `plm` package, `SumHes`, convert it to a `tibble`
and rename it:
```
data(SumHes, package = "plm")
pwt <- SumHes %>%
as_tibble() %>%
mutate(country = tolower(country))
```
Let’s take a quick look at the data:
```
glimpse(pwt)
```
```
## Rows: 3,250
## Columns: 7
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 19…
## $ country <chr> "algeria", "algeria", "algeria", "algeria", "algeria", "algeri…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ pop <int> 10800, 11016, 11236, 11460, 11690, 11923, 12267, 12622, 12986,…
## $ gdp <int> 1723, 1599, 1275, 1517, 1589, 1584, 1548, 1600, 1758, 1835, 18…
## $ sr <dbl> 19.9, 21.1, 15.0, 13.9, 10.6, 11.0, 8.3, 11.3, 15.1, 18.2, 19.…
```
We can merge both `gasoline` and `pwt` by country and year, as these two variables are common to
both datasets. There are more countries and years in the `pwt` dataset, so when merging both, and
depending on which function you use, you will either have `NA`’s for the variables where there is
no match, or rows that will be dropped. Let’s start with `full_join()`:
```
gas_pwt_full <- gasoline %>%
full_join(pwt, by = c("country", "year"))
```
Let’s see which countries and years are included:
```
gas_pwt_full %>%
count(country, year)
```
```
## # A tibble: 3,307 × 3
## country year n
## <chr> <int> <int>
## 1 algeria 1960 1
## 2 algeria 1961 1
## 3 algeria 1962 1
## 4 algeria 1963 1
## 5 algeria 1964 1
## 6 algeria 1965 1
## 7 algeria 1966 1
## 8 algeria 1967 1
## 9 algeria 1968 1
## 10 algeria 1969 1
## # … with 3,297 more rows
```
As you can see, every country and year was included, but what happened for, say, the U.S.S.R.? This country
is in `pwt` but not in `gasoline` at all:
```
gas_pwt_full %>%
filter(country == "u.s.s.r.")
```
```
## # A tibble: 26 × 11
## country year lgaspcar lincomep lrpmg lcarp…¹ opec com pop gdp sr
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <fct> <fct> <int> <int> <dbl>
## 1 u.s.s.r. 1960 NA NA NA NA no yes 214400 2397 37.9
## 2 u.s.s.r. 1961 NA NA NA NA no yes 217896 2542 39.4
## 3 u.s.s.r. 1962 NA NA NA NA no yes 221449 2656 38.4
## 4 u.s.s.r. 1963 NA NA NA NA no yes 225060 2681 38.4
## 5 u.s.s.r. 1964 NA NA NA NA no yes 227571 2854 39.5
## 6 u.s.s.r. 1965 NA NA NA NA no yes 230109 3049 39.9
## 7 u.s.s.r. 1966 NA NA NA NA no yes 232676 3247 39.9
## 8 u.s.s.r. 1967 NA NA NA NA no yes 235272 3454 40.2
## 9 u.s.s.r. 1968 NA NA NA NA no yes 237896 3730 40.6
## 10 u.s.s.r. 1969 NA NA NA NA no yes 240550 3808 37.9
## # … with 16 more rows, and abbreviated variable name ¹lcarpcap
```
As you probably guessed, for countries that appear in `pwt` but not in `gasoline`, the variables
coming from `gasoline` are filled with `NA`s. One could remove all these lines and only keep
countries for which these variables are not `NA` everywhere with `filter()`, but there is a simpler solution:
```
gas_pwt_inner <- gasoline %>%
inner_join(pwt, by = c("country", "year"))
```
Let’s use `tabyl()` from the `{janitor}` package, which is a very nice alternative to the `table()`
function from base R:
```
library(janitor)
gas_pwt_inner %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
Only countries with values in both datasets were returned: 15 of the 18 countries in `gasoline`.
Germany is missing (it is called “germany west” in `pwt` and “germany” in `gasoline`; I left it as is to
provide an example of a country that doesn’t match), and so are “netherla” and “switzerl”, whose
truncated names presumably don’t match the full names in `pwt`. Let’s also look at the variables:
```
glimpse(gas_pwt_inner)
```
```
## Rows: 285
## Columns: 11
## $ country <chr> "austria", "austria", "austria", "austria", "austria", "austr…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 4.173244, 4.100989, 4.073177, 4.059509, 4.037689, 4.033983, 4…
## $ lincomep <dbl> -6.474277, -6.426006, -6.407308, -6.370679, -6.322247, -6.294…
## $ lrpmg <dbl> -0.3345476, -0.3513276, -0.3795177, -0.4142514, -0.4453354, -…
## $ lcarpcap <dbl> -9.766840, -9.608622, -9.457257, -9.343155, -9.237739, -9.123…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ pop <int> 7048, 7087, 7130, 7172, 7215, 7255, 7308, 7338, 7362, 7384, 7…
## $ gdp <int> 5143, 5388, 5481, 5688, 5978, 6144, 6437, 6596, 6847, 7162, 7…
## $ sr <dbl> 24.3, 24.5, 23.3, 22.9, 25.2, 25.2, 26.7, 25.6, 25.7, 26.1, 2…
```
The variables from both datasets are in the joined data.
Contrast this to `semi_join()`:
```
gas_pwt_semi <- gasoline %>%
semi_join(pwt, by = c("country", "year"))
glimpse(gas_pwt_semi)
```
```
## Rows: 285
## Columns: 6
## $ country <chr> "austria", "austria", "austria", "austria", "austria", "austr…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 4.173244, 4.100989, 4.073177, 4.059509, 4.037689, 4.033983, 4…
## $ lincomep <dbl> -6.474277, -6.426006, -6.407308, -6.370679, -6.322247, -6.294…
## $ lrpmg <dbl> -0.3345476, -0.3513276, -0.3795177, -0.4142514, -0.4453354, -…
## $ lcarpcap <dbl> -9.766840, -9.608622, -9.457257, -9.343155, -9.237739, -9.123…
```
```
gas_pwt_semi %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
Only columns of `gasoline` are returned, and only rows of `gasoline` that were matched with rows
from `pwt`. `semi_join()` is not a commutative operation:
```
pwt_gas_semi <- pwt %>%
semi_join(gasoline, by = c("country", "year"))
glimpse(pwt_gas_semi)
```
```
## Rows: 285
## Columns: 7
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 19…
## $ country <chr> "canada", "canada", "canada", "canada", "canada", "canada", "c…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ pop <int> 17910, 18270, 18614, 18963, 19326, 19678, 20049, 20411, 20744,…
## $ gdp <int> 7258, 7261, 7605, 7876, 8244, 8664, 9093, 9231, 9582, 9975, 10…
## $ sr <dbl> 22.7, 21.5, 22.1, 21.9, 22.9, 24.8, 25.4, 23.1, 22.6, 23.4, 21…
```
```
pwt_gas_semi %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
The rows are the same, but not the columns.
`left_join()` and `right_join()` return all the rows from either the dataset that is on the
“left” (the first argument of the function) or on the “right” (the second argument of the
function), but all the columns from both datasets. So depending on which countries you’re interested in,
you’re going to use one of these functions:
```
gas_pwt_left <- gasoline %>%
left_join(pwt, by = c("country", "year"))
gas_pwt_left %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.05555556
## belgium 19 0.05555556
## canada 19 0.05555556
## denmark 19 0.05555556
## france 19 0.05555556
## germany 19 0.05555556
## greece 19 0.05555556
## ireland 19 0.05555556
## italy 19 0.05555556
## japan 19 0.05555556
## netherla 19 0.05555556
## norway 19 0.05555556
## spain 19 0.05555556
## sweden 19 0.05555556
## switzerl 19 0.05555556
## turkey 19 0.05555556
## u.k. 19 0.05555556
## u.s.a. 19 0.05555556
```
```
gas_pwt_right <- gasoline %>%
right_join(pwt, by = c("country", "year"))
gas_pwt_right %>%
tabyl(country) %>%
head()
```
```
## country n percent
## algeria 26 0.008
## angola 26 0.008
## argentina 26 0.008
## australia 26 0.008
## austria 26 0.008
## bangladesh 26 0.008
```
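As an aside, `right_join(x, y)` keeps the same rows as `left_join(y, x)`; only the column order (and
possibly the row order) differs. A sketch:
```
# these two contain the same information
gasoline %>% right_join(pwt, by = c("country", "year"))
pwt %>% left_join(gasoline, by = c("country", "year"))
```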
The last merge function is `anti_join()`:
```
gas_pwt_anti <- gasoline %>%
anti_join(pwt, by = c("country", "year"))
glimpse(gas_pwt_anti)
```
```
## Rows: 57
## Columns: 6
## $ country <chr> "germany", "germany", "germany", "germany", "germany", "germa…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 3.916953, 3.885345, 3.871484, 3.848782, 3.868993, 3.861049, 3…
## $ lincomep <dbl> -6.159837, -6.120923, -6.094258, -6.068361, -6.013442, -5.966…
## $ lrpmg <dbl> -0.1859108, -0.2309538, -0.3438417, -0.3746467, -0.3996526, -…
## $ lcarpcap <dbl> -9.342481, -9.183841, -9.037280, -8.913630, -8.811013, -8.711…
```
```
gas_pwt_anti %>%
tabyl(country)
```
```
## country n percent
## germany 19 0.3333333
## netherla 19 0.3333333
## switzerl 19 0.3333333
```
`gas_pwt_anti` has the columns of the `gasoline` dataset, as well as the countries from `gasoline`
that are not matched in `pwt`: “germany”, “netherla” and “switzerl”.
That was it for the basic `{dplyr}` verbs. Next, we’re going to learn about `{tidyr}`.
### 4\.3\.1 A first taste of data manipulation with `{dplyr}`
This section will walk you through a typical analysis using `{dplyr}` funcitons. Just go with it; I
will give more details in the next sections.
First, let’s load `{dplyr}` and the included `starwars` dataset. Let’s also take a look at the
first 5 lines of the dataset:
```
library(dplyr)
data(starwars)
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
`data(starwars)` loads the example dataset called `starwars` that is included in the package
`{dplyr}`. As I said earlier, this is just an example; you could have loaded an external dataset,
from a `.csv` file for instance. This does not matter for what comes next.
Like we saw earlier, R includes a lot of functions for descriptive statistics, such as `mean()`,
`sd()`, `cov()`, and many more. What `{dplyr}` brings to the table is a grammar of data
manipulation that makes it very easy to apply descriptive statistics functions, or any other,
very easily.
Just like before, we are going to compute the average height by `sex`:
```
starwars %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE))
```
```
## # A tibble: 5 × 2
## sex mean_height
## <chr> <dbl>
## 1 female 169.
## 2 hermaphroditic 175
## 3 male 179.
## 4 none 131.
## 5 <NA> 181.
```
The very nice thing about using `%>%` and `{dplyr}` verbs/functions, is that this is really
readable. The above three lines can be translated like so in English:
*Take the starwars dataset, then group by sex, then compute the mean height (for each subgroup) by
omitting missing values.*
`%>%` can be translated by “then”. Without `%>%` you would need to change the code to:
```
summarise(group_by(starwars, sex), mean(height, na.rm = TRUE))
```
```
## # A tibble: 5 × 2
## sex `mean(height, na.rm = TRUE)`
## <chr> <dbl>
## 1 female 169.
## 2 hermaphroditic 175
## 3 male 179.
## 4 none 131.
## 5 <NA> 181.
```
Unlike with the *base* approach, each function does only one thing. With the base function
`aggregate()` was used to also define the subgroups. This is not the case with `{dplyr}`; one
function to create the groups (`group_by()`) and then one function to compute the summaries
(`summarise()`). Also, `group_by()` creates a specific subgroup for individuals where `sex` is
missing. This is the last line in the data frame, where `sex` is `NA`. Another nice thing is that
you can specify the column containing the average height. I chose to name it `mean_height`.
Now, let’s suppose that we want to filter some data first:
```
starwars %>%
filter(gender == "masculine") %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE))
```
```
## # A tibble: 3 × 2
## sex mean_height
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male 179.
## 3 none 140
```
Again, the `%>%` makes the above lines of code very easy to read. Without it, one would need to
write:
```
summarise(group_by(filter(starwars, gender == "masculine"), sex), mean(height, na.rm = TRUE))
```
```
## # A tibble: 3 × 2
## sex `mean(height, na.rm = TRUE)`
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male 179.
## 3 none 140
```
I think you agree with me that this is not very readable. One way to make it more readable would
be to save intermediary variables:
```
filtered_data <- filter(starwars, gender == "masculine")
grouped_data <- group_by(filter(starwars, gender == "masculine"), sex)
summarise(grouped_data, mean(height))
```
```
## # A tibble: 3 × 2
## sex `mean(height)`
## <chr> <dbl>
## 1 hermaphroditic 175
## 2 male NA
## 3 none NA
```
But this can get very tedious. Once you’re used to `%>%`, you won’t go back to not use it.
Before continuing and to make things clearer; `filter()`, `group_by()` and `summarise()` are
functions that are included in `{dplyr}`. `%>%` is actually a function from `{magrittr}`, but this
package gets loaded on the fly when you load `{dplyr}`, so you do not need to worry about it.
The result of all these operations that use `{dplyr}` functions are actually other datasets, or
`tibbles`. This means that you can save them in variable, or write them to disk, and then work with
these as any other datasets.
```
mean_height <- starwars %>%
group_by(sex) %>%
summarise(mean(height))
class(mean_height)
```
```
## [1] "tbl_df" "tbl" "data.frame"
```
```
head(mean_height)
```
```
## # A tibble: 5 × 2
## sex `mean(height)`
## <chr> <dbl>
## 1 female NA
## 2 hermaphroditic 175
## 3 male NA
## 4 none NA
## 5 <NA> NA
```
You could then write this data to disk using `rio::export()` for instance. If you need more than
the mean of the height, you can keep adding as many functions as needed (another advantage over
`aggregate()`:
```
summary_table <- starwars %>%
group_by(sex) %>%
summarise(mean_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n())
summary_table
```
```
## # A tibble: 5 × 4
## sex mean_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 hermaphroditic 175 NA 1
## 3 male 179. 1297. 60
## 4 none 131. 2416. 6
## 5 <NA> 181. 8.33 4
```
I’ve added more functions, namely `var()`, to get the variance of height, and `n()`, which
is a function from `{dplyr}`, not base R, to get the number of observations. This is quite useful,
because we see that there is a group with only one individual. Let’s focus on the
sexes for which we have more than 1 individual. Since we save all the previous operations (which
produce a `tibble`) in a variable, we can keep going from there:
```
summary_table2 <- summary_table %>%
filter(n_obs > 1)
summary_table2
```
```
## # A tibble: 4 × 4
## sex mean_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 male 179. 1297. 60
## 3 none 131. 2416. 6
## 4 <NA> 181. 8.33 4
```
As mentioned before, there’s a lot of `NA`s; this is because by default, `mean()` and `var()`
return `NA` if even one single observation is `NA`. This is good, because it forces you to look at
the data to see what is going on. If you would get a number, even if there were `NA`s you could
very easily miss these missing values. It is better for functions to fail early and often than the
opposite. This is way we keep using `na.rm = TRUE` for `mean()` and `var()`.
Now let’s actually take a look at the rows where `sex` is `NA`:
```
starwars %>%
filter(is.na(sex))
```
```
## # A tibble: 4 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Ric Olié 183 NA brown fair blue NA <NA> <NA> Naboo
## 2 Quarsh Pana… 183 NA black dark brown 62 <NA> <NA> Naboo
## 3 Sly Moore 178 48 none pale white NA <NA> <NA> Umbara
## 4 Captain Pha… NA NA unknown unknown unknown NA <NA> <NA> <NA>
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
There’s only 4 rows where `sex` is `NA`. Let’s ignore them:
```
starwars %>%
filter(!is.na(sex)) %>%
group_by(sex) %>%
summarise(ave_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n()) %>%
filter(n_obs > 1)
```
```
## # A tibble: 3 × 4
## sex ave_height var_height n_obs
## <chr> <dbl> <dbl> <int>
## 1 female 169. 235. 16
## 2 male 179. 1297. 60
## 3 none 131. 2416. 6
```
And why not compute the same table, but first add another stratifying variable?
```
starwars %>%
filter(!is.na(sex)) %>%
group_by(sex, eye_color) %>%
summarise(ave_height = mean(height, na.rm = TRUE),
var_height = var(height, na.rm = TRUE),
n_obs = n()) %>%
filter(n_obs > 1)
```
```
## `summarise()` has grouped output by 'sex'. You can override using the `.groups`
## argument.
```
```
## # A tibble: 12 × 5
## # Groups: sex [3]
## sex eye_color ave_height var_height n_obs
## <chr> <chr> <dbl> <dbl> <int>
## 1 female black 196. 612. 2
## 2 female blue 167 118. 6
## 3 female brown 160 42 5
## 4 female hazel 178 NA 2
## 5 male black 182 1197 7
## 6 male blue 190. 434. 12
## 7 male brown 167. 1663. 15
## 8 male orange 181. 1306. 7
## 9 male red 190. 0.5 2
## 10 male unknown 136 6498 2
## 11 male yellow 180. 2196. 9
## 12 none red 131 3571 3
```
Ok, that’s it for a first taste. We have already discovered some very useful `{dplyr}` functions,
`filter()`, `group_by()` and summarise `summarise()`.
Now, we are going to learn more about these functions in more detail.
### 4\.3\.2 Filter the rows of a dataset with `filter()`
We’re going to use the `Gasoline` dataset from the `plm` package, so install that first:
```
install.packages("plm")
```
Then load the required data:
```
data(Gasoline, package = "plm")
```
and load dplyr:
```
library(dplyr)
```
This dataset gives the consumption of gasoline for 18 countries from 1960 to 1978\. When you load
the data like this, it is a standard `data.frame`. `{dplyr}` functions can be used on standard
`data.frame` objects, but also on `tibble`s. `tibble`s are just like data frame, but with a better
print method (and other niceties). I’ll discuss the `{tibble}` package later, but for now, let’s
convert the data to a `tibble` and change its name, and also transform the `country` column to
lower case:
```
gasoline <- as_tibble(Gasoline)
gasoline <- gasoline %>%
mutate(country = tolower(country))
```
`filter()` is pretty straightforward. What if you would like to subset the data to focus on the
year 1969? Simple:
```
filter(gasoline, year == 1969)
```
```
## # A tibble: 18 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 belgium 1969 3.85 -5.86 -0.355 -8.52
## 3 canada 1969 4.86 -5.56 -1.04 -8.10
## 4 denmark 1969 4.17 -5.72 -0.407 -8.47
## 5 france 1969 3.77 -5.84 -0.315 -8.37
## 6 germany 1969 3.90 -5.83 -0.589 -8.44
## 7 greece 1969 4.89 -6.59 -0.180 -10.7
## 8 ireland 1969 4.21 -6.38 -0.272 -8.95
## 9 italy 1969 3.74 -6.28 -0.248 -8.67
## 10 japan 1969 4.52 -6.16 -0.417 -9.61
## 11 netherla 1969 3.99 -5.88 -0.417 -8.63
## 12 norway 1969 4.09 -5.74 -0.338 -8.69
## 13 spain 1969 3.99 -5.60 0.669 -9.72
## 14 sweden 1969 3.99 -7.77 -2.73 -8.20
## 15 switzerl 1969 4.21 -5.91 -0.918 -8.47
## 16 turkey 1969 5.72 -7.39 -0.298 -12.5
## 17 u.k. 1969 3.95 -6.03 -0.383 -8.47
## 18 u.s.a. 1969 4.84 -5.41 -1.22 -7.79
```
Let’s use `%>%`, since we’re familiar with it now:
```
gasoline %>%
filter(year == 1969)
```
```
## # A tibble: 18 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 belgium 1969 3.85 -5.86 -0.355 -8.52
## 3 canada 1969 4.86 -5.56 -1.04 -8.10
## 4 denmark 1969 4.17 -5.72 -0.407 -8.47
## 5 france 1969 3.77 -5.84 -0.315 -8.37
## 6 germany 1969 3.90 -5.83 -0.589 -8.44
## 7 greece 1969 4.89 -6.59 -0.180 -10.7
## 8 ireland 1969 4.21 -6.38 -0.272 -8.95
## 9 italy 1969 3.74 -6.28 -0.248 -8.67
## 10 japan 1969 4.52 -6.16 -0.417 -9.61
## 11 netherla 1969 3.99 -5.88 -0.417 -8.63
## 12 norway 1969 4.09 -5.74 -0.338 -8.69
## 13 spain 1969 3.99 -5.60 0.669 -9.72
## 14 sweden 1969 3.99 -7.77 -2.73 -8.20
## 15 switzerl 1969 4.21 -5.91 -0.918 -8.47
## 16 turkey 1969 5.72 -7.39 -0.298 -12.5
## 17 u.k. 1969 3.95 -6.03 -0.383 -8.47
## 18 u.s.a. 1969 4.84 -5.41 -1.22 -7.79
```
You can also filter more than just one year, by using the `%in%` operator:
```
gasoline %>%
filter(year %in% seq(1969, 1973))
```
```
## # A tibble: 90 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1970 4.08 -6.08 -0.597 -8.73
## 3 austria 1971 4.11 -6.04 -0.654 -8.64
## 4 austria 1972 4.13 -5.98 -0.596 -8.54
## 5 austria 1973 4.20 -5.90 -0.594 -8.49
## 6 belgium 1969 3.85 -5.86 -0.355 -8.52
## 7 belgium 1970 3.87 -5.80 -0.378 -8.45
## 8 belgium 1971 3.87 -5.76 -0.399 -8.41
## 9 belgium 1972 3.91 -5.71 -0.311 -8.36
## 10 belgium 1973 3.90 -5.64 -0.373 -8.31
## # … with 80 more rows
```
It is also possible use `between()`, a helper function:
```
gasoline %>%
filter(between(year, 1969, 1973))
```
```
## # A tibble: 90 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1970 4.08 -6.08 -0.597 -8.73
## 3 austria 1971 4.11 -6.04 -0.654 -8.64
## 4 austria 1972 4.13 -5.98 -0.596 -8.54
## 5 austria 1973 4.20 -5.90 -0.594 -8.49
## 6 belgium 1969 3.85 -5.86 -0.355 -8.52
## 7 belgium 1970 3.87 -5.80 -0.378 -8.45
## 8 belgium 1971 3.87 -5.76 -0.399 -8.41
## 9 belgium 1972 3.91 -5.71 -0.311 -8.36
## 10 belgium 1973 3.90 -5.64 -0.373 -8.31
## # … with 80 more rows
```
To select non\-consecutive years:
```
gasoline %>%
filter(year %in% c(1969, 1973, 1977))
```
```
## # A tibble: 54 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1969 4.05 -6.15 -0.559 -8.79
## 2 austria 1973 4.20 -5.90 -0.594 -8.49
## 3 austria 1977 3.93 -5.83 -0.422 -8.25
## 4 belgium 1969 3.85 -5.86 -0.355 -8.52
## 5 belgium 1973 3.90 -5.64 -0.373 -8.31
## 6 belgium 1977 3.85 -5.56 -0.432 -8.14
## 7 canada 1969 4.86 -5.56 -1.04 -8.10
## 8 canada 1973 4.90 -5.41 -1.13 -7.94
## 9 canada 1977 4.81 -5.34 -1.07 -7.77
## 10 denmark 1969 4.17 -5.72 -0.407 -8.47
## # … with 44 more rows
```
`%in%` tests if an object is part of a set.
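Here is a minimal sketch of `%in%` on its own, on a throwaway vector:
```
c(1969, 1971, 1977) %in% c(1969, 1973, 1977)
```
```
## [1]  TRUE FALSE  TRUE
```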
### 4\.3\.3 Select columns with `select()`
While `filter()` allows you to keep or discard rows of data, `select()` allows you to keep or
discard entire columns. To keep columns:
```
gasoline %>%
select(country, year, lrpmg)
```
```
## # A tibble: 342 × 3
## country year lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
To discard them:
```
gasoline %>%
select(-country, -year, -lrpmg)
```
```
## # A tibble: 342 × 3
## lgaspcar lincomep lcarpcap
## <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -9.77
## 2 4.10 -6.43 -9.61
## 3 4.07 -6.41 -9.46
## 4 4.06 -6.37 -9.34
## 5 4.04 -6.32 -9.24
## 6 4.03 -6.29 -9.12
## 7 4.05 -6.25 -9.02
## 8 4.05 -6.23 -8.93
## 9 4.05 -6.21 -8.85
## 10 4.05 -6.15 -8.79
## # … with 332 more rows
```
To rename them:
```
gasoline %>%
select(country, date = year, lrpmg)
```
```
## # A tibble: 342 × 3
## country date lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
There’s also `rename()`:
```
gasoline %>%
rename(date = year)
```
```
## # A tibble: 342 × 6
## country date lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
`rename()` does not do any kind of selection, but just renames.
You can also use `select()` to re\-order columns:
```
gasoline %>%
select(year, country, lrpmg, everything())
```
```
## # A tibble: 342 × 6
## year country lrpmg lgaspcar lincomep lcarpcap
## <int> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 1960 austria -0.335 4.17 -6.47 -9.77
## 2 1961 austria -0.351 4.10 -6.43 -9.61
## 3 1962 austria -0.380 4.07 -6.41 -9.46
## 4 1963 austria -0.414 4.06 -6.37 -9.34
## 5 1964 austria -0.445 4.04 -6.32 -9.24
## 6 1965 austria -0.497 4.03 -6.29 -9.12
## 7 1966 austria -0.467 4.05 -6.25 -9.02
## 8 1967 austria -0.506 4.05 -6.23 -8.93
## 9 1968 austria -0.522 4.05 -6.21 -8.85
## 10 1969 austria -0.559 4.05 -6.15 -8.79
## # … with 332 more rows
```
`everything()` is a helper function, and there are also `starts_with()` and `ends_with()`. For
example, what if we are only interested in columns whose names start with “l”?
```
gasoline %>%
select(starts_with("l"))
```
```
## # A tibble: 342 × 4
## lgaspcar lincomep lrpmg lcarpcap
## <dbl> <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -0.335 -9.77
## 2 4.10 -6.43 -0.351 -9.61
## 3 4.07 -6.41 -0.380 -9.46
## 4 4.06 -6.37 -0.414 -9.34
## 5 4.04 -6.32 -0.445 -9.24
## 6 4.03 -6.29 -0.497 -9.12
## 7 4.05 -6.25 -0.467 -9.02
## 8 4.05 -6.23 -0.506 -8.93
## 9 4.05 -6.21 -0.522 -8.85
## 10 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
`ends_with()` works in a similar fashion. There is also `contains()`:
```
gasoline %>%
select(country, year, contains("car"))
```
```
## # A tibble: 342 × 4
## country year lgaspcar lcarpcap
## <chr> <int> <dbl> <dbl>
## 1 austria 1960 4.17 -9.77
## 2 austria 1961 4.10 -9.61
## 3 austria 1962 4.07 -9.46
## 4 austria 1963 4.06 -9.34
## 5 austria 1964 4.04 -9.24
## 6 austria 1965 4.03 -9.12
## 7 austria 1966 4.05 -9.02
## 8 austria 1967 4.05 -8.93
## 9 austria 1968 4.05 -8.85
## 10 austria 1969 4.05 -8.79
## # … with 332 more rows
```
You can read more about these helper functions [here](https://tidyselect.r-lib.org/reference/language.html), but we’re going to look more into
them in a coming section.
Another verb, similar to `select()`, is `pull()`. Let’s compare the two:
```
gasoline %>%
select(lrpmg)
```
```
## # A tibble: 342 × 1
## lrpmg
## <dbl>
## 1 -0.335
## 2 -0.351
## 3 -0.380
## 4 -0.414
## 5 -0.445
## 6 -0.497
## 7 -0.467
## 8 -0.506
## 9 -0.522
## 10 -0.559
## # … with 332 more rows
```
```
gasoline %>%
pull(lrpmg) %>%
  head() # using head() because there are 342 elements in total
```
```
## [1] -0.3345476 -0.3513276 -0.3795177 -0.4142514 -0.4453354 -0.4970607
```
`pull()`, unlike `select()`, does not return a `tibble`, but only the column you want, as a
vector.
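A quick way to convince yourself of this is to compare the classes of both results (a minimal
sketch):
```
gasoline %>%
  select(lrpmg) %>%
  class()
```
```
## [1] "tbl_df"     "tbl"        "data.frame"
```
```
gasoline %>%
  pull(lrpmg) %>%
  class()
```
```
## [1] "numeric"
```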
### 4\.3\.4 Group the observations of your dataset with `group_by()`
`group_by()` is a very useful verb; as the name implies, it allows you to create groups and then,
for example, compute descriptive statistics by groups. For example, let’s group our data by
country:
```
gasoline %>%
group_by(country)
```
```
## # A tibble: 342 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
It looks like nothing much happened, but if you look at the second line of the output you can read
the following:
```
## # Groups: country [18]
```
this means that the data is grouped, and every computation you will do now will take these groups
into account. It is also possible to group by more than one variable:
```
gasoline %>%
group_by(country, year)
```
```
## # A tibble: 342 × 6
## # Groups: country, year [342]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
and so on. You can then also ungroup:
```
gasoline %>%
group_by(country, year) %>%
ungroup()
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Once your data is grouped, the operations that will follow will be executed inside each group.
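To make this concrete, here is a minimal sketch contrasting an ungrouped and a grouped `mutate()`
(outputs not shown): the first call computes a single overall mean and recycles it on every row,
while the second computes one mean per country.
```
# one global mean, the same value on every row
gasoline %>%
  mutate(mean_lgaspcar = mean(lgaspcar))

# one mean per country, recycled within each group
gasoline %>%
  group_by(country) %>%
  mutate(mean_lgaspcar = mean(lgaspcar))
```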
### 4\.3\.5 Get summary statistics with `summarise()`
Ok, now that we have learned the basic verbs, we can start to do more interesting stuff. For
example, one might want to compute the average gasoline consumption in each country, for
the whole period:
```
gasoline %>%
group_by(country) %>%
summarise(mean(lgaspcar))
```
```
## # A tibble: 18 × 2
## country `mean(lgaspcar)`
## <chr> <dbl>
## 1 austria 4.06
## 2 belgium 3.92
## 3 canada 4.86
## 4 denmark 4.19
## 5 france 3.82
## 6 germany 3.89
## 7 greece 4.88
## 8 ireland 4.23
## 9 italy 3.73
## 10 japan 4.70
## 11 netherla 4.08
## 12 norway 4.11
## 13 spain 4.06
## 14 sweden 4.01
## 15 switzerl 4.24
## 16 turkey 5.77
## 17 u.k. 3.98
## 18 u.s.a. 4.82
```
`mean()` was given as an argument to `summarise()`, which is a `{dplyr}` verb. What we get is
another `tibble` that contains the variable we used to group, as well as the average per country.
We can also rename this column:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar))
```
```
## # A tibble: 18 × 2
## country mean_gaspcar
## <chr> <dbl>
## 1 austria 4.06
## 2 belgium 3.92
## 3 canada 4.86
## 4 denmark 4.19
## 5 france 3.82
## 6 germany 3.89
## 7 greece 4.88
## 8 ireland 4.23
## 9 italy 3.73
## 10 japan 4.70
## 11 netherla 4.08
## 12 norway 4.11
## 13 spain 4.06
## 14 sweden 4.01
## 15 switzerl 4.24
## 16 turkey 5.77
## 17 u.k. 3.98
## 18 u.s.a. 4.82
```
and because the output is a `tibble`, we can continue to use `{dplyr}` verbs on it:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar)) %>%
filter(country == "france")
```
```
## # A tibble: 1 × 2
## country mean_gaspcar
## <chr> <dbl>
## 1 france 3.82
```
`summarise()` is a very useful verb. For example, we can compute several descriptive statistics at once:
```
gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar),
sd_gaspcar = sd(lgaspcar),
max_gaspcar = max(lgaspcar),
min_gaspcar = min(lgaspcar))
```
```
## # A tibble: 18 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92
## 2 belgium 3.92 0.103 4.16 3.82
## 3 canada 4.86 0.0262 4.90 4.81
## 4 denmark 4.19 0.158 4.50 4.00
## 5 france 3.82 0.0499 3.91 3.75
## 6 germany 3.89 0.0239 3.93 3.85
## 7 greece 4.88 0.255 5.38 4.48
## 8 ireland 4.23 0.0437 4.33 4.16
## 9 italy 3.73 0.220 4.05 3.38
## 10 japan 4.70 0.684 6.00 3.95
## 11 netherla 4.08 0.286 4.65 3.71
## 12 norway 4.11 0.123 4.44 3.96
## 13 spain 4.06 0.317 4.75 3.62
## 14 sweden 4.01 0.0364 4.07 3.91
## 15 switzerl 4.24 0.102 4.44 4.05
## 16 turkey 5.77 0.329 6.16 5.14
## 17 u.k. 3.98 0.0479 4.10 3.91
## 18 u.s.a. 4.82 0.0219 4.86 4.79
```
Because the output is a `tibble`, you can save it in a variable of course:
```
desc_gasoline <- gasoline %>%
group_by(country) %>%
summarise(mean_gaspcar = mean(lgaspcar),
sd_gaspcar = sd(lgaspcar),
max_gaspcar = max(lgaspcar),
min_gaspcar = min(lgaspcar))
```
And then you can answer questions such as, *which country has the maximum average gasoline
consumption?*:
```
desc_gasoline %>%
filter(max(mean_gaspcar) == mean_gaspcar)
```
```
## # A tibble: 1 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 turkey 5.77 0.329 6.16 5.14
```
Turns out it’s Turkey. What about the minimum consumption?
```
desc_gasoline %>%
filter(min(mean_gaspcar) == mean_gaspcar)
```
```
## # A tibble: 1 × 5
## country mean_gaspcar sd_gaspcar max_gaspcar min_gaspcar
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 italy 3.73 0.220 4.05 3.38
```
Because the output of `{dplyr}` verbs is a tibble, it is possible to continue working with it.
Contrast this with one shortcoming of the base `summary()` function: the object it returns is
not very easy to manipulate.
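You can check this for yourself by looking at the class of what `summary()` returns (a minimal
sketch):
```
class(summary(gasoline$lgaspcar))
```
```
## [1] "summaryDefault" "table"
```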
### 4\.3\.6 Adding columns with `mutate()` and `transmute()`
`mutate()` adds a column to the `tibble`, which can contain any transformation of any other
variable:
```
gasoline %>%
group_by(country) %>%
mutate(n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap `n()`
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
Using `mutate()` I’ve added a column that counts how many times the country appears in the `tibble`,
using `n()`, another `{dplyr}` function. There’s also `count()` and `tally()`, which we are going to
see further down. It is also possible to rename the column on the fly:
```
gasoline %>%
group_by(country) %>%
mutate(count = n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap count
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
It is possible to do any arbitrary operation:
```
gasoline %>%
group_by(country) %>%
mutate(spam = exp(lgaspcar + lincomep))
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap spam
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 0.100
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 0.0978
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 0.0969
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 0.0991
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 0.102
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 0.104
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 0.110
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 0.113
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 0.115
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 0.122
## # … with 332 more rows
```
`transmute()` is the same as `mutate()`, but only returns the created variable (plus any grouping variables):
```
gasoline %>%
group_by(country) %>%
transmute(spam = exp(lgaspcar + lincomep))
```
```
## # A tibble: 342 × 2
## # Groups: country [18]
## country spam
## <chr> <dbl>
## 1 austria 0.100
## 2 austria 0.0978
## 3 austria 0.0969
## 4 austria 0.0991
## 5 austria 0.102
## 6 austria 0.104
## 7 austria 0.110
## 8 austria 0.113
## 9 austria 0.115
## 10 austria 0.122
## # … with 332 more rows
```
### 4\.3\.7 Joining `tibble`s with `full_join()`, `left_join()`, `right_join()` and all the others
I will end this section on `{dplyr}` with some very useful verbs: the `*_join()` verbs. Let’s
start by loading another dataset from the `{plm}` package, `SumHes`, and converting it to a
`tibble` and renaming it:
```
data(SumHes, package = "plm")
pwt <- SumHes %>%
as_tibble() %>%
mutate(country = tolower(country))
```
Let’s take a quick look at the data:
```
glimpse(pwt)
```
```
## Rows: 3,250
## Columns: 7
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 19…
## $ country <chr> "algeria", "algeria", "algeria", "algeria", "algeria", "algeri…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ pop <int> 10800, 11016, 11236, 11460, 11690, 11923, 12267, 12622, 12986,…
## $ gdp <int> 1723, 1599, 1275, 1517, 1589, 1584, 1548, 1600, 1758, 1835, 18…
## $ sr <dbl> 19.9, 21.1, 15.0, 13.9, 10.6, 11.0, 8.3, 11.3, 15.1, 18.2, 19.…
```
We can merge both `gasoline` and `pwt` by country and year, as these two variables are common to
both datasets. There are more countries and years in the `pwt` dataset, so when merging both, and
depending on which function you use, you will either have `NA`’s for the variables where there is
no match, or rows that will be dropped. Let’s start with `full_join`:
```
gas_pwt_full <- gasoline %>%
full_join(pwt, by = c("country", "year"))
```
Let’s see which countries and years are included:
```
gas_pwt_full %>%
count(country, year)
```
```
## # A tibble: 3,307 × 3
## country year n
## <chr> <int> <int>
## 1 algeria 1960 1
## 2 algeria 1961 1
## 3 algeria 1962 1
## 4 algeria 1963 1
## 5 algeria 1964 1
## 6 algeria 1965 1
## 7 algeria 1966 1
## 8 algeria 1967 1
## 9 algeria 1968 1
## 10 algeria 1969 1
## # … with 3,297 more rows
```
As you see, every country and year was included, but what happened to, say, the U.S.S.R.? This country
is in `pwt` but not in `gasoline` at all:
```
gas_pwt_full %>%
filter(country == "u.s.s.r.")
```
```
## # A tibble: 26 × 11
## country year lgaspcar lincomep lrpmg lcarp…¹ opec com pop gdp sr
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <fct> <fct> <int> <int> <dbl>
## 1 u.s.s.r. 1960 NA NA NA NA no yes 214400 2397 37.9
## 2 u.s.s.r. 1961 NA NA NA NA no yes 217896 2542 39.4
## 3 u.s.s.r. 1962 NA NA NA NA no yes 221449 2656 38.4
## 4 u.s.s.r. 1963 NA NA NA NA no yes 225060 2681 38.4
## 5 u.s.s.r. 1964 NA NA NA NA no yes 227571 2854 39.5
## 6 u.s.s.r. 1965 NA NA NA NA no yes 230109 3049 39.9
## 7 u.s.s.r. 1966 NA NA NA NA no yes 232676 3247 39.9
## 8 u.s.s.r. 1967 NA NA NA NA no yes 235272 3454 40.2
## 9 u.s.s.r. 1968 NA NA NA NA no yes 237896 3730 40.6
## 10 u.s.s.r. 1969 NA NA NA NA no yes 240550 3808 37.9
## # … with 16 more rows, and abbreviated variable name ¹lcarpcap
```
As you probably guessed, the variables from `gasoline` are filled with `NA`s for the countries that
only appear in `pwt`. One could remove all these lines and only keep countries for which these
variables are not `NA` everywhere with `filter()`, but there is a simpler solution:
```
gas_pwt_inner <- gasoline %>%
inner_join(pwt, by = c("country", "year"))
```
Let’s use `tabyl()` from the `{janitor}` package, which is a very nice alternative to the `table()`
function from base R:
```
library(janitor)
gas_pwt_inner %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
Only countries with values in both datasets were returned: 15 of the 18 countries from `gasoline`.
Germany is missing because it is called “germany west” in `pwt` but “germany” in `gasoline` (I left
it as is to provide an example of a country not matched in `pwt`), and “netherla” and “switzerl”
are truncated names in `gasoline` that do not match “netherlands” and “switzerland” in `pwt`.
Let’s also look at the variables:
```
glimpse(gas_pwt_inner)
```
```
## Rows: 285
## Columns: 11
## $ country <chr> "austria", "austria", "austria", "austria", "austria", "austr…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 4.173244, 4.100989, 4.073177, 4.059509, 4.037689, 4.033983, 4…
## $ lincomep <dbl> -6.474277, -6.426006, -6.407308, -6.370679, -6.322247, -6.294…
## $ lrpmg <dbl> -0.3345476, -0.3513276, -0.3795177, -0.4142514, -0.4453354, -…
## $ lcarpcap <dbl> -9.766840, -9.608622, -9.457257, -9.343155, -9.237739, -9.123…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ pop <int> 7048, 7087, 7130, 7172, 7215, 7255, 7308, 7338, 7362, 7384, 7…
## $ gdp <int> 5143, 5388, 5481, 5688, 5978, 6144, 6437, 6596, 6847, 7162, 7…
## $ sr <dbl> 24.3, 24.5, 23.3, 22.9, 25.2, 25.2, 26.7, 25.6, 25.7, 26.1, 2…
```
The variables from both datasets are in the joined data.
Contrast this to `semi_join()`:
```
gas_pwt_semi <- gasoline %>%
semi_join(pwt, by = c("country", "year"))
glimpse(gas_pwt_semi)
```
```
## Rows: 285
## Columns: 6
## $ country <chr> "austria", "austria", "austria", "austria", "austria", "austr…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 4.173244, 4.100989, 4.073177, 4.059509, 4.037689, 4.033983, 4…
## $ lincomep <dbl> -6.474277, -6.426006, -6.407308, -6.370679, -6.322247, -6.294…
## $ lrpmg <dbl> -0.3345476, -0.3513276, -0.3795177, -0.4142514, -0.4453354, -…
## $ lcarpcap <dbl> -9.766840, -9.608622, -9.457257, -9.343155, -9.237739, -9.123…
```
```
gas_pwt_semi %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
Only columns of `gasoline` are returned, and only rows of `gasoline` that were matched with rows
from `pwt`. `semi_join()` is not a commutative operation:
```
pwt_gas_semi <- pwt %>%
semi_join(gasoline, by = c("country", "year"))
glimpse(pwt_gas_semi)
```
```
## Rows: 285
## Columns: 7
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 19…
## $ country <chr> "canada", "canada", "canada", "canada", "canada", "canada", "c…
## $ opec <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ com <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no…
## $ pop <int> 17910, 18270, 18614, 18963, 19326, 19678, 20049, 20411, 20744,…
## $ gdp <int> 7258, 7261, 7605, 7876, 8244, 8664, 9093, 9231, 9582, 9975, 10…
## $ sr <dbl> 22.7, 21.5, 22.1, 21.9, 22.9, 24.8, 25.4, 23.1, 22.6, 23.4, 21…
```
```
pwt_gas_semi %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.06666667
## belgium 19 0.06666667
## canada 19 0.06666667
## denmark 19 0.06666667
## france 19 0.06666667
## greece 19 0.06666667
## ireland 19 0.06666667
## italy 19 0.06666667
## japan 19 0.06666667
## norway 19 0.06666667
## spain 19 0.06666667
## sweden 19 0.06666667
## turkey 19 0.06666667
## u.k. 19 0.06666667
## u.s.a. 19 0.06666667
```
The rows are the same, but not the columns.
`left_join()` and `right_join()` return all the rows from either the dataset that is on the
“left” (the first argument of the function) or on the “right” (the second argument of the
function), and all columns from both datasets. So depending on which countries you’re interested in,
you’re going to use either one of these functions:
```
gas_pwt_left <- gasoline %>%
left_join(pwt, by = c("country", "year"))
gas_pwt_left %>%
tabyl(country)
```
```
## country n percent
## austria 19 0.05555556
## belgium 19 0.05555556
## canada 19 0.05555556
## denmark 19 0.05555556
## france 19 0.05555556
## germany 19 0.05555556
## greece 19 0.05555556
## ireland 19 0.05555556
## italy 19 0.05555556
## japan 19 0.05555556
## netherla 19 0.05555556
## norway 19 0.05555556
## spain 19 0.05555556
## sweden 19 0.05555556
## switzerl 19 0.05555556
## turkey 19 0.05555556
## u.k. 19 0.05555556
## u.s.a. 19 0.05555556
```
```
gas_pwt_right <- gasoline %>%
right_join(pwt, by = c("country", "year"))
gas_pwt_right %>%
tabyl(country) %>%
head()
```
```
## country n percent
## algeria 26 0.008
## angola 26 0.008
## argentina 26 0.008
## australia 26 0.008
## austria 26 0.008
## bangladesh 26 0.008
```
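Note that a `right_join()` can always be rewritten as a `left_join()` with the two datasets
swapped; the two calls below contain the same information, up to the ordering of rows and columns
(a sketch, outputs not shown):
```
gasoline %>%
  right_join(pwt, by = c("country", "year"))

pwt %>%
  left_join(gasoline, by = c("country", "year"))
```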
The last merge function is `anti_join()`:
```
gas_pwt_anti <- gasoline %>%
anti_join(pwt, by = c("country", "year"))
glimpse(gas_pwt_anti)
```
```
## Rows: 57
## Columns: 6
## $ country <chr> "germany", "germany", "germany", "germany", "germany", "germa…
## $ year <int> 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1…
## $ lgaspcar <dbl> 3.916953, 3.885345, 3.871484, 3.848782, 3.868993, 3.861049, 3…
## $ lincomep <dbl> -6.159837, -6.120923, -6.094258, -6.068361, -6.013442, -5.966…
## $ lrpmg <dbl> -0.1859108, -0.2309538, -0.3438417, -0.3746467, -0.3996526, -…
## $ lcarpcap <dbl> -9.342481, -9.183841, -9.037280, -8.913630, -8.811013, -8.711…
```
```
gas_pwt_anti %>%
tabyl(country)
```
```
## country n percent
## germany 19 0.3333333
## netherla 19 0.3333333
## switzerl 19 0.3333333
```
`gas_pwt_anti` has the columns of the `gasoline` dataset, and only the rows of `gasoline` that were
not matched in `pwt`: the observations for “germany”, “netherla” and “switzerl”.
That was it for the basic `{dplyr}` verbs. Next, we’re going to learn about `{tidyr}`.
4\.4 Reshaping and sprucing up data with `{tidyr}`
--------------------------------------------------
Note: this section is going to be a lot harder than anything you’ve seen until now. Reshaping
data is tricky, and to really grok it, you need time, and you need to run each line, and see what
happens. Take your time, and don’t be discouraged.
Another important package from the `{tidyverse}` that goes hand in hand with `{dplyr}` is `{tidyr}`.
`{tidyr}` is the package you need when it’s time to reshape data.
I will start by presenting `pivot_wider()` and `pivot_longer()`.
### 4\.4\.1 `pivot_wider()` and `pivot_longer()`
Let’s first create a fake dataset:
```
library(tidyr)
```
```
survey_data <- tribble(
~id, ~variable, ~value,
1, "var1", 1,
1, "var2", 0.2,
NA, "var3", 0.3,
2, "var1", 1.4,
2, "var2", 1.9,
2, "var3", 4.1,
3, "var1", 0.1,
3, "var2", 2.8,
3, "var3", 8.9,
4, "var1", 1.7,
NA, "var2", 1.9,
4, "var3", 7.6
)
head(survey_data)
```
```
## # A tibble: 6 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
```
I used the `tribble()` function from the `{tibble}` package to create this fake dataset.
I’ll discuss this package later; for now, let’s focus on `{tidyr}`.
Let’s suppose that we need the data to be in the wide format, which means `var1`, `var2` and `var3`
need to be their own columns. To do this, we need to use the `pivot_wider()` function. Why *wide*?
Because the dataset will be wide, meaning it will have more columns than rows.
```
survey_data %>%
pivot_wider(id_cols = id,
names_from = variable,
values_from = value)
```
```
## # A tibble: 5 × 4
## id var1 var2 var3
## <dbl> <dbl> <dbl> <dbl>
## 1 1 1 0.2 NA
## 2 NA NA 1.9 0.3
## 3 2 1.4 1.9 4.1
## 4 3 0.1 2.8 8.9
## 5 4 1.7 NA 7.6
```
Let’s go through `pivot_wider()`’s arguments: the first is `id_cols =`, which requires the variable
that uniquely identifies the rows to be supplied. `names_from =` is where you input the variable that will
generate the names of the new columns. In our case, the `variable` column has three values: `var1`,
`var2` and `var3`, and these are now the names of the new columns. Finally, `values_from =` is where
you specify the column containing the values that will fill the data frame.
I find the argument names `names_from =` and `values_from =` quite explicit.
As you can see, there are some missing values. Let’s suppose that we know that these missing values
are true 0’s. `pivot_wider()` has an argument called `values_fill =` that makes it easy to replace
the missing values:
```
survey_data %>%
pivot_wider(id_cols = id,
names_from = variable,
values_from = value,
values_fill = list(value = 0))
```
```
## # A tibble: 5 × 4
## id var1 var2 var3
## <dbl> <dbl> <dbl> <dbl>
## 1 1 1 0.2 0
## 2 NA 0 1.9 0.3
## 3 2 1.4 1.9 4.1
## 4 3 0.1 2.8 8.9
## 5 4 1.7 0 7.6
```
A list of variables and their respective values to replace NA’s with must be supplied to `values_fill`.
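As an aside, depending on your version of `{tidyr}` (1.1.0 or later, if I’m not mistaken), you can
also pass a single scalar to `values_fill =` when there is only one value column, which is a bit
shorter (a sketch of the same operation):
```
survey_data %>%
  pivot_wider(id_cols = id,
              names_from = variable,
              values_from = value,
              values_fill = 0) # a scalar instead of a named list
```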
Let’s now use another dataset, which you can get from
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets/unemployment/all)
(downloaded from: [http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId\=12950\&IF\_Language\=eng\&MainTheme\=2\&FldrName\=3\&RFPath\=91](http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId=12950&IF_Language=eng&MainTheme=2&FldrName=3&RFPath=91)). This data set gives the unemployment rate for each Luxembourgish
canton from 2001 to 2015\. We will come back to this data later on to learn how to plot it. For now,
let’s use it to learn more about `{tidyr}`.
```
unemp_lux_data <- rio::import(
"https://raw.githubusercontent.com/b-rodrigues/modern_R/master/datasets/unemployment/all/unemployment_lux_all.csv"
)
head(unemp_lux_data)
```
```
## division year active_population of_which_non_wage_earners
## 1 Beaufort 2001 688 85
## 2 Beaufort 2002 742 85
## 3 Beaufort 2003 773 85
## 4 Beaufort 2004 828 80
## 5 Beaufort 2005 866 96
## 6 Beaufort 2006 893 87
## of_which_wage_earners total_employed_population unemployed
## 1 568 653 35
## 2 631 716 26
## 3 648 733 40
## 4 706 786 42
## 5 719 815 51
## 6 746 833 60
## unemployment_rate_in_percent
## 1 5.09
## 2 3.50
## 3 5.17
## 4 5.07
## 5 5.89
## 6 6.72
```
Now, let’s suppose that for our purposes, it would make more sense to have the data in a wide format,
where columns are “division times year” and the value is the unemployment rate. This can be easily done
by providing more columns to `names_from =`.
```
unemp_lux_data2 <- unemp_lux_data %>%
filter(year %in% seq(2013, 2017),
str_detect(division, ".*ange$"),
!str_detect(division, ".*Canton.*")) %>%
select(division, year, unemployment_rate_in_percent) %>%
rowid_to_column()
unemp_lux_data2 %>%
pivot_wider(names_from = c(division, year),
values_from = unemployment_rate_in_percent)
```
```
## # A tibble: 48 × 49
## rowid Bertr…¹ Bertr…² Bertr…³ Diffe…⁴ Diffe…⁵ Diffe…⁶ Dudel…⁷ Dudel…⁸ Dudel…⁹
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 5.69 NA NA NA NA NA NA NA NA
## 2 2 NA 5.65 NA NA NA NA NA NA NA
## 3 3 NA NA 5.35 NA NA NA NA NA NA
## 4 4 NA NA NA 13.2 NA NA NA NA NA
## 5 5 NA NA NA NA 12.6 NA NA NA NA
## 6 6 NA NA NA NA NA 11.4 NA NA NA
## 7 7 NA NA NA NA NA NA 9.35 NA NA
## 8 8 NA NA NA NA NA NA NA 9.37 NA
## 9 9 NA NA NA NA NA NA NA NA 8.53
## 10 10 NA NA NA NA NA NA NA NA NA
## # … with 38 more rows, 39 more variables: Frisange_2013 <dbl>,
## # Frisange_2014 <dbl>, Frisange_2015 <dbl>, Hesperange_2013 <dbl>,
## # Hesperange_2014 <dbl>, Hesperange_2015 <dbl>, Leudelange_2013 <dbl>,
## # Leudelange_2014 <dbl>, Leudelange_2015 <dbl>, Mondercange_2013 <dbl>,
## # Mondercange_2014 <dbl>, Mondercange_2015 <dbl>, Pétange_2013 <dbl>,
## # Pétange_2014 <dbl>, Pétange_2015 <dbl>, Rumelange_2013 <dbl>,
## # Rumelange_2014 <dbl>, Rumelange_2015 <dbl>, Schifflange_2013 <dbl>, …
```
In the `filter()` statement, I only kept data from 2013 to 2017, “division”s ending with the string
“ange” (“division” can be a canton or a commune, for example “Canton Redange”, a canton, or
“Hesperange” a commune), and removed the cantons as I’m only interested in communes. If you don’t
understand this `filter()` statement, don’t fret; this is not important for what follows. I then
only kept the columns I’m interested in and pivoted the data to a wide format. Also, I needed to
add a unique identifier to the data frame. For this, I used the `rowid_to_column()` function from the
`{tibble}` package, which adds a new column to the data frame with an id going from 1 to the
number of rows in the data frame. If I did not add this identifier, the statement would still work:
```
unemp_lux_data3 <- unemp_lux_data %>%
filter(year %in% seq(2013, 2017), str_detect(division, ".*ange$"), !str_detect(division, ".*Canton.*")) %>%
select(division, year, unemployment_rate_in_percent)
unemp_lux_data3 %>%
pivot_wider(names_from = c(division, year), values_from = unemployment_rate_in_percent)
```
```
## # A tibble: 1 × 48
## Bertrange_2013 Bertr…¹ Bertr…² Diffe…³ Diffe…⁴ Diffe…⁵ Dudel…⁶ Dudel…⁷ Dudel…⁸
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.69 5.65 5.35 13.2 12.6 11.4 9.35 9.37 8.53
## # … with 39 more variables: Frisange_2013 <dbl>, Frisange_2014 <dbl>,
## # Frisange_2015 <dbl>, Hesperange_2013 <dbl>, Hesperange_2014 <dbl>,
## # Hesperange_2015 <dbl>, Leudelange_2013 <dbl>, Leudelange_2014 <dbl>,
## # Leudelange_2015 <dbl>, Mondercange_2013 <dbl>, Mondercange_2014 <dbl>,
## # Mondercange_2015 <dbl>, Pétange_2013 <dbl>, Pétange_2014 <dbl>,
## # Pétange_2015 <dbl>, Rumelange_2013 <dbl>, Rumelange_2014 <dbl>,
## # Rumelange_2015 <dbl>, Schifflange_2013 <dbl>, Schifflange_2014 <dbl>, …
```
and actually looks even better, but only because there are no repeated values; there is only one
unemployment rate for each “commune times year”. I will come back to this later on, with another
example that might be clearer. These last two code blocks are intense; make sure you go through
each line step by step and understand what is going on.
You might have noticed that because there is no data for the years 2016 and 2017, these columns do
not appear in the data. But suppose that we need to have these columns, so that a colleague from
another department can fill in the values. This is possible by providing a data frame with the
detailed specifications of the result data frame. This optional data frame must have at least two
columns: `.name`, which contains the column names you want, and `.value`, which contains the name
of the column the values will come from. Also, the function that uses this spec is
`pivot_wider_spec()`, not `pivot_wider()`.
```
unemp_spec <- unemp_lux_data %>%
tidyr::expand(division,
year = c(year, 2016, 2017),
.value = "unemployment_rate_in_percent") %>%
unite(".name", division, year, remove = FALSE)
unemp_spec
```
Here, I use another function, `tidyr::expand()`, which returns every combination (the Cartesian
product) of the supplied variables.
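To see what `expand()` does, here is a minimal sketch on a throwaway `tibble`:
```
tibble(a = c(1, 1, 2), b = c("x", "y", "x")) %>%
  tidyr::expand(a, b)
```
```
## # A tibble: 4 × 2
##       a b    
##   <dbl> <chr>
## 1     1 x    
## 2     1 y    
## 3     2 x    
## 4     2 y    
```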
To make it work, we still need to create a column that uniquely identifies each row in the data:
```
unemp_lux_data4 <- unemp_lux_data %>%
select(division, year, unemployment_rate_in_percent) %>%
rowid_to_column() %>%
pivot_wider_spec(spec = unemp_spec)
unemp_lux_data4
```
```
## # A tibble: 1,770 × 2,007
## rowid Beauf…¹ Beauf…² Beauf…³ Beauf…⁴ Beauf…⁵ Beauf…⁶ Beauf…⁷ Beauf…⁸ Beauf…⁹
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 5.09 NA NA NA NA NA NA NA NA
## 2 2 NA 3.5 NA NA NA NA NA NA NA
## 3 3 NA NA 5.17 NA NA NA NA NA NA
## 4 4 NA NA NA 5.07 NA NA NA NA NA
## 5 5 NA NA NA NA 5.89 NA NA NA NA
## 6 6 NA NA NA NA NA 6.72 NA NA NA
## 7 7 NA NA NA NA NA NA 4.3 NA NA
## 8 8 NA NA NA NA NA NA NA 7.08 NA
## 9 9 NA NA NA NA NA NA NA NA 8.52
## 10 10 NA NA NA NA NA NA NA NA NA
## # … with 1,760 more rows, 1,997 more variables: Beaufort_2010 <dbl>,
## # Beaufort_2011 <dbl>, Beaufort_2012 <dbl>, Beaufort_2013 <dbl>,
## # Beaufort_2014 <dbl>, Beaufort_2015 <dbl>, Beaufort_2016 <dbl>,
## # Beaufort_2017 <dbl>, Bech_2001 <dbl>, Bech_2002 <dbl>, Bech_2003 <dbl>,
## # Bech_2004 <dbl>, Bech_2005 <dbl>, Bech_2006 <dbl>, Bech_2007 <dbl>,
## # Bech_2008 <dbl>, Bech_2009 <dbl>, Bech_2010 <dbl>, Bech_2011 <dbl>,
## # Bech_2012 <dbl>, Bech_2013 <dbl>, Bech_2014 <dbl>, Bech_2015 <dbl>, …
```
You can notice that now we have columns for 2016 and 2017 too. Let’s clean the data a little bit more:
```
unemp_lux_data4 %>%
select(-rowid) %>%
fill(matches(".*"), .direction = "down") %>%
slice(n())
```
```
## # A tibble: 1 × 2,006
## Beaufort_2001 Beaufo…¹ Beauf…² Beauf…³ Beauf…⁴ Beauf…⁵ Beauf…⁶ Beauf…⁷ Beauf…⁸
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.09 3.5 5.17 5.07 5.89 6.72 4.3 7.08 8.52
## # … with 1,997 more variables: Beaufort_2010 <dbl>, Beaufort_2011 <dbl>,
## # Beaufort_2012 <dbl>, Beaufort_2013 <dbl>, Beaufort_2014 <dbl>,
## # Beaufort_2015 <dbl>, Beaufort_2016 <dbl>, Beaufort_2017 <dbl>,
## # Bech_2001 <dbl>, Bech_2002 <dbl>, Bech_2003 <dbl>, Bech_2004 <dbl>,
## # Bech_2005 <dbl>, Bech_2006 <dbl>, Bech_2007 <dbl>, Bech_2008 <dbl>,
## # Bech_2009 <dbl>, Bech_2010 <dbl>, Bech_2011 <dbl>, Bech_2012 <dbl>,
## # Bech_2013 <dbl>, Bech_2014 <dbl>, Bech_2015 <dbl>, Bech_2016 <dbl>, …
```
We will learn about `fill()`, another `{tidyr}` function, a bit later in this chapter, but its basic
purpose is to fill rows with whatever value comes before or after the missing values. `slice(n())`
then only keeps the last row of the data frame, which is the row that contains all the values (except
for 2016 and 2017, which have missing values, as we wanted).
Here is another example of the importance of having an identifier column when using a spec:
```
data(mtcars)
mtcars_spec <- mtcars %>%
tidyr::expand(am, cyl, .value = "mpg") %>%
unite(".name", am, cyl, remove = FALSE)
mtcars_spec
```
We can now transform the data:
```
mtcars %>%
pivot_wider_spec(spec = mtcars_spec)
```
```
## # A tibble: 32 × 14
## disp hp drat wt qsec vs gear carb `0_4` `0_6` `0_8` `1_4` `1_6`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 160 110 3.9 2.62 16.5 0 4 4 NA NA NA NA 21
## 2 160 110 3.9 2.88 17.0 0 4 4 NA NA NA NA 21
## 3 108 93 3.85 2.32 18.6 1 4 1 NA NA NA 22.8 NA
## 4 258 110 3.08 3.22 19.4 1 3 1 NA 21.4 NA NA NA
## 5 360 175 3.15 3.44 17.0 0 3 2 NA NA 18.7 NA NA
## 6 225 105 2.76 3.46 20.2 1 3 1 NA 18.1 NA NA NA
## 7 360 245 3.21 3.57 15.8 0 3 4 NA NA 14.3 NA NA
## 8 147. 62 3.69 3.19 20 1 4 2 24.4 NA NA NA NA
## 9 141. 95 3.92 3.15 22.9 1 4 2 22.8 NA NA NA NA
## 10 168. 123 3.92 3.44 18.3 1 4 4 NA 19.2 NA NA NA
## # … with 22 more rows, and 1 more variable: `1_8` <dbl>
```
As you can see, there are several values of “mpg” for some combinations of “am” times “cyl”. If
we remove the other columns, each row will not be uniquely identified anymore. This results in a
warning message, and a tibble that contains list\-columns:
```
mtcars %>%
select(am, cyl, mpg) %>%
pivot_wider_spec(spec = mtcars_spec)
```
```
## Warning: Values from `mpg` are not uniquely identified; output will contain list-cols.
## * Use `values_fn = list` to suppress this warning.
## * Use `values_fn = {summary_fun}` to summarise duplicates.
## * Use the following dplyr code to identify duplicates.
## {data} %>%
## dplyr::group_by(am, cyl) %>%
## dplyr::summarise(n = dplyr::n(), .groups = "drop") %>%
## dplyr::filter(n > 1L)
```
```
## # A tibble: 1 × 6
## `0_4` `0_6` `0_8` `1_4` `1_6` `1_8`
## <list> <list> <list> <list> <list> <list>
## 1 <dbl [3]> <dbl [4]> <dbl [12]> <dbl [8]> <dbl [3]> <dbl [2]>
```
We are going to learn about list\-columns in the next section. List\-columns are very powerful, and
mastering them will be important. But generally speaking, when reshaping data, getting list\-columns
back often means that something went wrong, so you have to be careful with this.
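If the duplicates are expected and you would rather summarise them than get list\-columns, the
warning above points to the `values_fn =` argument; for instance, to average the duplicated “mpg”
values (a sketch, assuming a recent enough version of `{tidyr}`; output not shown):
```
mtcars %>%
  select(am, cyl, mpg) %>%
  pivot_wider(names_from = c(am, cyl),
              values_from = mpg,
              values_fn = mean) # average the duplicated mpg values
```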
`pivot_longer()` is used when you need to go from a wide to a long dataset, meaning, a dataset
where there are some columns that should not be columns, but rather, the levels of a factor
variable. Let’s suppose that the “am” column is split into two columns, `1` for manual and `0`
for automatic transmissions, and that the values filling these columns are miles per gallon, “mpg”:
```
mtcars_wide_am <- mtcars %>%
pivot_wider(names_from = am, values_from = mpg)
mtcars_wide_am %>%
select(`0`, `1`, everything())
```
```
## # A tibble: 32 × 11
## `0` `1` cyl disp hp drat wt qsec vs gear carb
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 NA 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 NA 21 6 160 110 3.9 2.88 17.0 0 4 4
## 3 NA 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 4 21.4 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 5 18.7 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 6 18.1 NA 6 225 105 2.76 3.46 20.2 1 3 1
## 7 14.3 NA 8 360 245 3.21 3.57 15.8 0 3 4
## 8 24.4 NA 4 147. 62 3.69 3.19 20 1 4 2
## 9 22.8 NA 4 141. 95 3.92 3.15 22.9 1 4 2
## 10 19.2 NA 6 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
As you can see, the “0” and “1” columns should not be their own columns (unless there is a very
specific and good reason they should); rather, they should be the levels of another column (in
our case, “am”).
We can go back to a long dataset like so:
```
mtcars_wide_am %>%
pivot_longer(cols = c(`1`, `0`), names_to = "am", values_to = "mpg") %>%
select(am, mpg, everything())
```
```
## # A tibble: 64 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 0 NA 6 160 110 3.9 2.62 16.5 0 4 4
## 3 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 4 0 NA 6 160 110 3.9 2.88 17.0 0 4 4
## 5 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 6 0 NA 4 108 93 3.85 2.32 18.6 1 4 1
## 7 1 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 8 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 9 1 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 10 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## # … with 54 more rows
```
In the `cols` argument, you need to list all the variables that need to be transformed. Only `1` and
`0` must be pivoted, so I list them. Just for illustration purposes, imagine that we would need
to pivot 50 columns. It would be faster to list the columns that do not need to be pivoted. This
can be achieved by listing the columns that must be excluded with `-` in front, and maybe using
`matches()` with a regular expression:
```
mtcars_wide_am %>%
pivot_longer(cols = -matches("^[[:alpha:]]"),
names_to = "am",
values_to = "mpg") %>%
select(am, mpg, everything())
```
```
## # A tibble: 64 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 0 NA 6 160 110 3.9 2.62 16.5 0 4 4
## 3 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 4 0 NA 6 160 110 3.9 2.88 17.0 0 4 4
## 5 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 6 0 NA 4 108 93 3.85 2.32 18.6 1 4 1
## 7 1 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 8 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 9 1 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 10 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## # … with 54 more rows
```
Every column that starts with a letter is ok, so there is no need to pivot them. I use the `matches()`
function with a regular expression so that I don’t have to type the names of all the columns. `select()`
is used to re\-order the columns, only for viewing purposes.
`names_to =` takes a string as argument, which will be the name of the new column containing the
levels `0` and `1`, and `values_to =` also takes a string as argument, which will be the name of
the column containing the values. Finally, you can see that there are a lot of `NA`s in the
output. These can be removed easily:
```
mtcars_wide_am %>%
pivot_longer(cols = c(`1`, `0`), names_to = "am", values_to = "mpg", values_drop_na = TRUE) %>%
select(am, mpg, everything())
```
```
## # A tibble: 32 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 3 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 4 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 5 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## 6 0 18.1 6 225 105 2.76 3.46 20.2 1 3 1
## 7 0 14.3 8 360 245 3.21 3.57 15.8 0 3 4
## 8 0 24.4 4 147. 62 3.69 3.19 20 1 4 2
## 9 0 22.8 4 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 19.2 6 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Now for a more advanced example, let’s suppose that we are dealing with the following wide dataset:
```
mtcars_wide <- mtcars %>%
pivot_wider_spec(spec = mtcars_spec)
mtcars_wide
```
```
## # A tibble: 32 × 14
## disp hp drat wt qsec vs gear carb `0_4` `0_6` `0_8` `1_4` `1_6`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 160 110 3.9 2.62 16.5 0 4 4 NA NA NA NA 21
## 2 160 110 3.9 2.88 17.0 0 4 4 NA NA NA NA 21
## 3 108 93 3.85 2.32 18.6 1 4 1 NA NA NA 22.8 NA
## 4 258 110 3.08 3.22 19.4 1 3 1 NA 21.4 NA NA NA
## 5 360 175 3.15 3.44 17.0 0 3 2 NA NA 18.7 NA NA
## 6 225 105 2.76 3.46 20.2 1 3 1 NA 18.1 NA NA NA
## 7 360 245 3.21 3.57 15.8 0 3 4 NA NA 14.3 NA NA
## 8 147. 62 3.69 3.19 20 1 4 2 24.4 NA NA NA NA
## 9 141. 95 3.92 3.15 22.9 1 4 2 22.8 NA NA NA NA
## 10 168. 123 3.92 3.44 18.3 1 4 4 NA 19.2 NA NA NA
## # … with 22 more rows, and 1 more variable: `1_8` <dbl>
```
The difficulty here is that we have columns with two levels of information. For instance, the
column “0\_4” contains the miles per gallon values for automatic cars (`0`) with `4` cylinders.
The first step is to pivot the columns:
```
mtcars_wide %>%
pivot_longer(cols = matches("0|1"),
names_to = "am_cyl",
values_to = "mpg",
values_drop_na = TRUE) %>%
select(am_cyl, mpg, everything())
```
```
## # A tibble: 32 × 10
## am_cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1_6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1_6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1_4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0_6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0_8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0_6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0_8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0_4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0_4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0_6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Now we only need to separate the “am\_cyl” column into two new columns, “am” and “cyl”:
```
mtcars_wide %>%
pivot_longer(cols = matches("0|1"),
names_to = "am_cyl",
values_to = "mpg",
values_drop_na = TRUE) %>%
separate(am_cyl, into = c("am", "cyl"), sep = "_") %>%
select(am, cyl, mpg, everything())
```
```
## # A tibble: 32 × 11
## am cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1 6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1 4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0 6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0 8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0 6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0 8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0 4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0 4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
It is also possible to construct a specification data frame, just like for `pivot_wider_spec()`.
This time, I’m using the `build_longer_spec()` function that makes it easy to build specifications:
```
mtcars_spec_long <- mtcars_wide %>%
build_longer_spec(matches("0|1"),
values_to = "mpg") %>%
separate(name, c("am", "cyl"), sep = "_")
mtcars_spec_long
```
```
## # A tibble: 6 × 4
## .name .value am cyl
## <chr> <chr> <chr> <chr>
## 1 0_4 mpg 0 4
## 2 0_6 mpg 0 6
## 3 0_8 mpg 0 8
## 4 1_4 mpg 1 4
## 5 1_6 mpg 1 6
## 6 1_8 mpg 1 8
```
This spec can now be passed to `pivot_longer_spec()`:
```
mtcars_wide %>%
pivot_longer_spec(spec = mtcars_spec_long,
values_drop_na = TRUE) %>%
select(am, cyl, mpg, everything())
```
```
## # A tibble: 32 × 11
## am cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1 6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1 4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0 6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0 8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0 6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0 8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0 4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0 4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Defining specifications gives a lot of flexibility, and in some complicated cases it is the way to go.
### 4\.4\.2 `fill()` and `full_seq()`
`fill()` is pretty useful to… fill in missing values. For instance, in `survey_data`, some “id”s
are missing:
```
survey_data
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
It seems pretty obvious that the first `NA` is supposed to be `1` and the second one is supposed
to be `4`. With `fill()`, this is pretty easy to achieve:
```
survey_data %>%
fill(.direction = "down", id)
```
`full_seq()` is similar in spirit: given two endpoints and a period, it generates the complete sequence of values in between:
```
full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1)
```
```
## [1] "2018-08-01" "2018-08-02" "2018-08-03"
```
We can add this as the date column to our survey data:
```
survey_data %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
```
## # A tibble: 12 × 4
## id variable value date
## <dbl> <chr> <dbl> <date>
## 1 1 var1 1 2018-08-01
## 2 1 var2 0.2 2018-08-02
## 3 NA var3 0.3 2018-08-03
## 4 2 var1 1.4 2018-08-01
## 5 2 var2 1.9 2018-08-02
## 6 2 var3 4.1 2018-08-03
## 7 3 var1 0.1 2018-08-01
## 8 3 var2 2.8 2018-08-02
## 9 3 var3 8.9 2018-08-03
## 10 4 var1 1.7 2018-08-01
## 11 NA var2 1.9 2018-08-02
## 12 4 var3 7.6 2018-08-03
```
I use the base `rep()` function to repeat the sequence of dates 4 times, and then add it to the
data frame using `mutate()`.
Putting all these operations together:
```
survey_data %>%
fill(.direction = "down", id) %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
```
## # A tibble: 12 × 4
## id variable value date
## <dbl> <chr> <dbl> <date>
## 1 1 var1 1 2018-08-01
## 2 1 var2 0.2 2018-08-02
## 3 1 var3 0.3 2018-08-03
## 4 2 var1 1.4 2018-08-01
## 5 2 var2 1.9 2018-08-02
## 6 2 var3 4.1 2018-08-03
## 7 3 var1 0.1 2018-08-01
## 8 3 var2 2.8 2018-08-02
## 9 3 var3 8.9 2018-08-03
## 10 4 var1 1.7 2018-08-01
## 11 4 var2 1.9 2018-08-02
## 12 4 var3 7.6 2018-08-03
```
You should be careful when imputing missing values though. The method described above is called
*last observation carried forward*, and sometimes it makes sense, like here, but sometimes it doesn’t,
and doing this will introduce bias in your analysis. Discussing how to handle missing values in your
analysis is outside of the scope of this book, but there are many resources available. You may want
to check out the vignettes of the `{mice}` [package](https://amices.org/mice/articles/overview.html),
which list many resources to get you started.
### 4\.4\.3 Put order in your columns with `separate()`, `unite()`, and in your rows with `separate_rows()`
Sometimes, data can be in a format that makes working with it needlessly painful. For example, you
get this:
```
survey_data_not_tidy
```
```
## # A tibble: 12 × 3
## id variable_date value
## <dbl> <chr> <dbl>
## 1 1 var1/2018-08-01 1
## 2 1 var2/2018-08-02 0.2
## 3 1 var3/2018-08-03 0.3
## 4 2 var1/2018-08-01 1.4
## 5 2 var2/2018-08-02 1.9
## 6 2 var3/2018-08-03 4.1
## 7 3 var1/2018-08-01 0.1
## 8 3 var2/2018-08-02 2.8
## 9 3 var3/2018-08-03 8.9
## 10 4 var1/2018-08-01 1.7
## 11 4 var2/2018-08-02 1.9
## 12 4 var3/2018-08-03 7.6
```
Dealing with this is simple, thanks to `separate()`:
```
survey_data_not_tidy %>%
separate(variable_date, into = c("variable", "date"), sep = "/")
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
The `variable_date` column gets separated into two columns, `variable` and `date`. One also needs
to specify the separator, in this case “/”.
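Note that the new `date` column is a character column; if you need an actual date, you can convert
it afterwards, since strings like “2018\-08\-01” parse directly (a sketch, output not shown):
```
survey_data_not_tidy %>%
  separate(variable_date, into = c("variable", "date"), sep = "/") %>%
  mutate(date = as.Date(date)) # "2018-08-01" is the default date format
```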
`unite()` is the reverse operation, which can be useful when you are confronted with this situation:
```
survey_data2
```
```
## # A tibble: 12 × 6
## id variable year month day value
## <dbl> <chr> <chr> <chr> <chr> <dbl>
## 1 1 var1 2018 08 01 1
## 2 1 var2 2018 08 02 0.2
## 3 1 var3 2018 08 03 0.3
## 4 2 var1 2018 08 01 1.4
## 5 2 var2 2018 08 02 1.9
## 6 2 var3 2018 08 03 4.1
## 7 3 var1 2018 08 01 0.1
## 8 3 var2 2018 08 02 2.8
## 9 3 var3 2018 08 03 8.9
## 10 4 var1 2018 08 01 1.7
## 11 4 var2 2018 08 02 1.9
## 12 4 var3 2018 08 03 7.6
```
In some situations, it is better to have the date as a single column:
```
survey_data2 %>%
unite(date, year, month, day, sep = "-")
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
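If you want to keep the original columns alongside the new one, `unite()` (just like `separate()`)
has a `remove =` argument (a sketch, output not shown):
```
survey_data2 %>%
  unite(date, year, month, day, sep = "-", remove = FALSE)
```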
Another awful situation is the following:
```
survey_data_from_hell
```
```
## id variable value
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1, var2, var3 1.4, 1.9, 4.1
## 5 3 var1, var2 0.1, 2.8
## 6 3 var3 8.9
## 7 4 var1 1.7
## 8 NA var2 1.9
## 9 4 var3 7.6
```
`separate_rows()` saves the day:
```
survey_data_from_hell %>%
separate_rows(variable, value)
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <chr>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
So to summarise… you can go from this:
```
survey_data_from_hell
```
```
## id variable value
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1, var2, var3 1.4, 1.9, 4.1
## 5 3 var1, var2 0.1, 2.8
## 6 3 var3 8.9
## 7 4 var1 1.7
## 8 NA var2 1.9
## 9 4 var3 7.6
```
to this:
```
survey_data_clean
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
quite easily (notice the `convert = TRUE` argument to `separate_rows()`, which converts the `value` column back to a numeric column):
```
survey_data_from_hell %>%
separate_rows(variable, value, convert = TRUE) %>%
fill(.direction = "down", id) %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
### 4\.4\.1 `pivot_wider()` and `pivot_longer()`
Let’s first create a fake dataset:
```
library(tidyr)
```
```
survey_data <- tribble(
~id, ~variable, ~value,
1, "var1", 1,
1, "var2", 0.2,
NA, "var3", 0.3,
2, "var1", 1.4,
2, "var2", 1.9,
2, "var3", 4.1,
3, "var1", 0.1,
3, "var2", 2.8,
3, "var3", 8.9,
4, "var1", 1.7,
NA, "var2", 1.9,
4, "var3", 7.6
)
head(survey_data)
```
```
## # A tibble: 6 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
```
I used the `tribble()` function from the `{tibble}` package to create this fake dataset.
I’ll discuss this package later, for now, let’s focus on `{tidyr}.`
Let’s suppose that we need the data to be in the wide format which means `var1`, `var2` and `var3`
need to be their own columns. To do this, we need to use the `pivot_wider()` function. Why *wide*?
Because the data set will be wide, meaning, having more columns than rows.
```
survey_data %>%
pivot_wider(id_cols = id,
names_from = variable,
values_from = value)
```
```
## # A tibble: 5 × 4
## id var1 var2 var3
## <dbl> <dbl> <dbl> <dbl>
## 1 1 1 0.2 NA
## 2 NA NA 1.9 0.3
## 3 2 1.4 1.9 4.1
## 4 3 0.1 2.8 8.9
## 5 4 1.7 NA 7.6
```
Let’s go through `pivot_wider()`’s arguments: the first is `id_cols =` which requires the variable
that uniquely identifies the rows to be supplied. `names_from =` is where you input the variable that will
generate the names of the new columns. In our case, the `variable` colmuns has three values; `var1`,
`var2` and `var3`, and these are now the names of the new columns. Finally, `values_from =` is where
you can specify the column containing the values that will fill the data frame.
I find the argument names `names_from =` and `values_from =` quite explicit.
As you can see, there are some missing values. Let’s suppose that we know that these missing values
are true 0’s. `pivot_wider()` has an argument called `values_fill =` that makes it easy to replace
the missing values:
```
survey_data %>%
pivot_wider(id_cols = id,
names_from = variable,
values_from = value,
values_fill = list(value = 0))
```
```
## # A tibble: 5 × 4
## id var1 var2 var3
## <dbl> <dbl> <dbl> <dbl>
## 1 1 1 0.2 0
## 2 NA 0 1.9 0.3
## 3 2 1.4 1.9 4.1
## 4 3 0.1 2.8 8.9
## 5 4 1.7 0 7.6
```
A list of variables and their respective values to replace NA’s with must be supplied to `values_fill`.
Let’s now use another dataset, which you can get from
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets/unemployment/all)
(downloaded from: [http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId\=12950\&IF\_Language\=eng\&MainTheme\=2\&FldrName\=3\&RFPath\=91](http://www.statistiques.public.lu/stat/TableViewer/tableView.aspx?ReportId=12950&IF_Language=eng&MainTheme=2&FldrName=3&RFPath=91)). This data set gives the unemployment rate for each Luxembourgish
canton from 2001 to 2015\. We will come back to this data later on to learn how to plot it. For now,
let’s use it to learn more about `{tidyr}`.
```
unemp_lux_data <- rio::import(
"https://raw.githubusercontent.com/b-rodrigues/modern_R/master/datasets/unemployment/all/unemployment_lux_all.csv"
)
head(unemp_lux_data)
```
```
## division year active_population of_which_non_wage_earners
## 1 Beaufort 2001 688 85
## 2 Beaufort 2002 742 85
## 3 Beaufort 2003 773 85
## 4 Beaufort 2004 828 80
## 5 Beaufort 2005 866 96
## 6 Beaufort 2006 893 87
## of_which_wage_earners total_employed_population unemployed
## 1 568 653 35
## 2 631 716 26
## 3 648 733 40
## 4 706 786 42
## 5 719 815 51
## 6 746 833 60
## unemployment_rate_in_percent
## 1 5.09
## 2 3.50
## 3 5.17
## 4 5.07
## 5 5.89
## 6 6.72
```
Now, let’s suppose that for our purposes, it would make more sense to have the data in a wide format,
where columns are “division times year” and the value is the unemployment rate. This can be easily done
by providing more columns to `names_from =`.
```
unemp_lux_data2 <- unemp_lux_data %>%
filter(year %in% seq(2013, 2017),
str_detect(division, ".*ange$"),
!str_detect(division, ".*Canton.*")) %>%
select(division, year, unemployment_rate_in_percent) %>%
rowid_to_column()
unemp_lux_data2 %>%
pivot_wider(names_from = c(division, year),
values_from = unemployment_rate_in_percent)
```
```
## # A tibble: 48 × 49
## rowid Bertr…¹ Bertr…² Bertr…³ Diffe…⁴ Diffe…⁵ Diffe…⁶ Dudel…⁷ Dudel…⁸ Dudel…⁹
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 5.69 NA NA NA NA NA NA NA NA
## 2 2 NA 5.65 NA NA NA NA NA NA NA
## 3 3 NA NA 5.35 NA NA NA NA NA NA
## 4 4 NA NA NA 13.2 NA NA NA NA NA
## 5 5 NA NA NA NA 12.6 NA NA NA NA
## 6 6 NA NA NA NA NA 11.4 NA NA NA
## 7 7 NA NA NA NA NA NA 9.35 NA NA
## 8 8 NA NA NA NA NA NA NA 9.37 NA
## 9 9 NA NA NA NA NA NA NA NA 8.53
## 10 10 NA NA NA NA NA NA NA NA NA
## # … with 38 more rows, 39 more variables: Frisange_2013 <dbl>,
## # Frisange_2014 <dbl>, Frisange_2015 <dbl>, Hesperange_2013 <dbl>,
## # Hesperange_2014 <dbl>, Hesperange_2015 <dbl>, Leudelange_2013 <dbl>,
## # Leudelange_2014 <dbl>, Leudelange_2015 <dbl>, Mondercange_2013 <dbl>,
## # Mondercange_2014 <dbl>, Mondercange_2015 <dbl>, Pétange_2013 <dbl>,
## # Pétange_2014 <dbl>, Pétange_2015 <dbl>, Rumelange_2013 <dbl>,
## # Rumelange_2014 <dbl>, Rumelange_2015 <dbl>, Schifflange_2013 <dbl>, …
```
In the `filter()` statement, I only kept data from 2013 to 2017, “division”s ending with the string
“ange” (“division” can be a canton or a commune, for example “Canton Redange”, a canton, or
“Hesperange” a commune), and removed the cantons as I’m only interested in communes. If you don’t
understand this `filter()` statement, don’t fret; this is not important for what follows. I then
only kept the columns I’m interested in and pivoted the data to a wide format. Also, I needed to
add a unique identifier to the data frame. For this, I used the `rowid_to_column()` function, from the
`{tibble}` package, which adds a new column to the data frame with an id, going from 1 to the
number of rows in the data frame. If I did not add this identifier, the statement would still work:
```
unemp_lux_data3 <- unemp_lux_data %>%
filter(year %in% seq(2013, 2017), str_detect(division, ".*ange$"), !str_detect(division, ".*Canton.*")) %>%
select(division, year, unemployment_rate_in_percent)
unemp_lux_data3 %>%
pivot_wider(names_from = c(division, year), values_from = unemployment_rate_in_percent)
```
```
## # A tibble: 1 × 48
## Bertrange_2013 Bertr…¹ Bertr…² Diffe…³ Diffe…⁴ Diffe…⁵ Dudel…⁶ Dudel…⁷ Dudel…⁸
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.69 5.65 5.35 13.2 12.6 11.4 9.35 9.37 8.53
## # … with 39 more variables: Frisange_2013 <dbl>, Frisange_2014 <dbl>,
## # Frisange_2015 <dbl>, Hesperange_2013 <dbl>, Hesperange_2014 <dbl>,
## # Hesperange_2015 <dbl>, Leudelange_2013 <dbl>, Leudelange_2014 <dbl>,
## # Leudelange_2015 <dbl>, Mondercange_2013 <dbl>, Mondercange_2014 <dbl>,
## # Mondercange_2015 <dbl>, Pétange_2013 <dbl>, Pétange_2014 <dbl>,
## # Pétange_2015 <dbl>, Rumelange_2013 <dbl>, Rumelange_2014 <dbl>,
## # Rumelange_2015 <dbl>, Schifflange_2013 <dbl>, Schifflange_2014 <dbl>, …
```
and actually look even better, but only because there are no repeated values; there is only one
unemployment rate for each “commune times year”. I will come back to this later on, with another
example that might be clearer. These last two code blocks are intense; make sure you go through
each line step by step and understand what is going on.
You might have noticed that because there is no data for the years 2016 and 2017, these columns do
not appear in the data. But suppose that we need to have these columns, so that a colleague from
another department can fill in the values. This is possible by providing a data frame with the
detailed specifications of the result data frame. This optional data frame must have at least two
columns: `.name`, which contains the new column names you want, and `.value`, which contains the name
of the column the values come from.
Also, the function that uses this spec is `pivot_wider_spec()`, and not `pivot_wider()`.
```
unemp_spec <- unemp_lux_data %>%
tidyr::expand(division,
year = c(year, 2016, 2017),
.value = "unemployment_rate_in_percent") %>%
unite(".name", division, year, remove = FALSE)
unemp_spec
```
Here, I use another function, `tidyr::expand()`, which returns every combination (the Cartesian product)
of the variables of a dataset.
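If `expand()` is new to you, here is a minimal sketch on a toy tibble (the names `df`, `x` and `y`
are made up for illustration):
```
df <- tribble(
  ~x, ~y,
  "a", 1,
  "b", 2
)

df %>%
  tidyr::expand(x, y)
# returns the 4 combinations: (a, 1), (a, 2), (b, 1), (b, 2)
```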
To make it work, we still need to create a column that uniquely identifies each row in the data:
```
unemp_lux_data4 <- unemp_lux_data %>%
select(division, year, unemployment_rate_in_percent) %>%
rowid_to_column() %>%
pivot_wider_spec(spec = unemp_spec)
unemp_lux_data4
```
```
## # A tibble: 1,770 × 2,007
## rowid Beauf…¹ Beauf…² Beauf…³ Beauf…⁴ Beauf…⁵ Beauf…⁶ Beauf…⁷ Beauf…⁸ Beauf…⁹
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 5.09 NA NA NA NA NA NA NA NA
## 2 2 NA 3.5 NA NA NA NA NA NA NA
## 3 3 NA NA 5.17 NA NA NA NA NA NA
## 4 4 NA NA NA 5.07 NA NA NA NA NA
## 5 5 NA NA NA NA 5.89 NA NA NA NA
## 6 6 NA NA NA NA NA 6.72 NA NA NA
## 7 7 NA NA NA NA NA NA 4.3 NA NA
## 8 8 NA NA NA NA NA NA NA 7.08 NA
## 9 9 NA NA NA NA NA NA NA NA 8.52
## 10 10 NA NA NA NA NA NA NA NA NA
## # … with 1,760 more rows, 1,997 more variables: Beaufort_2010 <dbl>,
## # Beaufort_2011 <dbl>, Beaufort_2012 <dbl>, Beaufort_2013 <dbl>,
## # Beaufort_2014 <dbl>, Beaufort_2015 <dbl>, Beaufort_2016 <dbl>,
## # Beaufort_2017 <dbl>, Bech_2001 <dbl>, Bech_2002 <dbl>, Bech_2003 <dbl>,
## # Bech_2004 <dbl>, Bech_2005 <dbl>, Bech_2006 <dbl>, Bech_2007 <dbl>,
## # Bech_2008 <dbl>, Bech_2009 <dbl>, Bech_2010 <dbl>, Bech_2011 <dbl>,
## # Bech_2012 <dbl>, Bech_2013 <dbl>, Bech_2014 <dbl>, Bech_2015 <dbl>, …
```
Notice that we now have columns for 2016 and 2017 too. Let’s clean the data a little bit more:
```
unemp_lux_data4 %>%
select(-rowid) %>%
fill(matches(".*"), .direction = "down") %>%
slice(n())
```
```
## # A tibble: 1 × 2,006
## Beaufort_2001 Beaufo…¹ Beauf…² Beauf…³ Beauf…⁴ Beauf…⁵ Beauf…⁶ Beauf…⁷ Beauf…⁸
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.09 3.5 5.17 5.07 5.89 6.72 4.3 7.08 8.52
## # … with 1,997 more variables: Beaufort_2010 <dbl>, Beaufort_2011 <dbl>,
## # Beaufort_2012 <dbl>, Beaufort_2013 <dbl>, Beaufort_2014 <dbl>,
## # Beaufort_2015 <dbl>, Beaufort_2016 <dbl>, Beaufort_2017 <dbl>,
## # Bech_2001 <dbl>, Bech_2002 <dbl>, Bech_2003 <dbl>, Bech_2004 <dbl>,
## # Bech_2005 <dbl>, Bech_2006 <dbl>, Bech_2007 <dbl>, Bech_2008 <dbl>,
## # Bech_2009 <dbl>, Bech_2010 <dbl>, Bech_2011 <dbl>, Bech_2012 <dbl>,
## # Bech_2013 <dbl>, Bech_2014 <dbl>, Bech_2015 <dbl>, Bech_2016 <dbl>, …
```
We will learn about `fill()`, another `{tidyr}` function, a bit later in this chapter, but its basic
purpose is to fill rows with whatever value comes before or after the missing values. `slice(n())`
then only keeps the last row of the data frame, which is the row that contains all the values (except
for 2016 and 2017, which have missing values, as we wanted).
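To see what `fill()` and `slice(n())` do on something smaller, here is a quick sketch (the tibble is
made up):
```
tibble(x = c(1, NA, NA, 4)) %>%
  fill(x, .direction = "down") # x is now 1, 1, 1, 4

mtcars %>%
  slice(n()) # n() is the number of rows, so this keeps only the last row
```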
Here is another example of the importance of having an identifier column when using a spec:
```
data(mtcars)
mtcars_spec <- mtcars %>%
tidyr::expand(am, cyl, .value = "mpg") %>%
unite(".name", am, cyl, remove = FALSE)
mtcars_spec
```
We can now transform the data:
```
mtcars %>%
pivot_wider_spec(spec = mtcars_spec)
```
```
## # A tibble: 32 × 14
## disp hp drat wt qsec vs gear carb `0_4` `0_6` `0_8` `1_4` `1_6`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 160 110 3.9 2.62 16.5 0 4 4 NA NA NA NA 21
## 2 160 110 3.9 2.88 17.0 0 4 4 NA NA NA NA 21
## 3 108 93 3.85 2.32 18.6 1 4 1 NA NA NA 22.8 NA
## 4 258 110 3.08 3.22 19.4 1 3 1 NA 21.4 NA NA NA
## 5 360 175 3.15 3.44 17.0 0 3 2 NA NA 18.7 NA NA
## 6 225 105 2.76 3.46 20.2 1 3 1 NA 18.1 NA NA NA
## 7 360 245 3.21 3.57 15.8 0 3 4 NA NA 14.3 NA NA
## 8 147. 62 3.69 3.19 20 1 4 2 24.4 NA NA NA NA
## 9 141. 95 3.92 3.15 22.9 1 4 2 22.8 NA NA NA NA
## 10 168. 123 3.92 3.44 18.3 1 4 4 NA 19.2 NA NA NA
## # … with 22 more rows, and 1 more variable: `1_8` <dbl>
```
As you can see, there are several values of “mpg” for some combinations of “am” times “cyl”. If
we remove the other columns, each row will not be uniquely identified anymore. This results in a
warning message, and a tibble that contains list\-columns:
```
mtcars %>%
select(am, cyl, mpg) %>%
pivot_wider_spec(spec = mtcars_spec)
```
```
## Warning: Values from `mpg` are not uniquely identified; output will contain list-cols.
## * Use `values_fn = list` to suppress this warning.
## * Use `values_fn = {summary_fun}` to summarise duplicates.
## * Use the following dplyr code to identify duplicates.
## {data} %>%
## dplyr::group_by(am, cyl) %>%
## dplyr::summarise(n = dplyr::n(), .groups = "drop") %>%
## dplyr::filter(n > 1L)
```
```
## # A tibble: 1 × 6
## `0_4` `0_6` `0_8` `1_4` `1_6` `1_8`
## <list> <list> <list> <list> <list> <list>
## 1 <dbl [3]> <dbl [4]> <dbl [12]> <dbl [8]> <dbl [3]> <dbl [2]>
```
We are going to learn about list\-columns in the next section. List\-columns are very powerful, and
mastering them will be important. But generally speaking, when reshaping data, if you get list\-columns
back it often means that something went wrong.
So you have to be careful with this.
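As the warning suggests, one way out is to aggregate the duplicated values with `values_fn =`,
assuming that summarising them (here with `mean()`) makes sense for your use case:
```
mtcars %>%
  select(am, cyl, mpg) %>%
  pivot_wider_spec(spec = mtcars_spec, values_fn = mean)
```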
`pivot_longer()` is used when you need to go from a wide to a long dataset, meaning, a dataset
where there are some columns that should not be columns, but rather, the levels of a factor
variable. Let’s suppose that the “am” column is split into two columns, `0` for automatic and `1`
for manual transmissions, and that the values filling these columns are miles per gallon, “mpg”:
```
mtcars_wide_am <- mtcars %>%
pivot_wider(names_from = am, values_from = mpg)
mtcars_wide_am %>%
select(`0`, `1`, everything())
```
```
## # A tibble: 32 × 11
## `0` `1` cyl disp hp drat wt qsec vs gear carb
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 NA 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 NA 21 6 160 110 3.9 2.88 17.0 0 4 4
## 3 NA 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 4 21.4 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 5 18.7 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 6 18.1 NA 6 225 105 2.76 3.46 20.2 1 3 1
## 7 14.3 NA 8 360 245 3.21 3.57 15.8 0 3 4
## 8 24.4 NA 4 147. 62 3.69 3.19 20 1 4 2
## 9 22.8 NA 4 141. 95 3.92 3.15 22.9 1 4 2
## 10 19.2 NA 6 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
As you can see, the “0” and “1” columns should not be their own columns, unless there is a very
specific and good reason they should… but rather, they should be the levels of another column (in
our case, “am”).
We can go back to a long dataset like so:
```
mtcars_wide_am %>%
pivot_longer(cols = c(`1`, `0`), names_to = "am", values_to = "mpg") %>%
select(am, mpg, everything())
```
```
## # A tibble: 64 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 0 NA 6 160 110 3.9 2.62 16.5 0 4 4
## 3 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 4 0 NA 6 160 110 3.9 2.88 17.0 0 4 4
## 5 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 6 0 NA 4 108 93 3.85 2.32 18.6 1 4 1
## 7 1 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 8 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 9 1 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 10 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## # … with 54 more rows
```
In the `cols` argument, you need to list all the variables that need to be transformed. Only `1` and
`0` must be pivoted, so I list them. Just for illustration purposes, imagine that we would need
to pivot 50 columns. It would be faster to list the columns that do not need to be pivoted. This
can be achieved by listing the columns that must be excluded with `-` in front, and maybe using
`matches()` with a regular expression:
```
mtcars_wide_am %>%
pivot_longer(cols = -matches("^[[:alpha:]]"),
names_to = "am",
values_to = "mpg") %>%
select(am, mpg, everything())
```
```
## # A tibble: 64 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 0 NA 6 160 110 3.9 2.62 16.5 0 4 4
## 3 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 4 0 NA 6 160 110 3.9 2.88 17.0 0 4 4
## 5 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 6 0 NA 4 108 93 3.85 2.32 18.6 1 4 1
## 7 1 NA 6 258 110 3.08 3.22 19.4 1 3 1
## 8 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 9 1 NA 8 360 175 3.15 3.44 17.0 0 3 2
## 10 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## # … with 54 more rows
```
Every column that starts with a letter is fine, so there is no need to pivot them. I use the `matches()`
function with a regular expression so that I don’t have to type the names of all the columns. `select()`
is used to re\-order the columns, only for viewing purposes.
`names_to =` takes a string as argument, which will be the name of the new column containing the
levels `0` and `1`, and `values_to =` also takes a string as argument, which will be the name of
the column containing the values. Finally, you can see that there are a lot of `NA`s in the
output. These can be removed easily:
```
mtcars_wide_am %>%
pivot_longer(cols = c(`1`, `0`), names_to = "am", values_to = "mpg", values_drop_na = TRUE) %>%
select(am, mpg, everything())
```
```
## # A tibble: 32 × 11
## am mpg cyl disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 21 6 160 110 3.9 2.62 16.5 0 4 4
## 2 1 21 6 160 110 3.9 2.88 17.0 0 4 4
## 3 1 22.8 4 108 93 3.85 2.32 18.6 1 4 1
## 4 0 21.4 6 258 110 3.08 3.22 19.4 1 3 1
## 5 0 18.7 8 360 175 3.15 3.44 17.0 0 3 2
## 6 0 18.1 6 225 105 2.76 3.46 20.2 1 3 1
## 7 0 14.3 8 360 245 3.21 3.57 15.8 0 3 4
## 8 0 24.4 4 147. 62 3.69 3.19 20 1 4 2
## 9 0 22.8 4 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 19.2 6 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Now for a more advanced example, let’s suppose that we are dealing with the following wide dataset:
```
mtcars_wide <- mtcars %>%
pivot_wider_spec(spec = mtcars_spec)
mtcars_wide
```
```
## # A tibble: 32 × 14
## disp hp drat wt qsec vs gear carb `0_4` `0_6` `0_8` `1_4` `1_6`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 160 110 3.9 2.62 16.5 0 4 4 NA NA NA NA 21
## 2 160 110 3.9 2.88 17.0 0 4 4 NA NA NA NA 21
## 3 108 93 3.85 2.32 18.6 1 4 1 NA NA NA 22.8 NA
## 4 258 110 3.08 3.22 19.4 1 3 1 NA 21.4 NA NA NA
## 5 360 175 3.15 3.44 17.0 0 3 2 NA NA 18.7 NA NA
## 6 225 105 2.76 3.46 20.2 1 3 1 NA 18.1 NA NA NA
## 7 360 245 3.21 3.57 15.8 0 3 4 NA NA 14.3 NA NA
## 8 147. 62 3.69 3.19 20 1 4 2 24.4 NA NA NA NA
## 9 141. 95 3.92 3.15 22.9 1 4 2 22.8 NA NA NA NA
## 10 168. 123 3.92 3.44 18.3 1 4 4 NA 19.2 NA NA NA
## # … with 22 more rows, and 1 more variable: `1_8` <dbl>
```
The difficulty here is that we have columns with two levels of information. For instance, the
column “0\_4” contains the miles per gallon values for automatic cars (`0`) with `4` cylinders.
The first step is to pivot these columns:
```
mtcars_wide %>%
pivot_longer(cols = matches("0|1"),
names_to = "am_cyl",
values_to = "mpg",
values_drop_na = TRUE) %>%
select(am_cyl, mpg, everything())
```
```
## # A tibble: 32 × 10
## am_cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1_6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1_6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1_4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0_6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0_8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0_6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0_8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0_4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0_4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0_6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Now we only need to separate the “am\_cyl” column into two new columns, “am” and “cyl”:
```
mtcars_wide %>%
pivot_longer(cols = matches("0|1"),
names_to = "am_cyl",
values_to = "mpg",
values_drop_na = TRUE) %>%
separate(am_cyl, into = c("am", "cyl"), sep = "_") %>%
select(am, cyl, mpg, everything())
```
```
## # A tibble: 32 × 11
## am cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1 6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1 4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0 6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0 8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0 6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0 8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0 4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0 4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
It is also possible to construct a specification data frame, just like for `pivot_wider_spec()`.
This time, I’m using the `build_longer_spec()` function that makes it easy to build specifications:
```
mtcars_spec_long <- mtcars_wide %>%
build_longer_spec(matches("0|1"),
values_to = "mpg") %>%
separate(name, c("am", "cyl"), sep = "_")
mtcars_spec_long
```
```
## # A tibble: 6 × 4
## .name .value am cyl
## <chr> <chr> <chr> <chr>
## 1 0_4 mpg 0 4
## 2 0_6 mpg 0 6
## 3 0_8 mpg 0 8
## 4 1_4 mpg 1 4
## 5 1_6 mpg 1 6
## 6 1_8 mpg 1 8
```
This spec can now be passed to `pivot_longer_spec()`:
```
mtcars_wide %>%
pivot_longer_spec(spec = mtcars_spec_long,
values_drop_na = TRUE) %>%
select(am, cyl, mpg, everything())
```
```
## # A tibble: 32 × 11
## am cyl mpg disp hp drat wt qsec vs gear carb
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 6 21 160 110 3.9 2.62 16.5 0 4 4
## 2 1 6 21 160 110 3.9 2.88 17.0 0 4 4
## 3 1 4 22.8 108 93 3.85 2.32 18.6 1 4 1
## 4 0 6 21.4 258 110 3.08 3.22 19.4 1 3 1
## 5 0 8 18.7 360 175 3.15 3.44 17.0 0 3 2
## 6 0 6 18.1 225 105 2.76 3.46 20.2 1 3 1
## 7 0 8 14.3 360 245 3.21 3.57 15.8 0 3 4
## 8 0 4 24.4 147. 62 3.69 3.19 20 1 4 2
## 9 0 4 22.8 141. 95 3.92 3.15 22.9 1 4 2
## 10 0 6 19.2 168. 123 3.92 3.44 18.3 1 4 4
## # … with 22 more rows
```
Defining specifications gives a lot of flexibility, and in some complicated cases it is the way to go.
### 4\.4\.2 `fill()` and `full_seq()`
`fill()` is pretty useful to… fill in missing values. For instance, in `survey_data`, some “id”s
are missing:
```
survey_data
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
It seems pretty obvious that the first `NA` is supposed to be `1` and the second missing value is
supposed to be `4`. With `fill()`, this is pretty easy to achieve:
```
survey_data %>%
fill(.direction = "down", id)
```
`full_seq()` is similar, but for sequences: it generates the complete sequence between the minimum and maximum values supplied:
```
full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1)
```
```
## [1] "2018-08-01" "2018-08-02" "2018-08-03"
```
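`full_seq()` works with numbers too; a minimal sketch:
```
full_seq(c(1, 5), 1) # returns 1 2 3 4 5
```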
We can add this as the date column to our survey data:
```
survey_data %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
```
## # A tibble: 12 × 4
## id variable value date
## <dbl> <chr> <dbl> <date>
## 1 1 var1 1 2018-08-01
## 2 1 var2 0.2 2018-08-02
## 3 NA var3 0.3 2018-08-03
## 4 2 var1 1.4 2018-08-01
## 5 2 var2 1.9 2018-08-02
## 6 2 var3 4.1 2018-08-03
## 7 3 var1 0.1 2018-08-01
## 8 3 var2 2.8 2018-08-02
## 9 3 var3 8.9 2018-08-03
## 10 4 var1 1.7 2018-08-01
## 11 NA var2 1.9 2018-08-02
## 12 4 var3 7.6 2018-08-03
```
I use the base `rep()` function to repeat the date 4 times and then, using `mutate()`, I added
it to the data frame.
Putting all these operations together:
```
survey_data %>%
fill(.direction = "down", id) %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
```
## # A tibble: 12 × 4
## id variable value date
## <dbl> <chr> <dbl> <date>
## 1 1 var1 1 2018-08-01
## 2 1 var2 0.2 2018-08-02
## 3 1 var3 0.3 2018-08-03
## 4 2 var1 1.4 2018-08-01
## 5 2 var2 1.9 2018-08-02
## 6 2 var3 4.1 2018-08-03
## 7 3 var1 0.1 2018-08-01
## 8 3 var2 2.8 2018-08-02
## 9 3 var3 8.9 2018-08-03
## 10 4 var1 1.7 2018-08-01
## 11 4 var2 1.9 2018-08-02
## 12 4 var3 7.6 2018-08-03
```
You should be careful when imputing missing values, though. The method described above is called
*Last Observation Carried Forward*, and sometimes it makes sense, like here, but sometimes it doesn’t,
and using it will introduce bias into your analysis. Discussing how to handle missing values in your analysis
is outside the scope of this book, but there are many resources available. You may want to check
out the vignettes of the `{mice}` [package](https://amices.org/mice/articles/overview.html), which
lists many resources to get you started.
### 4\.4\.3 Put order in your columns with `separate()`, `unite()`, and in your rows with `separate_rows()`
Sometimes, data can be in a format that makes working with it needlessly painful. For example, you
get this:
```
survey_data_not_tidy
```
```
## # A tibble: 12 × 3
## id variable_date value
## <dbl> <chr> <dbl>
## 1 1 var1/2018-08-01 1
## 2 1 var2/2018-08-02 0.2
## 3 1 var3/2018-08-03 0.3
## 4 2 var1/2018-08-01 1.4
## 5 2 var2/2018-08-02 1.9
## 6 2 var3/2018-08-03 4.1
## 7 3 var1/2018-08-01 0.1
## 8 3 var2/2018-08-02 2.8
## 9 3 var3/2018-08-03 8.9
## 10 4 var1/2018-08-01 1.7
## 11 4 var2/2018-08-02 1.9
## 12 4 var3/2018-08-03 7.6
```
Dealing with this is simple, thanks to `separate()`:
```
survey_data_not_tidy %>%
separate(variable_date, into = c("variable", "date"), sep = "/")
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
The `variable_date` column gets separated into two columns, `variable` and `date`. One also needs
to specify the separator, in this case “/”.
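Note that `separate()` can also attempt to convert the new columns to more appropriate types with
`convert = TRUE` (`separate_rows()`, which we will meet below, has the same argument). Here both new
columns stay characters, but numeric\-looking columns would be converted:
```
survey_data_not_tidy %>%
  separate(variable_date, into = c("variable", "date"), sep = "/", convert = TRUE)
```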
`unite()` is the reverse operation, which can be useful when you are confronted with this situation:
```
survey_data2
```
```
## # A tibble: 12 × 6
## id variable year month day value
## <dbl> <chr> <chr> <chr> <chr> <dbl>
## 1 1 var1 2018 08 01 1
## 2 1 var2 2018 08 02 0.2
## 3 1 var3 2018 08 03 0.3
## 4 2 var1 2018 08 01 1.4
## 5 2 var2 2018 08 02 1.9
## 6 2 var3 2018 08 03 4.1
## 7 3 var1 2018 08 01 0.1
## 8 3 var2 2018 08 02 2.8
## 9 3 var3 2018 08 03 8.9
## 10 4 var1 2018 08 01 1.7
## 11 4 var2 2018 08 02 1.9
## 12 4 var3 2018 08 03 7.6
```
In some situations, it is better to have the date as a single column:
```
survey_data2 %>%
unite(date, year, month, day, sep = "-")
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
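By default, `unite()` removes the input columns; if you want to keep them alongside the new column,
use `remove = FALSE`:
```
survey_data2 %>%
  unite(date, year, month, day, sep = "-", remove = FALSE)
```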
Another awful situation is the following:
```
survey_data_from_hell
```
```
## id variable value
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1, var2, var3 1.4, 1.9, 4.1
## 5 3 var1, var2 0.1, 2.8
## 6 3 var3 8.9
## 7 4 var1 1.7
## 8 NA var2 1.9
## 9 4 var3 7.6
```
`separate_rows()` saves the day:
```
survey_data_from_hell %>%
separate_rows(variable, value)
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <chr>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
So to summarise… you can go from this:
```
survey_data_from_hell
```
```
## id variable value
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1, var2, var3 1.4, 1.9, 4.1
## 5 3 var1, var2 0.1, 2.8
## 6 3 var3 8.9
## 7 4 var1 1.7
## 8 NA var2 1.9
## 9 4 var3 7.6
```
to this:
```
survey_data_clean
```
```
## # A tibble: 12 × 4
## id variable date value
## <dbl> <chr> <chr> <dbl>
## 1 1 var1 2018-08-01 1
## 2 1 var2 2018-08-02 0.2
## 3 1 var3 2018-08-03 0.3
## 4 2 var1 2018-08-01 1.4
## 5 2 var2 2018-08-02 1.9
## 6 2 var3 2018-08-03 4.1
## 7 3 var1 2018-08-01 0.1
## 8 3 var2 2018-08-02 2.8
## 9 3 var3 2018-08-03 8.9
## 10 4 var1 2018-08-01 1.7
## 11 4 var2 2018-08-02 1.9
## 12 4 var3 2018-08-03 7.6
```
quite easily:
```
survey_data_from_hell %>%
separate_rows(variable, value, convert = TRUE) %>%
fill(.direction = "down", id) %>%
mutate(date = rep(full_seq(c(as.Date("2018-08-01"), as.Date("2018-08-03")), 1), 4))
```
4\.5 Working on many columns with `if_any()`, `if_all()` and `across()`
-----------------------------------------------------------------------
### 4\.5\.1 Filtering rows where several columns verify a condition
Let’s go back to the `gasoline` data from the `{Ecdat}` package.
When using `filter()`, you write conditions on one column at a time. For example, you can
filter rows where a given column equals “France”. But suppose that we have a condition that we want
to apply to many columns at once. For example, for every column that is of type
`numeric`, keep only the rows where the condition *value \> \-8* is satisfied for at least one of them.
The next lines do that:
```
gasoline %>%
filter(if_any(where(is.numeric), \(x)(`>`(x, -8))))
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
The above code uses the `if_any()` function, included in `{dplyr}`. It also uses
`where()`, which must be used for predicate functions like `is.numeric()` or `is.character()`.
You can think of `if_any()` as a function that helps you select the columns to which to apply the
condition. You can read the code above like this:
*Start with the gasoline data, then keep the rows where at least one of the numeric columns
has a value greater than \-8*
or similar. `if_any()`, `if_all()` and `across()` make operations like these very easy to achieve.
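If it helps, here is the `if_any()` call above spelled out by hand; the numeric columns are `year`,
`lgaspcar`, `lincomep`, `lrpmg` and `lcarpcap`, and the conditions are combined with `|`:
```
gasoline %>%
  filter(year > -8 | lgaspcar > -8 | lincomep > -8 | lrpmg > -8 | lcarpcap > -8)
```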
Sometimes, you’d want to filter rows based on the columns whose names end with a given letter, for instance
`"p"`. This can again be achieved using another helper, `ends_with()`, instead of `where()`:
```
gasoline %>%
filter(if_any(ends_with("p"), \(x)(`>`(x, -8))))
```
```
## # A tibble: 340 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 330 more rows
```
We already know about `ends_with()` and `starts_with()`. So the above line means “for the columns
whose names end with a ‘p’, only keep the rows where at least one of the selected columns has a value
strictly greater than `-8`”.
`if_all()` works exactly the same way, but think of the `if` in `if_all()` as having the conditions
separated by `and`, while those of `if_any()` are separated by `or`. So for example, the
code above, where `if_any()` is replaced by `if_all()`, results in a much smaller data frame:
```
gasoline %>%
filter(if_all(ends_with("p"), \(x)(`>`(x, -8))))
```
```
## # A tibble: 30 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 canada 1972 4.89 -5.44 -1.10 -7.99
## 2 canada 1973 4.90 -5.41 -1.13 -7.94
## 3 canada 1974 4.89 -5.42 -1.12 -7.90
## 4 canada 1975 4.89 -5.38 -1.19 -7.87
## 5 canada 1976 4.84 -5.36 -1.06 -7.81
## 6 canada 1977 4.81 -5.34 -1.07 -7.77
## 7 canada 1978 4.86 -5.31 -1.07 -7.79
## 8 germany 1978 3.88 -5.56 -0.628 -7.95
## 9 sweden 1975 3.97 -7.68 -2.77 -7.99
## 10 sweden 1976 3.98 -7.67 -2.82 -7.96
## # … with 20 more rows
```
because here, we only keep the rows where ALL of the columns that end with “p” are simultaneously
greater than \-8.
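By the way, the lambdas used above call the `>` operator as a regular function. If you find that
style cryptic, a plain comparison is equivalent:
```
gasoline %>%
  filter(if_all(ends_with("p"), \(x) x > -8))
```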
### 4\.5\.2 Selecting several columns at once
In a previous section we already played around a little bit with `select()` and some helpers,
`everything()`, `starts_with()` and `ends_with()`. But there are many ways that you can use
helper functions to select several columns easily:
```
gasoline %>%
select(where(is.numeric))
```
```
## # A tibble: 342 × 5
## year lgaspcar lincomep lrpmg lcarpcap
## <int> <dbl> <dbl> <dbl> <dbl>
## 1 1960 4.17 -6.47 -0.335 -9.77
## 2 1961 4.10 -6.43 -0.351 -9.61
## 3 1962 4.07 -6.41 -0.380 -9.46
## 4 1963 4.06 -6.37 -0.414 -9.34
## 5 1964 4.04 -6.32 -0.445 -9.24
## 6 1965 4.03 -6.29 -0.497 -9.12
## 7 1966 4.05 -6.25 -0.467 -9.02
## 8 1967 4.05 -6.23 -0.506 -8.93
## 9 1968 4.05 -6.21 -0.522 -8.85
## 10 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Selecting by column position is also possible:
```
gasoline %>%
select(c(1, 2, 5))
```
```
## # A tibble: 342 × 3
## country year lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
As is selecting columns starting or ending with a certain string of characters, as discussed previously:
```
gasoline %>%
select(starts_with("l"))
```
```
## # A tibble: 342 × 4
## lgaspcar lincomep lrpmg lcarpcap
## <dbl> <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -0.335 -9.77
## 2 4.10 -6.43 -0.351 -9.61
## 3 4.07 -6.41 -0.380 -9.46
## 4 4.06 -6.37 -0.414 -9.34
## 5 4.04 -6.32 -0.445 -9.24
## 6 4.03 -6.29 -0.497 -9.12
## 7 4.05 -6.25 -0.467 -9.02
## 8 4.05 -6.23 -0.506 -8.93
## 9 4.05 -6.21 -0.522 -8.85
## 10 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Another very neat trick is selecting columns that may or may not exist in your data frame. For these
quick examples, let’s use the `mtcars` dataset:
```
sort(colnames(mtcars))
```
```
## [1] "am" "carb" "cyl" "disp" "drat" "gear" "hp" "mpg" "qsec" "vs"
## [11] "wt"
```
Let’s create a vector with some column names:
```
cols_to_select <- c("mpg", "cyl", "am", "nonsense")
```
The following selects the columns that exist
in the data frame and silently ignores the column that does not exist:
```
mtcars %>%
select(any_of(cols_to_select))
```
```
## mpg cyl am
## Mazda RX4 21.0 6 1
## Mazda RX4 Wag 21.0 6 1
## Datsun 710 22.8 4 1
## Hornet 4 Drive 21.4 6 0
## Hornet Sportabout 18.7 8 0
## Valiant 18.1 6 0
## Duster 360 14.3 8 0
## Merc 240D 24.4 4 0
## Merc 230 22.8 4 0
## Merc 280 19.2 6 0
## Merc 280C 17.8 6 0
## Merc 450SE 16.4 8 0
## Merc 450SL 17.3 8 0
## Merc 450SLC 15.2 8 0
## Cadillac Fleetwood 10.4 8 0
## Lincoln Continental 10.4 8 0
## Chrysler Imperial 14.7 8 0
## Fiat 128 32.4 4 1
## Honda Civic 30.4 4 1
## Toyota Corolla 33.9 4 1
## Toyota Corona 21.5 4 0
## Dodge Challenger 15.5 8 0
## AMC Javelin 15.2 8 0
## Camaro Z28 13.3 8 0
## Pontiac Firebird 19.2 8 0
## Fiat X1-9 27.3 4 1
## Porsche 914-2 26.0 4 1
## Lotus Europa 30.4 4 1
## Ford Pantera L 15.8 8 1
## Ferrari Dino 19.7 6 1
## Maserati Bora 15.0 8 1
## Volvo 142E 21.4 4 1
```
and finally, if you want it to fail, don’t use any helper:
```
mtcars %>%
select(cols_to_select)
```
```
Error: Can't subset columns that don't exist.
The column `nonsense` doesn't exist.
```
or use `all_of()`:
```
mtcars %>%
select(all_of(cols_to_select))
```
```
✖ Column `nonsense` doesn't exist.
```
Bulk\-renaming can be achieved using `rename_with()`:
```
gasoline %>%
  rename_with(toupper, where(is.numeric))
```
```
## # A tibble: 342 × 6
## country YEAR LGASPCAR LINCOMEP LRPMG LCARPCAP
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
You can also pass anonymous functions to `rename_with()`:
```
gasoline %>%
rename_with(\(x)(paste0("new_", x)))
```
```
## # A tibble: 342 × 6
## new_country new_year new_lgaspcar new_lincomep new_lrpmg new_lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
The reason I’m talking about renaming in a section about selecting is that you can
also rename with `select()`:
```
gasoline %>%
select(YEAR = year)
```
```
## # A tibble: 342 × 1
## YEAR
## <int>
## 1 1960
## 2 1961
## 3 1962
## 4 1963
## 5 1964
## 6 1965
## 7 1966
## 8 1967
## 9 1968
## 10 1969
## # … with 332 more rows
```
but of course here, you only keep that one column, and you can’t rename with a function.
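If you want to rename a column while keeping all the others, use `rename()` instead:
```
gasoline %>%
  rename(YEAR = year)
```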
### 4\.5\.3 Summarising with `across()`
`across()` is used for summarising data. It allows aggregations… *across* several columns. It
is especially useful with `group_by()`. To illustrate how `group_by()` works with `across()` I have
to first modify the `gasoline` data a little bit. As you can see below, the `year` column is of
type `integer`:
```
gasoline %>%
lapply(typeof)
```
```
## $country
## [1] "character"
##
## $year
## [1] "integer"
##
## $lgaspcar
## [1] "double"
##
## $lincomep
## [1] "double"
##
## $lrpmg
## [1] "double"
##
## $lcarpcap
## [1] "double"
```
(we’ll discuss `lapply()` in a later chapter, but just to give you a little taste, `lapply()` applies
a function to each element of a list or of a data frame, in this case, `lapply()` applied the `typeof()`
function to each column of the `gasoline` data set, returning the type of each column)
Let’s change that to character:
```
gasoline <- gasoline %>%
mutate(year = as.character(year),
country = as.character(country))
```
This now allows me to group by column type, for instance:
```
gasoline %>%
group_by(across(where(is.character))) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
This is faster than having to write:
```
gasoline %>%
group_by(country, year) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
You may think that having to write the names of two variables is not a huge deal, which is true.
But imagine that you have dozens of character columns that you want to group by.
With `across()` and the helper functions, it doesn’t matter if the data frame has 2 columns
you need to group by or 2000\. All that matters is that you can find some commonalities between
all these columns that make it easy to select them. It can be their type, as we have seen
before, or their label:
```
gasoline %>%
group_by(across(contains("y"))) %>%
summarise(mean_licomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_licomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
but it’s also possible to group by position:
```
gasoline %>%
group_by(across(c(1, 2))) %>%
summarise(mean_licomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_licomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
Using a sequence is also possible:
```
gasoline %>%
group_by(across(seq(1:2))) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
but be careful, selecting by position is dangerous. If the position of columns changes, your code
will fail. Selecting by type or label is much more robust, especially by label, since types can
change as well (for example, a date column can easily be exported as a character column).
### 4\.5\.4 `summarise()` across many columns
Summarising across many columns is incredibly useful, and in my opinion one of the best
arguments in favour of switching to a `{tidyverse}`\-only workflow:
```
gasoline %>%
group_by(country) %>%
summarise(across(starts_with("l"), mean))
```
```
## # A tibble: 18 × 5
## country lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 -6.12 -0.486 -8.85
## 2 belgium 3.92 -5.85 -0.326 -8.63
## 3 canada 4.86 -5.58 -1.05 -8.08
## 4 denmark 4.19 -5.76 -0.358 -8.58
## 5 france 3.82 -5.87 -0.253 -8.45
## 6 germany 3.89 -5.85 -0.517 -8.51
## 7 greece 4.88 -6.61 -0.0339 -10.8
## 8 ireland 4.23 -6.44 -0.348 -9.04
## 9 italy 3.73 -6.35 -0.152 -8.83
## 10 japan 4.70 -6.25 -0.287 -9.95
## 11 netherla 4.08 -5.92 -0.370 -8.82
## 12 norway 4.11 -5.75 -0.278 -8.77
## 13 spain 4.06 -5.63 0.739 -9.90
## 14 sweden 4.01 -7.82 -2.71 -8.25
## 15 switzerl 4.24 -5.93 -0.902 -8.54
## 16 turkey 5.77 -7.34 -0.422 -12.5
## 17 u.k. 3.98 -6.02 -0.459 -8.55
## 18 u.s.a. 4.82 -5.45 -1.21 -7.78
```
But where `summarise()` and `across()` really shine is when you want to apply several functions
to many columns at once:
```
gasoline %>%
group_by(country) %>%
summarise(across(starts_with("l"), tibble::lst(mean, sd, max, min), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² max_l…³ min_l…⁴ mean_…⁵ sd_li…⁶ max_l…⁷ min_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92 -6.12 0.235 -5.76 -6.47
## 2 belgium 3.92 0.103 4.16 3.82 -5.85 0.227 -5.53 -6.22
## 3 canada 4.86 0.0262 4.90 4.81 -5.58 0.193 -5.31 -5.89
## 4 denmark 4.19 0.158 4.50 4.00 -5.76 0.176 -5.48 -6.06
## 5 france 3.82 0.0499 3.91 3.75 -5.87 0.241 -5.53 -6.26
## 6 germany 3.89 0.0239 3.93 3.85 -5.85 0.193 -5.56 -6.16
## 7 greece 4.88 0.255 5.38 4.48 -6.61 0.331 -6.15 -7.16
## 8 ireland 4.23 0.0437 4.33 4.16 -6.44 0.162 -6.19 -6.72
## 9 italy 3.73 0.220 4.05 3.38 -6.35 0.217 -6.08 -6.73
## 10 japan 4.70 0.684 6.00 3.95 -6.25 0.425 -5.71 -6.99
## 11 netherla 4.08 0.286 4.65 3.71 -5.92 0.193 -5.66 -6.22
## 12 norway 4.11 0.123 4.44 3.96 -5.75 0.201 -5.42 -6.09
## 13 spain 4.06 0.317 4.75 3.62 -5.63 0.278 -5.29 -6.17
## 14 sweden 4.01 0.0364 4.07 3.91 -7.82 0.126 -7.67 -8.07
## 15 switzerl 4.24 0.102 4.44 4.05 -5.93 0.124 -5.75 -6.16
## 16 turkey 5.77 0.329 6.16 5.14 -7.34 0.331 -6.89 -7.84
## 17 u.k. 3.98 0.0479 4.10 3.91 -6.02 0.107 -5.84 -6.19
## 18 u.s.a. 4.82 0.0219 4.86 4.79 -5.45 0.148 -5.22 -5.70
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, max_lrpmg <dbl>,
## # min_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # max_lcarpcap <dbl>, min_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³max_lgaspcar, ⁴min_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷max_lincomep, ⁸min_lincomep
```
Here, I first grouped by `country`, then applied the `mean()`, `sd()`, `max()` and
`min()` functions to every column starting with the character `"l"`. `tibble::lst()` allows you to
create a list just like `list()` does, but it names its arguments automatically. So the `mean()` function
gets the name `"mean"`, and so on. Finally, I use the `.names =` argument to create the template for
the new column names: `{fn}_{col}` creates new column names of the form *function name \_ column name*.
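You can see the automatic naming of `tibble::lst()` for yourself:
```
tibble::lst(mean, sd)
# returns a list with elements named "mean" and "sd",
# whereas list(mean, sd) would return an unnamed list
```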
As mentioned before, `across()` works with other helper functions:
```
gasoline %>%
group_by(country) %>%
summarise(across(contains("car"), tibble::lst(mean, sd, max, min), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 9
## country mean_lgasp…¹ sd_lg…² max_l…³ min_l…⁴ mean_…⁵ sd_lc…⁶ max_l…⁷ min_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92 -8.85 0.473 -8.21 -9.77
## 2 belgium 3.92 0.103 4.16 3.82 -8.63 0.417 -8.10 -9.41
## 3 canada 4.86 0.0262 4.90 4.81 -8.08 0.195 -7.77 -8.38
## 4 denmark 4.19 0.158 4.50 4.00 -8.58 0.349 -8.20 -9.33
## 5 france 3.82 0.0499 3.91 3.75 -8.45 0.344 -8.01 -9.15
## 6 germany 3.89 0.0239 3.93 3.85 -8.51 0.406 -7.95 -9.34
## 7 greece 4.88 0.255 5.38 4.48 -10.8 0.839 -9.57 -12.2
## 8 ireland 4.23 0.0437 4.33 4.16 -9.04 0.345 -8.55 -9.70
## 9 italy 3.73 0.220 4.05 3.38 -8.83 0.639 -8.11 -10.1
## 10 japan 4.70 0.684 6.00 3.95 -9.95 1.20 -8.59 -12.2
## 11 netherla 4.08 0.286 4.65 3.71 -8.82 0.617 -8.16 -10.0
## 12 norway 4.11 0.123 4.44 3.96 -8.77 0.438 -8.17 -9.68
## 13 spain 4.06 0.317 4.75 3.62 -9.90 0.960 -8.63 -11.6
## 14 sweden 4.01 0.0364 4.07 3.91 -8.25 0.242 -7.96 -8.74
## 15 switzerl 4.24 0.102 4.44 4.05 -8.54 0.378 -8.03 -9.26
## 16 turkey 5.77 0.329 6.16 5.14 -12.5 0.751 -11.2 -13.5
## 17 u.k. 3.98 0.0479 4.10 3.91 -8.55 0.281 -8.26 -9.12
## 18 u.s.a. 4.82 0.0219 4.86 4.79 -7.78 0.162 -7.54 -8.02
## # … with abbreviated variable names ¹mean_lgaspcar, ²sd_lgaspcar,
## # ³max_lgaspcar, ⁴min_lgaspcar, ⁵mean_lcarpcap, ⁶sd_lcarpcap, ⁷max_lcarpcap,
## # ⁸min_lcarpcap
```
This is very likely the quickest, most elegant way to summarise that many columns.
There’s also a way to *summarise where*:
```
gasoline %>%
group_by(country) %>%
summarise(across(where(is.numeric), tibble::lst(mean, sd, min, max), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² min_l…³ max_l…⁴ mean_…⁵ sd_li…⁶ min_l…⁷ max_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 3.92 4.20 -6.12 0.235 -6.47 -5.76
## 2 belgium 3.92 0.103 3.82 4.16 -5.85 0.227 -6.22 -5.53
## 3 canada 4.86 0.0262 4.81 4.90 -5.58 0.193 -5.89 -5.31
## 4 denmark 4.19 0.158 4.00 4.50 -5.76 0.176 -6.06 -5.48
## 5 france 3.82 0.0499 3.75 3.91 -5.87 0.241 -6.26 -5.53
## 6 germany 3.89 0.0239 3.85 3.93 -5.85 0.193 -6.16 -5.56
## 7 greece 4.88 0.255 4.48 5.38 -6.61 0.331 -7.16 -6.15
## 8 ireland 4.23 0.0437 4.16 4.33 -6.44 0.162 -6.72 -6.19
## 9 italy 3.73 0.220 3.38 4.05 -6.35 0.217 -6.73 -6.08
## 10 japan 4.70 0.684 3.95 6.00 -6.25 0.425 -6.99 -5.71
## 11 netherla 4.08 0.286 3.71 4.65 -5.92 0.193 -6.22 -5.66
## 12 norway 4.11 0.123 3.96 4.44 -5.75 0.201 -6.09 -5.42
## 13 spain 4.06 0.317 3.62 4.75 -5.63 0.278 -6.17 -5.29
## 14 sweden 4.01 0.0364 3.91 4.07 -7.82 0.126 -8.07 -7.67
## 15 switzerl 4.24 0.102 4.05 4.44 -5.93 0.124 -6.16 -5.75
## 16 turkey 5.77 0.329 5.14 6.16 -7.34 0.331 -7.84 -6.89
## 17 u.k. 3.98 0.0479 3.91 4.10 -6.02 0.107 -6.19 -5.84
## 18 u.s.a. 4.82 0.0219 4.79 4.86 -5.45 0.148 -5.70 -5.22
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, min_lrpmg <dbl>,
## # max_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # min_lcarpcap <dbl>, max_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³min_lgaspcar, ⁴max_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷min_lincomep, ⁸max_lincomep
```
This allows you to summarise every column that contains real numbers. The difference between
`is.double()` and `is.numeric()` is that `is.numeric()` returns `TRUE` for integers too, whereas
`is.double()` returns `TRUE` for real numbers only (integers are real numbers too, but you know
what I mean).
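A quick check in the console makes the difference obvious:
```
is.numeric(1L) # TRUE: integers are numeric
is.double(1L)  # FALSE: but they are not doubles
is.double(1.5) # TRUE
```
It is also possible to summarise every column at once: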
```
gasoline %>%
select(-year) %>%
group_by(country) %>%
summarise(across(everything(), tibble::lst(mean, sd, min, max), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² min_l…³ max_l…⁴ mean_…⁵ sd_li…⁶ min_l…⁷ max_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 3.92 4.20 -6.12 0.235 -6.47 -5.76
## 2 belgium 3.92 0.103 3.82 4.16 -5.85 0.227 -6.22 -5.53
## 3 canada 4.86 0.0262 4.81 4.90 -5.58 0.193 -5.89 -5.31
## 4 denmark 4.19 0.158 4.00 4.50 -5.76 0.176 -6.06 -5.48
## 5 france 3.82 0.0499 3.75 3.91 -5.87 0.241 -6.26 -5.53
## 6 germany 3.89 0.0239 3.85 3.93 -5.85 0.193 -6.16 -5.56
## 7 greece 4.88 0.255 4.48 5.38 -6.61 0.331 -7.16 -6.15
## 8 ireland 4.23 0.0437 4.16 4.33 -6.44 0.162 -6.72 -6.19
## 9 italy 3.73 0.220 3.38 4.05 -6.35 0.217 -6.73 -6.08
## 10 japan 4.70 0.684 3.95 6.00 -6.25 0.425 -6.99 -5.71
## 11 netherla 4.08 0.286 3.71 4.65 -5.92 0.193 -6.22 -5.66
## 12 norway 4.11 0.123 3.96 4.44 -5.75 0.201 -6.09 -5.42
## 13 spain 4.06 0.317 3.62 4.75 -5.63 0.278 -6.17 -5.29
## 14 sweden 4.01 0.0364 3.91 4.07 -7.82 0.126 -8.07 -7.67
## 15 switzerl 4.24 0.102 4.05 4.44 -5.93 0.124 -6.16 -5.75
## 16 turkey 5.77 0.329 5.14 6.16 -7.34 0.331 -7.84 -6.89
## 17 u.k. 3.98 0.0479 3.91 4.10 -6.02 0.107 -6.19 -5.84
## 18 u.s.a. 4.82 0.0219 4.79 4.86 -5.45 0.148 -5.70 -5.22
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, min_lrpmg <dbl>,
## # max_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # min_lcarpcap <dbl>, max_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³min_lgaspcar, ⁴max_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷min_lincomep, ⁸max_lincomep
```
I removed the `year` variable because it’s not a variable for which we want to have descriptive
statistics.
### 4\.5\.1 Filtering rows where several columns verify a condition
Let’s go back to the `gasoline` data from the `{Ecdat}` package.
When using `filter()`, it is only possible to filter one column at a time. For example, you can
only filter rows where a column equals “France” for instance. But suppose that we have a condition that we want
to use to filter out a lot of columns at once. For example, for every column that is of type
`numeric`, keep only the lines where the condition *value \> \-8* is satisfied. The next line does
that:
```
gasoline %>%
filter(if_any(where(is.numeric), \(x)(`>`(x, -8))))
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
The above code is using the `if_any()` function, included in `{dplyr}`. It also uses
`where()`, which must be used for predicate functions like `is.numeric()`, or `is.character()`, etc.
You can think of `if_any()` as a function that helps you select the columns to which to apply the
function. You can read the code above like this:
*Start with the gasoline data, then filter rows that are greater than \-8 across the columns
which are numeric*
or similar. `if_any()`, `if_all()` and `across()` makes operations like these very easy to achieve.
Sometimes, you’d want to filter rows from columns that end their labels with a letter, for instance
`"p"`. This can again be achieved using another helper, `ends_with()`, instead of `where()`:
```
gasoline %>%
filter(if_any(ends_with("p"), \(x)(`>`(x, -8))))
```
```
## # A tibble: 340 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 330 more rows
```
We already know about `ends_with()` and `starts_with()`. So the above line means “for the columns
whose name end with a ‘p’ only keep the lines where, for all the selected columns, the values are
strictly superior to `-8`”.
`if_all()` works exactly the same way, but think of the `if` in `if_all()` as having the conditions
separated by `and` while the `if` of `if_any()` being separated by `or`. So for example, the
code above, where `if_any()` is replaced by `if_all()`, results in a much smaller data frame:
```
gasoline %>%
filter(if_all(ends_with("p"), \(x)(`>`(x, -8))))
```
```
## # A tibble: 30 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 canada 1972 4.89 -5.44 -1.10 -7.99
## 2 canada 1973 4.90 -5.41 -1.13 -7.94
## 3 canada 1974 4.89 -5.42 -1.12 -7.90
## 4 canada 1975 4.89 -5.38 -1.19 -7.87
## 5 canada 1976 4.84 -5.36 -1.06 -7.81
## 6 canada 1977 4.81 -5.34 -1.07 -7.77
## 7 canada 1978 4.86 -5.31 -1.07 -7.79
## 8 germany 1978 3.88 -5.56 -0.628 -7.95
## 9 sweden 1975 3.97 -7.68 -2.77 -7.99
## 10 sweden 1976 3.98 -7.67 -2.82 -7.96
## # … with 20 more rows
```
because here, we only keep rows for columns that end with “p” where ALL of them are simultaneously
greater than 8\.
### 4\.5\.2 Selecting several columns at once
In a previous section we already played around a little bit with `select()` and some helpers,
`everything()`, `starts_with()` and `ends_with()`. But there are many ways that you can use
helper functions to select several columns easily:
```
gasoline %>%
select(where(is.numeric))
```
```
## # A tibble: 342 × 5
## year lgaspcar lincomep lrpmg lcarpcap
## <int> <dbl> <dbl> <dbl> <dbl>
## 1 1960 4.17 -6.47 -0.335 -9.77
## 2 1961 4.10 -6.43 -0.351 -9.61
## 3 1962 4.07 -6.41 -0.380 -9.46
## 4 1963 4.06 -6.37 -0.414 -9.34
## 5 1964 4.04 -6.32 -0.445 -9.24
## 6 1965 4.03 -6.29 -0.497 -9.12
## 7 1966 4.05 -6.25 -0.467 -9.02
## 8 1967 4.05 -6.23 -0.506 -8.93
## 9 1968 4.05 -6.21 -0.522 -8.85
## 10 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Selecting by column position is also possible:
```
gasoline %>%
select(c(1, 2, 5))
```
```
## # A tibble: 342 × 3
## country year lrpmg
## <chr> <int> <dbl>
## 1 austria 1960 -0.335
## 2 austria 1961 -0.351
## 3 austria 1962 -0.380
## 4 austria 1963 -0.414
## 5 austria 1964 -0.445
## 6 austria 1965 -0.497
## 7 austria 1966 -0.467
## 8 austria 1967 -0.506
## 9 austria 1968 -0.522
## 10 austria 1969 -0.559
## # … with 332 more rows
```
As is selecting columns starting or ending with a certain string of characters, as discussed previously:
```
gasoline %>%
select(starts_with("l"))
```
```
## # A tibble: 342 × 4
## lgaspcar lincomep lrpmg lcarpcap
## <dbl> <dbl> <dbl> <dbl>
## 1 4.17 -6.47 -0.335 -9.77
## 2 4.10 -6.43 -0.351 -9.61
## 3 4.07 -6.41 -0.380 -9.46
## 4 4.06 -6.37 -0.414 -9.34
## 5 4.04 -6.32 -0.445 -9.24
## 6 4.03 -6.29 -0.497 -9.12
## 7 4.05 -6.25 -0.467 -9.02
## 8 4.05 -6.23 -0.506 -8.93
## 9 4.05 -6.21 -0.522 -8.85
## 10 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
Another very neat trick is selecting columns that may or may not exist in your data frame. For this quick examples
let’s use the `mtcars` dataset:
```
sort(colnames(mtcars))
```
```
## [1] "am" "carb" "cyl" "disp" "drat" "gear" "hp" "mpg" "qsec" "vs"
## [11] "wt"
```
Let’s create a vector with some column names:
```
cols_to_select <- c("mpg", "cyl", "am", "nonsense")
```
The following selects the columns that exist
in the data frame but shows a warning for the column that does not exist:
```
mtcars %>%
select(any_of(cols_to_select))
```
```
## mpg cyl am
## Mazda RX4 21.0 6 1
## Mazda RX4 Wag 21.0 6 1
## Datsun 710 22.8 4 1
## Hornet 4 Drive 21.4 6 0
## Hornet Sportabout 18.7 8 0
## Valiant 18.1 6 0
## Duster 360 14.3 8 0
## Merc 240D 24.4 4 0
## Merc 230 22.8 4 0
## Merc 280 19.2 6 0
## Merc 280C 17.8 6 0
## Merc 450SE 16.4 8 0
## Merc 450SL 17.3 8 0
## Merc 450SLC 15.2 8 0
## Cadillac Fleetwood 10.4 8 0
## Lincoln Continental 10.4 8 0
## Chrysler Imperial 14.7 8 0
## Fiat 128 32.4 4 1
## Honda Civic 30.4 4 1
## Toyota Corolla 33.9 4 1
## Toyota Corona 21.5 4 0
## Dodge Challenger 15.5 8 0
## AMC Javelin 15.2 8 0
## Camaro Z28 13.3 8 0
## Pontiac Firebird 19.2 8 0
## Fiat X1-9 27.3 4 1
## Porsche 914-2 26.0 4 1
## Lotus Europa 30.4 4 1
## Ford Pantera L 15.8 8 1
## Ferrari Dino 19.7 6 1
## Maserati Bora 15.0 8 1
## Volvo 142E 21.4 4 1
```
and finally, if you want it to fail, don’t use any helper:
```
mtcars %>%
select(cols_to_select)
```
```
Error: Can't subset columns that don't exist.
The column `nonsense` doesn't exist.
```
or use `all_of()`:
```
mtcars %>%
select(all_of(cols_to_select))
```
```
✖ Column `nonsense` doesn't exist.
```
Bulk\-renaming can be achieved using `rename_with()`
```
gasoline %>%
rename_with(toupper, is.numeric)
```
```
## # A tibble: 342 × 6
## country YEAR LGASPCAR LINCOMEP LRPMG LCARPCAP
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
you can also pass functions to `rename_with()`:
```
gasoline %>%
rename_with(\(x)(paste0("new_", x)))
```
```
## # A tibble: 342 × 6
## new_country new_year new_lgaspcar new_lincomep new_lrpmg new_lcarpcap
## <chr> <int> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 austria 1964 4.04 -6.32 -0.445 -9.24
## 6 austria 1965 4.03 -6.29 -0.497 -9.12
## 7 austria 1966 4.05 -6.25 -0.467 -9.02
## 8 austria 1967 4.05 -6.23 -0.506 -8.93
## 9 austria 1968 4.05 -6.21 -0.522 -8.85
## 10 austria 1969 4.05 -6.15 -0.559 -8.79
## # … with 332 more rows
```
The reason I’m talking about renaming in a section about selecting is that you can
also rename with `select()`:
```
gasoline %>%
select(YEAR = year)
```
```
## # A tibble: 342 × 1
## YEAR
## <int>
## 1 1960
## 2 1961
## 3 1962
## 4 1963
## 5 1964
## 6 1965
## 7 1966
## 8 1967
## 9 1968
## 10 1969
## # … with 332 more rows
```
but of course here, you only keep that one column, and you can’t rename with a function.
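If you want to rename a column but keep all the other columns, `rename()` is the function you need. A quick sketch:
```
gasoline %>%
  rename(YEAR = year)
```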
### 4\.5\.3 Summarising with `across()`
`across()` is used for summarising data. It allows you to compute aggregations… *across* several columns. It
is especially useful with `group_by()`. To illustrate how `group_by()` works with `across()` I have
to first modify the `gasoline` data a little bit. As you can see below, the `year` column is of
type `integer`:
```
gasoline %>%
lapply(typeof)
```
```
## $country
## [1] "character"
##
## $year
## [1] "integer"
##
## $lgaspcar
## [1] "double"
##
## $lincomep
## [1] "double"
##
## $lrpmg
## [1] "double"
##
## $lcarpcap
## [1] "double"
```
(we’ll discuss `lapply()` in a later chapter, but just to give you a little taste, `lapply()` applies
a function to each element of a list or of a data frame, in this case, `lapply()` applied the `typeof()`
function to each column of the `gasoline` data set, returning the type of each column)
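If you prefer a flat result rather than a list, here is a quick sketch of the same check using `purrr::map_chr()` (the `{purrr}` package is loaded with the `{tidyverse}`), which returns a named character vector:
```
# same as lapply(gasoline, typeof), but flattened to a named character vector
gasoline %>%
  purrr::map_chr(typeof)
```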
Let’s change that to character:
```
gasoline <- gasoline %>%
mutate(year = as.character(year),
country = as.character(country))
```
This now allows me to group by type of columns for instance:
```
gasoline %>%
group_by(across(where(is.character))) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
This is faster than having to write:
```
gasoline %>%
group_by(country, year) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
You may think that having to write the names of two variables is not a huge deal, which is true.
But imagine that you have dozens of character columns that you want to group by.
With `across()` and the helper functions, it doesn’t matter if the data frame has 2 columns
you need to group by or 2000\. All that matters is that you can find some commonalities between
all these columns that make it easy to select them. It can be their type, as we have seen
before, or their label:
```
gasoline %>%
group_by(across(contains("y"))) %>%
summarise(mean_licomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_licomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
but it’s also possible to `group_by()` position:
```
gasoline %>%
group_by(across(c(1, 2))) %>%
summarise(mean_licomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_licomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
Using a sequence is also possible:
```
gasoline %>%
group_by(across(seq(1, 2))) %>%
summarise(mean_lincomep = mean(lincomep))
```
```
## `summarise()` has grouped output by 'country'. You can override using the
## `.groups` argument.
```
```
## # A tibble: 342 × 3
## # Groups: country [18]
## country year mean_lincomep
## <chr> <chr> <dbl>
## 1 austria 1960 -6.47
## 2 austria 1961 -6.43
## 3 austria 1962 -6.41
## 4 austria 1963 -6.37
## 5 austria 1964 -6.32
## 6 austria 1965 -6.29
## 7 austria 1966 -6.25
## 8 austria 1967 -6.23
## 9 austria 1968 -6.21
## 10 austria 1969 -6.15
## # … with 332 more rows
```
but be careful, selecting by position is dangerous. If the position of columns changes, your code
will fail. Selecting by type or label is much more robust, especially by label, since types can
change as well (for example a date column can easily be exported as a character column, etc).
### 4\.5\.4 `summarise()` across many columns
Summarising across many columns is incredibly useful and in my opinion one of the best
arguments in favour of switching to a `{tidyverse}`\-only workflow:
```
gasoline %>%
group_by(country) %>%
summarise(across(starts_with("l"), mean))
```
```
## # A tibble: 18 × 5
## country lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 -6.12 -0.486 -8.85
## 2 belgium 3.92 -5.85 -0.326 -8.63
## 3 canada 4.86 -5.58 -1.05 -8.08
## 4 denmark 4.19 -5.76 -0.358 -8.58
## 5 france 3.82 -5.87 -0.253 -8.45
## 6 germany 3.89 -5.85 -0.517 -8.51
## 7 greece 4.88 -6.61 -0.0339 -10.8
## 8 ireland 4.23 -6.44 -0.348 -9.04
## 9 italy 3.73 -6.35 -0.152 -8.83
## 10 japan 4.70 -6.25 -0.287 -9.95
## 11 netherla 4.08 -5.92 -0.370 -8.82
## 12 norway 4.11 -5.75 -0.278 -8.77
## 13 spain 4.06 -5.63 0.739 -9.90
## 14 sweden 4.01 -7.82 -2.71 -8.25
## 15 switzerl 4.24 -5.93 -0.902 -8.54
## 16 turkey 5.77 -7.34 -0.422 -12.5
## 17 u.k. 3.98 -6.02 -0.459 -8.55
## 18 u.s.a. 4.82 -5.45 -1.21 -7.78
```
But where `summarise()` and `across()` really shine is when you want to apply several functions
to many columns at once:
```
gasoline %>%
group_by(country) %>%
summarise(across(starts_with("l"), tibble::lst(mean, sd, max, min), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² max_l…³ min_l…⁴ mean_…⁵ sd_li…⁶ max_l…⁷ min_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92 -6.12 0.235 -5.76 -6.47
## 2 belgium 3.92 0.103 4.16 3.82 -5.85 0.227 -5.53 -6.22
## 3 canada 4.86 0.0262 4.90 4.81 -5.58 0.193 -5.31 -5.89
## 4 denmark 4.19 0.158 4.50 4.00 -5.76 0.176 -5.48 -6.06
## 5 france 3.82 0.0499 3.91 3.75 -5.87 0.241 -5.53 -6.26
## 6 germany 3.89 0.0239 3.93 3.85 -5.85 0.193 -5.56 -6.16
## 7 greece 4.88 0.255 5.38 4.48 -6.61 0.331 -6.15 -7.16
## 8 ireland 4.23 0.0437 4.33 4.16 -6.44 0.162 -6.19 -6.72
## 9 italy 3.73 0.220 4.05 3.38 -6.35 0.217 -6.08 -6.73
## 10 japan 4.70 0.684 6.00 3.95 -6.25 0.425 -5.71 -6.99
## 11 netherla 4.08 0.286 4.65 3.71 -5.92 0.193 -5.66 -6.22
## 12 norway 4.11 0.123 4.44 3.96 -5.75 0.201 -5.42 -6.09
## 13 spain 4.06 0.317 4.75 3.62 -5.63 0.278 -5.29 -6.17
## 14 sweden 4.01 0.0364 4.07 3.91 -7.82 0.126 -7.67 -8.07
## 15 switzerl 4.24 0.102 4.44 4.05 -5.93 0.124 -5.75 -6.16
## 16 turkey 5.77 0.329 6.16 5.14 -7.34 0.331 -6.89 -7.84
## 17 u.k. 3.98 0.0479 4.10 3.91 -6.02 0.107 -5.84 -6.19
## 18 u.s.a. 4.82 0.0219 4.86 4.79 -5.45 0.148 -5.22 -5.70
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, max_lrpmg <dbl>,
## # min_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # max_lcarpcap <dbl>, min_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³max_lgaspcar, ⁴min_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷max_lincomep, ⁸min_lincomep
```
Here, I first started by grouping by `country`, then I applied the `mean()`, `sd()`, `max()` and
`min()` functions to every column starting with the character `"l"`. `tibble::lst()` allows you to
create a list just like with `list()` but names its arguments automatically. So the `mean()` function
gets the name `"mean"`, and so on. Finally, I use the `.names =` argument to create the template for
the new column names. `{fn}_{col}` creates new column names of the form *function name \_ column name*.
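If the functions need extra arguments, such as `na.rm = TRUE`, a minimal sketch is to wrap them in anonymous functions inside `tibble::lst()`; when the list elements are named like this, `across()` falls back on its default naming template (*column name \_ function name*):
```
gasoline %>%
  group_by(country) %>%
  summarise(across(starts_with("l"),
                   tibble::lst(mean = \(x) mean(x, na.rm = TRUE),
                               sd = \(x) sd(x, na.rm = TRUE))))
```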
As mentioned before, `across()` works with other helper functions:
```
gasoline %>%
group_by(country) %>%
summarise(across(contains("car"), tibble::lst(mean, sd, max, min), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 9
## country mean_lgasp…¹ sd_lg…² max_l…³ min_l…⁴ mean_…⁵ sd_lc…⁶ max_l…⁷ min_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 4.20 3.92 -8.85 0.473 -8.21 -9.77
## 2 belgium 3.92 0.103 4.16 3.82 -8.63 0.417 -8.10 -9.41
## 3 canada 4.86 0.0262 4.90 4.81 -8.08 0.195 -7.77 -8.38
## 4 denmark 4.19 0.158 4.50 4.00 -8.58 0.349 -8.20 -9.33
## 5 france 3.82 0.0499 3.91 3.75 -8.45 0.344 -8.01 -9.15
## 6 germany 3.89 0.0239 3.93 3.85 -8.51 0.406 -7.95 -9.34
## 7 greece 4.88 0.255 5.38 4.48 -10.8 0.839 -9.57 -12.2
## 8 ireland 4.23 0.0437 4.33 4.16 -9.04 0.345 -8.55 -9.70
## 9 italy 3.73 0.220 4.05 3.38 -8.83 0.639 -8.11 -10.1
## 10 japan 4.70 0.684 6.00 3.95 -9.95 1.20 -8.59 -12.2
## 11 netherla 4.08 0.286 4.65 3.71 -8.82 0.617 -8.16 -10.0
## 12 norway 4.11 0.123 4.44 3.96 -8.77 0.438 -8.17 -9.68
## 13 spain 4.06 0.317 4.75 3.62 -9.90 0.960 -8.63 -11.6
## 14 sweden 4.01 0.0364 4.07 3.91 -8.25 0.242 -7.96 -8.74
## 15 switzerl 4.24 0.102 4.44 4.05 -8.54 0.378 -8.03 -9.26
## 16 turkey 5.77 0.329 6.16 5.14 -12.5 0.751 -11.2 -13.5
## 17 u.k. 3.98 0.0479 4.10 3.91 -8.55 0.281 -8.26 -9.12
## 18 u.s.a. 4.82 0.0219 4.86 4.79 -7.78 0.162 -7.54 -8.02
## # … with abbreviated variable names ¹mean_lgaspcar, ²sd_lgaspcar,
## # ³max_lgaspcar, ⁴min_lgaspcar, ⁵mean_lcarpcap, ⁶sd_lcarpcap, ⁷max_lcarpcap,
## # ⁸min_lcarpcap
```
This is very likely the quickest, most elegant way to summarise that many columns.
There’s also a way to *summarise where*:
```
gasoline %>%
group_by(country) %>%
summarise(across(where(is.numeric), tibble::lst(mean, sd, min, max), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² min_l…³ max_l…⁴ mean_…⁵ sd_li…⁶ min_l…⁷ max_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 3.92 4.20 -6.12 0.235 -6.47 -5.76
## 2 belgium 3.92 0.103 3.82 4.16 -5.85 0.227 -6.22 -5.53
## 3 canada 4.86 0.0262 4.81 4.90 -5.58 0.193 -5.89 -5.31
## 4 denmark 4.19 0.158 4.00 4.50 -5.76 0.176 -6.06 -5.48
## 5 france 3.82 0.0499 3.75 3.91 -5.87 0.241 -6.26 -5.53
## 6 germany 3.89 0.0239 3.85 3.93 -5.85 0.193 -6.16 -5.56
## 7 greece 4.88 0.255 4.48 5.38 -6.61 0.331 -7.16 -6.15
## 8 ireland 4.23 0.0437 4.16 4.33 -6.44 0.162 -6.72 -6.19
## 9 italy 3.73 0.220 3.38 4.05 -6.35 0.217 -6.73 -6.08
## 10 japan 4.70 0.684 3.95 6.00 -6.25 0.425 -6.99 -5.71
## 11 netherla 4.08 0.286 3.71 4.65 -5.92 0.193 -6.22 -5.66
## 12 norway 4.11 0.123 3.96 4.44 -5.75 0.201 -6.09 -5.42
## 13 spain 4.06 0.317 3.62 4.75 -5.63 0.278 -6.17 -5.29
## 14 sweden 4.01 0.0364 3.91 4.07 -7.82 0.126 -8.07 -7.67
## 15 switzerl 4.24 0.102 4.05 4.44 -5.93 0.124 -6.16 -5.75
## 16 turkey 5.77 0.329 5.14 6.16 -7.34 0.331 -7.84 -6.89
## 17 u.k. 3.98 0.0479 3.91 4.10 -6.02 0.107 -6.19 -5.84
## 18 u.s.a. 4.82 0.0219 4.79 4.86 -5.45 0.148 -5.70 -5.22
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, min_lrpmg <dbl>,
## # max_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # min_lcarpcap <dbl>, max_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³min_lgaspcar, ⁴max_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷min_lincomep, ⁸max_lincomep
```
This allows you to summarise every column that contains real numbers. The difference between
`is.double()` and `is.numeric()` is that `is.numeric()` returns `TRUE` for integers too, whereas
`is.double()` returns `TRUE` for real numbers only (integers are real numbers too, but you know
what I mean). A quick sketch of the difference:
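```
is.numeric(1L) # TRUE: integers count as numeric
is.double(1L)  # FALSE: integers are not doubles
is.double(1.5) # TRUE
```
It is also possible to summarise every column at once: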
```
gasoline %>%
select(-year) %>%
group_by(country) %>%
summarise(across(everything(), tibble::lst(mean, sd, min, max), .names = "{fn}_{col}"))
```
```
## # A tibble: 18 × 17
## country mean_lgasp…¹ sd_lg…² min_l…³ max_l…⁴ mean_…⁵ sd_li…⁶ min_l…⁷ max_l…⁸
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 4.06 0.0693 3.92 4.20 -6.12 0.235 -6.47 -5.76
## 2 belgium 3.92 0.103 3.82 4.16 -5.85 0.227 -6.22 -5.53
## 3 canada 4.86 0.0262 4.81 4.90 -5.58 0.193 -5.89 -5.31
## 4 denmark 4.19 0.158 4.00 4.50 -5.76 0.176 -6.06 -5.48
## 5 france 3.82 0.0499 3.75 3.91 -5.87 0.241 -6.26 -5.53
## 6 germany 3.89 0.0239 3.85 3.93 -5.85 0.193 -6.16 -5.56
## 7 greece 4.88 0.255 4.48 5.38 -6.61 0.331 -7.16 -6.15
## 8 ireland 4.23 0.0437 4.16 4.33 -6.44 0.162 -6.72 -6.19
## 9 italy 3.73 0.220 3.38 4.05 -6.35 0.217 -6.73 -6.08
## 10 japan 4.70 0.684 3.95 6.00 -6.25 0.425 -6.99 -5.71
## 11 netherla 4.08 0.286 3.71 4.65 -5.92 0.193 -6.22 -5.66
## 12 norway 4.11 0.123 3.96 4.44 -5.75 0.201 -6.09 -5.42
## 13 spain 4.06 0.317 3.62 4.75 -5.63 0.278 -6.17 -5.29
## 14 sweden 4.01 0.0364 3.91 4.07 -7.82 0.126 -8.07 -7.67
## 15 switzerl 4.24 0.102 4.05 4.44 -5.93 0.124 -6.16 -5.75
## 16 turkey 5.77 0.329 5.14 6.16 -7.34 0.331 -7.84 -6.89
## 17 u.k. 3.98 0.0479 3.91 4.10 -6.02 0.107 -6.19 -5.84
## 18 u.s.a. 4.82 0.0219 4.79 4.86 -5.45 0.148 -5.70 -5.22
## # … with 8 more variables: mean_lrpmg <dbl>, sd_lrpmg <dbl>, min_lrpmg <dbl>,
## # max_lrpmg <dbl>, mean_lcarpcap <dbl>, sd_lcarpcap <dbl>,
## # min_lcarpcap <dbl>, max_lcarpcap <dbl>, and abbreviated variable names
## # ¹mean_lgaspcar, ²sd_lgaspcar, ³min_lgaspcar, ⁴max_lgaspcar, ⁵mean_lincomep,
## # ⁶sd_lincomep, ⁷min_lincomep, ⁸max_lincomep
```
I removed the `year` variable because it’s not a variable for which we want to have descriptive
statistics.
4\.6 Other useful `{tidyverse}` functions
-----------------------------------------
### 4\.6\.1 `if_else()`, `case_when()` and `recode()`
Some other very useful `{tidyverse}` functions are `if_else()` and `case_when()`. These two
functions, combined with `mutate()`, make it easy to create a new variable whose values must
respect certain conditions. For instance, we might want to have a dummy that equals `1` if a country
is in the European Union (to simplify, say as of 2017\) and `0` if not. First let’s create a vector of
countries that are in the EU:
```
eu_countries <- c("austria", "belgium", "bulgaria", "croatia", "republic of cyprus",
"czech republic", "denmark", "estonia", "finland", "france", "germany",
"greece", "hungary", "ireland", "italy", "latvia", "lithuania", "luxembourg",
"malta", "netherla", "poland", "portugal", "romania", "slovakia", "slovenia",
"spain", "sweden", "u.k.")
```
I’ve had to change “netherlands” to “netherla” because that’s how the country is called in the
`gasoline` data. Now let’s create a dummy variable that equals `1` for EU countries, and `0` for the others:
```
gasoline %>%
mutate(country = tolower(country)) %>%
mutate(in_eu = if_else(country %in% eu_countries, 1, 0))
```
```
## # A tibble: 342 × 7
## country year lgaspcar lincomep lrpmg lcarpcap in_eu
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 1
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 1
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 1
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 1
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 1
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 1
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 1
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 1
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 1
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 1
## # … with 332 more rows
```
Instead of `1` and `0`, we can of course use strings (I add `filter(year == 1960)` at the end to
have a better view of what happened):
```
gasoline %>%
mutate(country = tolower(country)) %>%
mutate(in_eu = if_else(country %in% eu_countries, "yes", "no")) %>%
filter(year == 1960)
```
```
## # A tibble: 18 × 7
## country year lgaspcar lincomep lrpmg lcarpcap in_eu
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 yes
## 2 belgium 1960 4.16 -6.22 -0.166 -9.41 yes
## 3 canada 1960 4.86 -5.89 -0.972 -8.38 no
## 4 denmark 1960 4.50 -6.06 -0.196 -9.33 yes
## 5 france 1960 3.91 -6.26 -0.0196 -9.15 yes
## 6 germany 1960 3.92 -6.16 -0.186 -9.34 yes
## 7 greece 1960 5.04 -7.16 -0.0835 -12.2 yes
## 8 ireland 1960 4.27 -6.72 -0.0765 -9.70 yes
## 9 italy 1960 4.05 -6.73 0.165 -10.1 yes
## 10 japan 1960 6.00 -6.99 -0.145 -12.2 no
## 11 netherla 1960 4.65 -6.22 -0.201 -10.0 yes
## 12 norway 1960 4.44 -6.09 -0.140 -9.68 no
## 13 spain 1960 4.75 -6.17 1.13 -11.6 yes
## 14 sweden 1960 4.06 -8.07 -2.52 -8.74 yes
## 15 switzerl 1960 4.40 -6.16 -0.823 -9.26 no
## 16 turkey 1960 6.13 -7.80 -0.253 -13.5 no
## 17 u.k. 1960 4.10 -6.19 -0.391 -9.12 yes
## 18 u.s.a. 1960 4.82 -5.70 -1.12 -8.02 no
```
I think that `if_else()` is fairly straightforward, especially if you know `ifelse()` already. You
might be wondering what the difference is between these two. `if_else()` is stricter than
`ifelse()` and does not do type conversion. Compare the next two lines:
```
ifelse(1 == 1, "0", 1)
```
```
## [1] "0"
```
```
if_else(1 == 1, "0", 1)
```
```
Error: `false` must be type string, not double
```
Type conversion, especially without a warning, is very dangerous. `if_else()`’s behaviour, which
consists in failing as soon as possible, avoids a lot of pain and suffering, especially when
programming non\-interactively.
`if_else()` also accepts an optional argument that allows you to specify what should be returned
in case of `NA`:
```
if_else(1 <= NA, 0, 1, 999)
```
```
## [1] 999
```
```
# Or
if_else(1 <= NA, 0, 1, NA_real_)
```
```
## [1] NA
```
`case_when()` can be seen as a generalization of `if_else()`. Whenever you want to use multiple
`if_else()`s, that’s when you know you should use `case_when()` (I’m adding the filter at the end
for the same reason as before, to see the output better):
```
gasoline %>%
mutate(country = tolower(country)) %>%
mutate(region = case_when(
country %in% c("france", "italy", "turkey", "greece", "spain") ~ "mediterranean",
country %in% c("germany", "austria", "switzerl", "belgium", "netherla") ~ "central europe",
country %in% c("canada", "u.s.a.", "u.k.", "ireland") ~ "anglosphere",
country %in% c("denmark", "norway", "sweden") ~ "nordic",
country %in% c("japan") ~ "asia")) %>%
filter(year == 1960)
```
```
## # A tibble: 18 × 7
## country year lgaspcar lincomep lrpmg lcarpcap region
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 central europe
## 2 belgium 1960 4.16 -6.22 -0.166 -9.41 central europe
## 3 canada 1960 4.86 -5.89 -0.972 -8.38 anglosphere
## 4 denmark 1960 4.50 -6.06 -0.196 -9.33 nordic
## 5 france 1960 3.91 -6.26 -0.0196 -9.15 mediterranean
## 6 germany 1960 3.92 -6.16 -0.186 -9.34 central europe
## 7 greece 1960 5.04 -7.16 -0.0835 -12.2 mediterranean
## 8 ireland 1960 4.27 -6.72 -0.0765 -9.70 anglosphere
## 9 italy 1960 4.05 -6.73 0.165 -10.1 mediterranean
## 10 japan 1960 6.00 -6.99 -0.145 -12.2 asia
## 11 netherla 1960 4.65 -6.22 -0.201 -10.0 central europe
## 12 norway 1960 4.44 -6.09 -0.140 -9.68 nordic
## 13 spain 1960 4.75 -6.17 1.13 -11.6 mediterranean
## 14 sweden 1960 4.06 -8.07 -2.52 -8.74 nordic
## 15 switzerl 1960 4.40 -6.16 -0.823 -9.26 central europe
## 16 turkey 1960 6.13 -7.80 -0.253 -13.5 mediterranean
## 17 u.k. 1960 4.10 -6.19 -0.391 -9.12 anglosphere
## 18 u.s.a. 1960 4.82 -5.70 -1.12 -8.02 anglosphere
```
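Keep in mind that the conditions are tried in order and the first one that matches wins; a final `TRUE ~ ...` clause acts as a catch\-all for everything left over. A small sketch on a plain vector:
```
x <- c(2, 7, 15)
case_when(
  x < 5 ~ "small",
  x < 10 ~ "medium",
  TRUE ~ "large"
)
# [1] "small"  "medium" "large"
```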
If all you want is to recode values, you can use `recode()`. For example, the Netherlands is
written as “NETHERLA” in the `gasoline` data, which is quite ugly. Same for Switzerland:
```
gasoline <- gasoline %>%
mutate(country = tolower(country)) %>%
mutate(country = recode(country, "netherla" = "netherlands", "switzerl" = "switzerland"))
```
I saved the data with these changes as they will become useful in the future. Let’s take a look at
the data:
```
gasoline %>%
filter(country %in% c("netherlands", "switzerland"), year == 1960)
```
```
## # A tibble: 2 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 netherlands 1960 4.65 -6.22 -0.201 -10.0
## 2 switzerland 1960 4.40 -6.16 -0.823 -9.26
```
### 4\.6\.2 `lead()` and `lag()`
`lead()` and `lag()` are especially useful in econometrics. When I was doing my masters, in 4 B.d.
(*Before dplyr*) lagging variables in panel data was quite tricky. Now, with `{dplyr}` it’s really
very easy:
```
gasoline %>%
group_by(country) %>%
mutate(lag_lgaspcar = lag(lgaspcar)) %>%
mutate(lead_lgaspcar = lead(lgaspcar)) %>%
filter(year %in% seq(1960, 1963))
```
```
## # A tibble: 72 × 8
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap lag_lgaspcar lead_lgaspcar
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 NA 4.10
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 4.17 4.07
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 4.10 4.06
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 4.07 4.04
## 5 belgium 1960 4.16 -6.22 -0.166 -9.41 NA 4.12
## 6 belgium 1961 4.12 -6.18 -0.172 -9.30 4.16 4.08
## 7 belgium 1962 4.08 -6.13 -0.222 -9.22 4.12 4.00
## 8 belgium 1963 4.00 -6.09 -0.250 -9.11 4.08 3.99
## 9 canada 1960 4.86 -5.89 -0.972 -8.38 NA 4.83
## 10 canada 1961 4.83 -5.88 -0.972 -8.35 4.86 4.85
## # … with 62 more rows
```
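Both `lag()` and `lead()` accept an `n =` argument to shift by more than one position, and a `default =` argument to replace the `NA`s that appear at the edges. A quick sketch on a plain vector:
```
lag(1:5, n = 2)
# [1] NA NA  1  2  3
lag(1:5, default = 0L)
# [1] 0 1 2 3 4
```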
To lag every variable, remember that you can use `mutate_if()`:
```
gasoline %>%
group_by(country) %>%
mutate_if(is.double, lag) %>%
filter(year %in% seq(1960, 1963))
```
```
## `mutate_if()` ignored the following grouping variables:
## • Column `country`
```
```
## # A tibble: 72 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 belgium 1960 4.16 -6.22 -0.166 -9.41
## 6 belgium 1961 4.12 -6.18 -0.172 -9.30
## 7 belgium 1962 4.08 -6.13 -0.222 -9.22
## 8 belgium 1963 4.00 -6.09 -0.250 -9.11
## 9 canada 1960 4.86 -5.89 -0.972 -8.38
## 10 canada 1961 4.83 -5.88 -0.972 -8.35
## # … with 62 more rows
```
you can replace `lag()` with `lead()`, but just keep in mind that the columns get transformed in
place.
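`mutate_if()` is superseded in recent versions of `{dplyr}`; the same operation can be written with `across()`, and the `.names =` argument even lets you keep the original columns instead of transforming them in place. A sketch, following the same `.names` template convention as above:
```
gasoline %>%
  group_by(country) %>%
  mutate(across(where(is.double), lag, .names = "lag_{col}"))
```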
### 4\.6\.3 `ntile()`
The last helper function I will discuss is `ntile()`. There are some others, so do read `mutate()`’s
documentation with `help(mutate)`!
If you need quantiles, you need `ntile()`. Let’s see how it works:
```
gasoline %>%
mutate(quintile = ntile(lgaspcar, 5)) %>%
mutate(decile = ntile(lgaspcar, 10)) %>%
select(country, year, lgaspcar, quintile, decile)
```
```
## # A tibble: 342 × 5
## country year lgaspcar quintile decile
## <chr> <dbl> <dbl> <int> <int>
## 1 austria 1960 4.17 3 6
## 2 austria 1961 4.10 3 6
## 3 austria 1962 4.07 3 5
## 4 austria 1963 4.06 3 5
## 5 austria 1964 4.04 3 5
## 6 austria 1965 4.03 3 5
## 7 austria 1966 4.05 3 5
## 8 austria 1967 4.05 3 5
## 9 austria 1968 4.05 3 5
## 10 austria 1969 4.05 3 5
## # … with 332 more rows
```
`quintile` and `decile` do not hold the values of `lgaspcar` but the quantile bin each value lies in. If you want to
have a column that contains the median for instance, you can use good ol’ `quantile()`:
```
gasoline %>%
group_by(country) %>%
mutate(median = quantile(lgaspcar, 0.5)) %>% # quantile(x, 0.5) is equivalent to median(x)
filter(year == 1960) %>%
select(country, year, median)
```
```
## # A tibble: 18 × 3
## # Groups: country [18]
## country year median
## <chr> <dbl> <dbl>
## 1 austria 1960 4.05
## 2 belgium 1960 3.88
## 3 canada 1960 4.86
## 4 denmark 1960 4.16
## 5 france 1960 3.81
## 6 germany 1960 3.89
## 7 greece 1960 4.89
## 8 ireland 1960 4.22
## 9 italy 1960 3.74
## 10 japan 1960 4.52
## 11 netherlands 1960 3.99
## 12 norway 1960 4.08
## 13 spain 1960 3.99
## 14 sweden 1960 4.00
## 15 switzerland 1960 4.26
## 16 turkey 1960 5.72
## 17 u.k. 1960 3.98
## 18 u.s.a. 1960 4.81
```
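A common use of `ntile()` is to filter on the computed bin, for instance to keep only the observations in the top decile of `lgaspcar`. A quick sketch:
```
gasoline %>%
  mutate(decile = ntile(lgaspcar, 10)) %>%
  filter(decile == 10)
```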
### 4\.6\.4 `arrange()`
`arrange()` re\-orders the whole `tibble` according to values of the supplied variable:
```
gasoline %>%
arrange(lgaspcar)
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 italy 1977 3.38 -6.10 0.164 -8.15
## 2 italy 1978 3.39 -6.08 0.0348 -8.11
## 3 italy 1976 3.43 -6.12 0.103 -8.17
## 4 italy 1974 3.50 -6.13 -0.223 -8.26
## 5 italy 1975 3.52 -6.17 -0.0327 -8.22
## 6 spain 1978 3.62 -5.29 0.621 -8.63
## 7 italy 1972 3.63 -6.21 -0.215 -8.38
## 8 italy 1971 3.65 -6.22 -0.148 -8.47
## 9 spain 1977 3.65 -5.30 0.526 -8.73
## 10 italy 1973 3.65 -6.16 -0.325 -8.32
## # … with 332 more rows
```
If you want to re\-order the `tibble` in descending order of the variable:
```
gasoline %>%
arrange(desc(lgaspcar))
```
```
## # A tibble: 342 × 6
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 turkey 1966 6.16 -7.51 -0.356 -13.0
## 2 turkey 1960 6.13 -7.80 -0.253 -13.5
## 3 turkey 1961 6.11 -7.79 -0.343 -13.4
## 4 turkey 1962 6.08 -7.84 -0.408 -13.2
## 5 turkey 1968 6.08 -7.42 -0.365 -12.8
## 6 turkey 1963 6.08 -7.63 -0.225 -13.3
## 7 turkey 1964 6.06 -7.63 -0.252 -13.2
## 8 turkey 1967 6.04 -7.46 -0.335 -12.8
## 9 japan 1960 6.00 -6.99 -0.145 -12.2
## 10 turkey 1965 5.82 -7.62 -0.293 -12.9
## # … with 332 more rows
```
`arrange()`’s documentation alerts the user that re\-ordering by group is only possible by explicitly
specifying an option:
```
gasoline %>%
filter(year %in% seq(1960, 1963)) %>%
group_by(country) %>%
arrange(desc(lgaspcar), .by_group = TRUE)
```
```
## # A tibble: 72 × 6
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77
## 2 austria 1961 4.10 -6.43 -0.351 -9.61
## 3 austria 1962 4.07 -6.41 -0.380 -9.46
## 4 austria 1963 4.06 -6.37 -0.414 -9.34
## 5 belgium 1960 4.16 -6.22 -0.166 -9.41
## 6 belgium 1961 4.12 -6.18 -0.172 -9.30
## 7 belgium 1962 4.08 -6.13 -0.222 -9.22
## 8 belgium 1963 4.00 -6.09 -0.250 -9.11
## 9 canada 1960 4.86 -5.89 -0.972 -8.38
## 10 canada 1962 4.85 -5.84 -0.979 -8.32
## # … with 62 more rows
```
This is especially useful for plotting. We’ll see this in Chapter 6\.
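`arrange()` also accepts several variables, in which case ties in the first one are broken by the next, and `desc()` can be applied to each variable individually. A quick sketch:
```
gasoline %>%
  arrange(country, desc(year))
```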
### 4\.6\.5 `tally()` and `count()`
`tally()` and `count()` count the number of observations in your data. I believe `count()` is the
more useful of the two, as it counts the number of observations within groups that you provide:
```
gasoline %>%
count(country)
```
```
## # A tibble: 18 × 2
## country n
## <chr> <int>
## 1 austria 19
## 2 belgium 19
## 3 canada 19
## 4 denmark 19
## 5 france 19
## 6 germany 19
## 7 greece 19
## 8 ireland 19
## 9 italy 19
## 10 japan 19
## 11 netherlands 19
## 12 norway 19
## 13 spain 19
## 14 sweden 19
## 15 switzerland 19
## 16 turkey 19
## 17 u.k. 19
## 18 u.s.a. 19
```
There’s also `add_count()` which adds the column to the data:
```
gasoline %>%
add_count(country)
```
```
## # A tibble: 342 × 7
## country year lgaspcar lincomep lrpmg lcarpcap n
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
`add_count()` is a shortcut for the following code:
```
gasoline %>%
group_by(country) %>%
mutate(n = n())
```
```
## # A tibble: 342 × 7
## # Groups: country [18]
## country year lgaspcar lincomep lrpmg lcarpcap n
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
## 1 austria 1960 4.17 -6.47 -0.335 -9.77 19
## 2 austria 1961 4.10 -6.43 -0.351 -9.61 19
## 3 austria 1962 4.07 -6.41 -0.380 -9.46 19
## 4 austria 1963 4.06 -6.37 -0.414 -9.34 19
## 5 austria 1964 4.04 -6.32 -0.445 -9.24 19
## 6 austria 1965 4.03 -6.29 -0.497 -9.12 19
## 7 austria 1966 4.05 -6.25 -0.467 -9.02 19
## 8 austria 1967 4.05 -6.23 -0.506 -8.93 19
## 9 austria 1968 4.05 -6.21 -0.522 -8.85 19
## 10 austria 1969 4.05 -6.15 -0.559 -8.79 19
## # … with 332 more rows
```
where `n()` is a `{dplyr}` function that can only be used within `summarise()`, `mutate()` and
`filter()`.
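To see `n()` at work inside `filter()`, here is a sketch that keeps only the countries with at least 19 observations (which, as the counts above show, is all of them, so nothing gets dropped here):
```
gasoline %>%
  group_by(country) %>%
  filter(n() >= 19)
```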
4\.7 Special packages for special kinds of data: `{forcats}`, `{lubridate}`, and `{stringr}`
--------------------------------------------------------------------------------------------
### 4\.7\.1 🐱🐱🐱🐱
Factor variables are very useful but not very easy to manipulate. `forcats` contains handy
functions that make working on factor variables painless. In my opinion, the following four functions, `fct_recode()`, `fct_relevel()`, `fct_reorder()` and `fct_relabel()`, are the ones you must
know, so that’s what I’ll be showing.
Remember in chapter 3 when I very quickly explained what `factor` variables were? In this section,
we are going to work a little bit with this type of variable. `factor`s are very useful, and the
`forcats` package includes some handy functions to work with them. First, let’s load the `forcats` package:
```
library(forcats)
```
as an example, we are going to work with the `gss_cat` dataset that is included in `forcats`. Let’s
load the data:
```
data(gss_cat)
head(gss_cat)
```
```
## # A tibble: 6 × 9
## year marital age race rincome partyid relig denom tvhours
## <int> <fct> <int> <fct> <fct> <fct> <fct> <fct> <int>
## 1 2000 Never married 26 White $8000 to 9999 Ind,near r… Prot… Sout… 12
## 2 2000 Divorced 48 White $8000 to 9999 Not str re… Prot… Bapt… NA
## 3 2000 Widowed 67 White Not applicable Independent Prot… No d… 2
## 4 2000 Never married 39 White Not applicable Ind,near r… Orth… Not … 4
## 5 2000 Divorced 25 White Not applicable Not str de… None Not … 1
## 6 2000 Married 25 White $20000 - 24999 Strong dem… Prot… Sout… NA
```
as you can see, `marital`, `race`, `rincome` and `partyid` are all factor variables. Let’s take a closer
look at `marital`:
```
str(gss_cat$marital)
```
```
## Factor w/ 6 levels "No answer","Never married",..: 2 4 5 2 4 6 2 4 6 6 ...
```
and let’s see `rincome`:
```
str(gss_cat$rincome)
```
```
## Factor w/ 16 levels "No answer","Don't know",..: 8 8 16 16 16 5 4 9 4 4 ...
```
`factor` variables have different levels and the `forcats` package includes functions that allow
you to recode, collapse and do all sorts of things on these levels. For example, using
`forcats::fct_recode()` you can recode levels:
```
gss_cat <- gss_cat %>%
mutate(marital = fct_recode(marital,
refuse = "No answer",
never_married = "Never married",
divorced = "Separated",
divorced = "Divorced",
widowed = "Widowed",
married = "Married"))
gss_cat %>%
tabyl(marital)
```
```
## marital n percent
## refuse 17 0.0007913234
## never_married 5416 0.2521063166
## divorced 4126 0.1920588372
## widowed 1807 0.0841130196
## married 10117 0.4709305032
```
Using `fct_recode()`, I was able to recode the levels and collapse `Separated` and `Divorced` into
a single category called `divorced` (the frequency tables here are computed with `tabyl()`, which comes
from the `{janitor}` package). As you can see, `refuse` and `widowed` each account for less than 10% of
observations, so maybe you’d want to lump these categories together:
```
gss_cat <- gss_cat %>%
mutate(marital = fct_lump(marital, prop = 0.10, other_level = "other"))
gss_cat %>%
tabyl(marital)
```
```
## marital n percent
## never_married 5416 0.25210632
## divorced 4126 0.19205884
## married 10117 0.47093050
## other 1824 0.08490434
```
`fct_reorder()` is especially useful for plotting. We will explore plotting in Chapter 6,
but to show you why `fct_reorder()` is so useful, I will create a barplot, first without
using `fct_reorder()` to re\-order the factors, then with reordering. Do not worry if you don’t
understand all the code for now:
```
gss_cat %>%
tabyl(marital) %>%
ggplot() +
geom_col(aes(y = n, x = marital)) +
coord_flip()
```
It would be much better if the categories were ordered by frequency. This is easy to do with
`fct_reorder()`:
```
gss_cat %>%
tabyl(marital) %>%
mutate(marital = fct_reorder(marital, n, .desc = FALSE)) %>%
ggplot() +
geom_col(aes(y = n, x = marital)) +
coord_flip()
```
Much better! In Chapter 6, we are going to learn about `{ggplot2}`.
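`fct_relevel()` was listed above but not demonstrated; it moves the levels you name to the front of the level order (or to any position with its `after =` argument), which matters for modelling and plotting. A quick sketch:
```
# move "married" to the front of the levels
gss_cat %>%
  mutate(marital = fct_relevel(marital, "married")) %>%
  pull(marital) %>%
  levels()
# [1] "married"       "never_married" "divorced"      "other"
```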
The last family of functions I’d like to mention are the `fct_lump*()` functions. These make it possible
to lump several levels of a factor into a new *other* level:
```
gss_cat %>%
mutate(
# Description of the different functions taken from help(fct_lump)
denom_lowfreq = fct_lump_lowfreq(denom), # lumps together the least frequent levels, ensuring that "other" is still the smallest level.
denom_min = fct_lump_min(denom, min = 10), # lumps levels that appear fewer than min times.
denom_n = fct_lump_n(denom, n = 3), # lumps all levels except for the n most frequent (or least frequent if n < 0)
denom_prop = fct_lump_prop(denom, prop = 0.10) # lumps levels that appear fewer than prop * n times.
)
```
```
## # A tibble: 21,483 × 13
## year marital age race rincome partyid relig denom tvhours denom…¹ denom…²
## <int> <fct> <int> <fct> <fct> <fct> <fct> <fct> <int> <fct> <fct>
## 1 2000 never_… 26 White $8000 … Ind,ne… Prot… Sout… 12 Southe… Southe…
## 2 2000 divorc… 48 White $8000 … Not st… Prot… Bapt… NA Baptis… Baptis…
## 3 2000 other 67 White Not ap… Indepe… Prot… No d… 2 No den… No den…
## 4 2000 never_… 39 White Not ap… Ind,ne… Orth… Not … 4 Not ap… Not ap…
## 5 2000 divorc… 25 White Not ap… Not st… None Not … 1 Not ap… Not ap…
## 6 2000 married 25 White $20000… Strong… Prot… Sout… NA Southe… Southe…
## 7 2000 never_… 36 White $25000… Not st… Chri… Not … 3 Not ap… Not ap…
## 8 2000 divorc… 44 White $7000 … Ind,ne… Prot… Luth… NA Luther… Luther…
## 9 2000 married 44 White $25000… Not st… Prot… Other 0 Other Other
## 10 2000 married 47 White $25000… Strong… Prot… Sout… 3 Southe… Southe…
## # … with 21,473 more rows, 2 more variables: denom_n <fct>, denom_prop <fct>,
## # and abbreviated variable names ¹denom_lowfreq, ²denom_min
```
There are many others, so I’d advise you to go through the package’s function [reference](https://forcats.tidyverse.org/reference/index.html).
### 4\.7\.2 Get your dates right with `{lubridate}`
`{lubridate}` is yet another tidyverse package that makes dealing with dates or durations (and intervals) as
painless as possible. I do not use every function contained in the package daily, and as such will
only focus on some of the functions. However, if you have to deal with dates often,
you might want to explore the package thoroughly.
#### 4\.7\.2\.1 Defining dates, the tidy way
Let’s load a new dataset, called *independence*, from the Github repo of the book:
```
independence_path <- tempfile(fileext = ".rds")
download.file(url = "https://github.com/b-rodrigues/modern_R/blob/master/datasets/independence.rds?raw=true",
destfile = independence_path)
independence <- readRDS(independence_path)
```
This dataset was scraped from the following Wikipedia [page](https://en.wikipedia.org/wiki/Decolonisation_of_Africa#Timeline).
It shows when African countries gained independence and from which colonial powers. In Chapter 10, I
will show you how to scrape Wikipedia pages using R. For now, let’s take a look at the contents
of the dataset:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ indep…² first…³ indep…⁴
## <chr> <chr> <chr> <chr> <chr> <chr>
## 1 Liberia Liberia United… 26 Jul… Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal O… United… 31 May… Louis … South …
## 3 Egypt Sultanate of Egypt United… 28 Feb… Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 10 Feb… Haile … -
## 5 Libya British Military Administration… United… 24 Dec… Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1 Janu… Ismail… -
## 7 Tunisia French Protectorate of Tunisia France 20 Mar… Muhamm… -
## 8 Morocco French Protectorate in Morocco … France… 2 Marc… Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 6 Marc… Kwame … Gold C…
## 10 Guinea French West Africa France 2 Octo… Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
as you can see, the date of independence is in a format that might make it difficult to answer questions
such as *Which African countries gained independence before 1960?* for two reasons. First of all,
the date uses the name of the month instead of the number of the month, and second of all the type of
the independence date column is *character* and not “date”. So our first task is to correctly define the column
as being of type date, while making sure that R understands that *January* is supposed to be “01”, and so
on. There are several helpful functions included in `{lubridate}` to convert columns to dates. For instance,
if the column you want to convert is of the form “2012\-11\-21”, then you would use the function `ymd()`,
for “year\-month\-day”. If, however, the column is “2012\-21\-11”, then you would use `ydm()`, for
“year\-day\-month”. There are a few of these helper functions, and they can handle a lot of different
formats for dates. In our case, having the name of the month instead of the number might seem quite
problematic, but it turns out that this is a case that `{lubridate}` handles painlessly:
```
library(lubridate)
```
```
##
## Attaching package: 'lubridate'
```
```
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
```
```
independence <- independence %>%
mutate(independence_date = dmy(independence_date))
```
```
## Warning: 5 failed to parse.
```
Some dates failed to parse, for instance for Morocco. This is because these countries have several
independence dates; this means that the string to convert looks like:
```
"2 March 1956
7 April 1956
10 April 1958
4 January 1969"
```
which obviously cannot be converted by `{lubridate}` without further manipulation. I ignore these cases for
simplicity’s sake.
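To see `dmy()` at work on a single string, here is a quick sketch using one of the dates from the dataset:
```
dmy("26 July 1847")
# [1] "1847-07-26"
```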
#### 4\.7\.2\.2 Data manipulation with dates
Let’s take a look at the data now:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ independ…² first…³ indep…⁴
## <chr> <chr> <chr> <date> <chr> <chr>
## 1 Liberia Liberia United… 1847-07-26 Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal… United… 1910-05-31 Louis … South …
## 3 Egypt Sultanate of Egypt United… 1922-02-28 Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 1947-02-10 Haile … -
## 5 Libya British Military Administrat… United… 1951-12-24 Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1956-01-01 Ismail… -
## 7 Tunisia French Protectorate of Tunis… France 1956-03-20 Muhamm… -
## 8 Morocco French Protectorate in Moroc… France… NA Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 1957-03-06 Kwame … Gold C…
## 10 Guinea French West Africa France 1958-10-02 Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
As you can see, we now have a date column in the right format. We can now answer questions such as
*Which countries gained independence before 1960?* quite easily, by using the functions `year()`,
`month()` and `day()`. Let’s see which countries gained independence before 1960:
```
independence %>%
filter(year(independence_date) <= 1960) %>%
pull(country)
```
```
## [1] "Liberia" "South Africa"
## [3] "Egypt" "Eritrea"
## [5] "Libya" "Sudan"
## [7] "Tunisia" "Ghana"
## [9] "Guinea" "Cameroon"
## [11] "Togo" "Mali"
## [13] "Madagascar" "Democratic Republic of the Congo"
## [15] "Benin" "Niger"
## [17] "Burkina Faso" "Ivory Coast"
## [19] "Chad" "Central African Republic"
## [21] "Republic of the Congo" "Gabon"
## [23] "Mauritania"
```
You guessed it, `year()` extracts the year of the date column and converts it to a *numeric* so that we can work
on it. This is the same for `month()` or `day()`. Let’s try to see if countries gained their independence on
Christmas Eve:
```
independence %>%
filter(month(independence_date) == 12,
day(independence_date) == 24) %>%
pull(country)
```
```
## [1] "Libya"
```
Seems like Libya was the only one! You can also operate on dates. For instance, let’s compute the difference between
two dates, using the `interval()` function:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since)
```
```
## # A tibble: 54 × 2
## country independent_since
## <chr> <Interval>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC
## 8 Morocco NA--NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC
## # … with 44 more rows
```
The `independent_since` column now contains an *interval* object that we can convert to years:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since) %>%
mutate(years_independent = as.numeric(independent_since, "years"))
```
```
## # A tibble: 54 × 3
## country independent_since years_independent
## <chr> <Interval> <dbl>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC 175.
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC 112.
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC 101.
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC 75.7
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC 70.8
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC 66.8
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC 66.6
## 8 Morocco NA--NA NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC 65.6
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC 64.1
## # … with 44 more rows
```
We can now see for how long the last colony of each colonial power to gain independence has been independent.
Because the data is not tidy (in some cases, an African country was colonized by two powers,
see Libya), I will only focus on four European colonial powers: Belgium, France, Portugal and the United Kingdom:
```
independence %>%
filter(colonial_power %in% c("Belgium", "France", "Portugal", "United Kingdom")) %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
mutate(years_independent = as.numeric(independent_since, "years")) %>%
group_by(colonial_power) %>%
summarise(last_colony_independent_for = min(years_independent, na.rm = TRUE))
```
```
## # A tibble: 4 × 2
## colonial_power last_colony_independent_for
## <chr> <dbl>
## 1 Belgium 60.3
## 2 France 45.3
## 3 Portugal 47.0
## 4 United Kingdom 46.3
```
#### 4\.7\.2\.3 Arithmetic with dates
Adding or subtracting days to dates is quite easy:
```
ymd("2018-12-31") + 16
```
```
## [1] "2019-01-16"
```
It is also possible to be more explicit and use `days()`:
```
ymd("2018-12-31") + days(16)
```
```
## [1] "2019-01-16"
```
To add years, you can use `years()`:
```
ymd("2018-12-31") + years(1)
```
```
## [1] "2019-12-31"
```
But you have to be careful with leap years:
```
ymd("2016-02-29") + years(1)
```
```
## [1] NA
```
Because 2017 is not a leap year, the above computation returns `NA`. The same goes for months with
a different number of days:
```
ymd("2018-12-31") + months(2)
```
```
## [1] NA
```
The way to solve these issues is to use the special `%m+%` infix operator:
```
ymd("2016-02-29") %m+% years(1)
```
```
## [1] "2017-02-28"
```
and for months:
```
ymd("2018-12-31") %m+% months(2)
```
```
## [1] "2019-02-28"
```
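There is also a `%m-%` operator for subtraction with the same rollback behaviour. Here is a quick sketch (my own example, not from the data above):
```
ymd("2016-02-29") %m-% years(1)
```
```
## [1] "2015-02-28"
```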
`{lubridate}` contains many more functions. If you often work with dates, durations or intervals,
`{lubridate}` is a package that you have to add to your toolbox.
### 4\.7\.3 Manipulate strings with `{stringr}`
`{stringr}` contains functions to manipulate strings. In Chapter 10, I will teach you about regular
expressions, but the functions contained in `{stringr}` allow you to already do a lot of work on
strings, without needing to be a regular expression expert.
I will discuss the most common string operations: detecting, locating, matching, searching and
replacing, and extracting/removing strings.
To introduce these operations, let us use an ALTO file of an issue of *The Winchester News* from
October 31, 1910, which you can find on this
[link](https://gist.githubusercontent.com/b-rodrigues/5139560e7d0f2ecebe5da1df3629e015/raw/e3031d894ffb97217ddbad1ade1b307c9937d2c8/gistfile1.txt) (to see
what the newspaper looked like,
[click here](https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/)). I re\-hosted
the file on a public gist for archiving purposes. While working on the book, the original site went
down several times…
ALTO is an XML schema for the description of OCR text and layout information of pages for digitized
material, such as newspapers (source: [ALTO Wikipedia page](https://en.wikipedia.org/wiki/ALTO_(XML))).
For more details, you can read my
[blogpost](https://www.brodrigues.co/blog/2019-01-13-newspapers_mets_alto/)
on the matter, but for our current purposes, it is enough to know that the file contains the text
of newspaper articles. The file looks like this:
```
<TextLine HEIGHT="138.0" WIDTH="2434.0" HPOS="4056.0" VPOS="5814.0">
<String STYLEREFS="ID7" HEIGHT="108.0" WIDTH="393.0" HPOS="4056.0" VPOS="5838.0" CONTENT="timore" WC="0.82539684">
<ALTERNATIVE>timole</ALTERNATIVE>
<ALTERNATIVE>tlnldre</ALTERNATIVE>
<ALTERNATIVE>timor</ALTERNATIVE>
<ALTERNATIVE>insole</ALTERNATIVE>
<ALTERNATIVE>landed</ALTERNATIVE>
</String>
<SP WIDTH="74.0" HPOS="4449.0" VPOS="5838.0"/>
<String STYLEREFS="ID7" HEIGHT="105.0" WIDTH="432.0" HPOS="4524.0" VPOS="5847.0" CONTENT="market" WC="0.95238096"/>
<SP WIDTH="116.0" HPOS="4956.0" VPOS="5847.0"/>
<String STYLEREFS="ID7" HEIGHT="69.0" WIDTH="138.0" HPOS="5073.0" VPOS="5883.0" CONTENT="as" WC="0.96825397"/>
<SP WIDTH="74.0" HPOS="5211.0" VPOS="5883.0"/>
<String STYLEREFS="ID7" HEIGHT="69.0" WIDTH="285.0" HPOS="5286.0" VPOS="5877.0" CONTENT="were" WC="1.0">
<ALTERNATIVE>verc</ALTERNATIVE>
<ALTERNATIVE>veer</ALTERNATIVE>
</String>
<SP WIDTH="68.0" HPOS="5571.0" VPOS="5877.0"/>
<String STYLEREFS="ID7" HEIGHT="111.0" WIDTH="147.0" HPOS="5640.0" VPOS="5838.0" CONTENT="all" WC="1.0"/>
<SP WIDTH="83.0" HPOS="5787.0" VPOS="5838.0"/>
<String STYLEREFS="ID7" HEIGHT="111.0" WIDTH="183.0" HPOS="5871.0" VPOS="5835.0" CONTENT="the" WC="0.95238096">
<ALTERNATIVE>tll</ALTERNATIVE>
<ALTERNATIVE>Cu</ALTERNATIVE>
<ALTERNATIVE>tall</ALTERNATIVE>
</String>
<SP WIDTH="75.0" HPOS="6054.0" VPOS="5835.0"/>
<String STYLEREFS="ID3" HEIGHT="132.0" WIDTH="351.0" HPOS="6129.0" VPOS="5814.0" CONTENT="cattle" WC="0.95238096"/>
</TextLine>
```
We are interested in the strings after `CONTENT=`. We are going to use functions from the `{stringr}`
package to get the strings after `CONTENT=`. In Chapter 10, we are going to explore this file
again, but using complex regular expressions to get all the content in one go.
#### 4\.7\.3\.1 Getting text data into RStudio
First of all, let us read in the file:
```
winchester <- read_lines("https://gist.githubusercontent.com/b-rodrigues/5139560e7d0f2ecebe5da1df3629e015/raw/e3031d894ffb97217ddbad1ade1b307c9937d2c8/gistfile1.txt")
```
Even though the file is an XML file, I still read it in using `read_lines()` and not `read_xml()`
from the `{xml2}` package. This is for the purposes of the current exercise, and also because I
always have trouble with XML files, and prefer to treat them as simple text files, and use regular
expressions to get what I need.
Now that the ALTO file is read in and saved in the `winchester` variable, you might want to print
the whole thing in the console. Before that, take a look at the structure:
```
str(winchester)
```
```
## chr [1:43] "" ...
```
So the `winchester` variable is a character atomic vector with 43 elements. First, we need to
understand what these elements are. Let’s start with the first one:
```
winchester[1]
```
```
## [1] ""
```
Ok, so it seems like the first element is part of the header of the file. What about the second one?
```
winchester[2]
```
```
## [1] "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"><base href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\"><style>body{margin-left:0;margin-right:0;margin-top:0}#bN015htcoyT__google-cache-hdr{background:#f5f5f5;font:13px arial,sans-serif;text-align:left;color:#202020;border:0;margin:0;border-bottom:1px solid #cecece;line-height:16px;padding:16px 28px 24px 28px}#bN015htcoyT__google-cache-hdr *{display:inline;font:inherit;text-align:inherit;color:inherit;line-height:inherit;background:none;border:0;margin:0;padding:0;letter-spacing:0}#bN015htcoyT__google-cache-hdr a{text-decoration:none;color:#1a0dab}#bN015htcoyT__google-cache-hdr a:hover{text-decoration:underline}#bN015htcoyT__google-cache-hdr a:visited{color:#609}#bN015htcoyT__google-cache-hdr div{display:block;margin-top:4px}#bN015htcoyT__google-cache-hdr b{font-weight:bold;display:inline-block;direction:ltr}</style><div id=\"bN015htcoyT__google-cache-hdr\"><div><span>This is Google's cache of <a href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\">https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml</a>.</span> <span>It is a snapshot of the page as it appeared on 21 Jan 2019 05:18:18 GMT.</span> <span>The <a href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\">current page</a> could have changed in the meantime.</span> <a href=\"http://support.google.com/websearch/bin/answer.py?hl=en&p=cached&answer=1687222\"><span>Learn more</span>.</a></div><div><span style=\"display:inline-block;margin-top:8px;margin-right:104px;white-space:nowrap\"><span style=\"margin-right:28px\"><span style=\"font-weight:bold\">Full version</span></span><span style=\"margin-right:28px\"><a href=\"http://webcache.googleusercontent.com/search?q=cache:2BVPV8QGj3oJ:https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml&hl=en&gl=lu&strip=1&vwsrc=0\"><span>Text-only version</span></a></span><span style=\"margin-right:28px\"><a href=\"http://webcache.googleusercontent.com/search?q=cache:2BVPV8QGj3oJ:https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml&hl=en&gl=lu&strip=0&vwsrc=1\"><span>View source</span></a></span></span></div><span style=\"display:inline-block;margin-top:8px;color:#717171\"><span>Tip: To quickly find your search term on this page, press <b>Ctrl+F</b> or <b>⌘-F</b> (Mac) and use the find bar.</span></span></div><div style=\"position:relative;\"><?xml version=\"1.0\" encoding=\"UTF-8\"?>"
```
Same. So where is the content? The file is very large, so if you print it in the console, it will
take quite some time to print, and you will not really be able to make out anything. The best
way would be to try to detect the string `CONTENT` and work from there.
#### 4\.7\.3\.2 Detecting, getting the position and locating strings
When confronted with an atomic vector of strings, you might want to know inside which elements you
can find certain strings. For example, to know which elements of `winchester` contain the string
`CONTENT`, use `str_detect()`:
```
winchester %>%
str_detect("CONTENT")
```
```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [25] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [37] FALSE FALSE FALSE FALSE FALSE FALSE TRUE
```
This returns a boolean atomic vector of the same length as `winchester`. If the string `CONTENT` is
nowhere to be found in an element, the result equals `FALSE`; otherwise it equals `TRUE`. Here it is easy to
see that the last element contains the string `CONTENT`. But what if, instead of having 43 elements,
the vector had 24192 elements, and hundreds of them contained the string `CONTENT`? It would be easier
to instead have the indices of the vector where one can find the word `CONTENT`. This is possible
with `str_which()`:
```
winchester %>%
str_which("CONTENT")
```
```
## [1] 43
```
Here, the result is 43, meaning that the 43rd element of `winchester` contains the string `CONTENT`
somewhere. If we need more precision, we can use `str_locate()` and `str_locate_all()`. To explain
how both these functions work, let’s create a very small example:
```
ancient_philosophers <- c("aristotle", "plato", "epictetus", "seneca the younger", "epicurus", "marcus aurelius")
```
Now suppose I am interested in philosophers whose name ends in `us`. Let us use `str_locate()` first:
```
ancient_philosophers %>%
str_locate("us")
```
```
## start end
## [1,] NA NA
## [2,] NA NA
## [3,] 8 9
## [4,] NA NA
## [5,] 7 8
## [6,] 5 6
```
You can interpret the result as follows: each row gives, for the corresponding element of the vector, where
the string `us` is found. So the 3rd, 5th and 6th philosophers have `us` somewhere in their name.
The result also has two columns: `start` and `end`. These give the position of the string. So the
string `us` can be found starting at position 8 of the 3rd element of the vector, and ending at position
9\. The same goes for the other philosophers. However, consider Marcus Aurelius. He has two names, both
ending with `us`. However, `str_locate()` only shows the position of the `us` in `Marcus`.
To get both `us` strings, you need to use `str_locate_all()`:
```
ancient_philosophers %>%
str_locate_all("us")
```
```
## [[1]]
## start end
##
## [[2]]
## start end
##
## [[3]]
## start end
## [1,] 8 9
##
## [[4]]
## start end
##
## [[5]]
## start end
## [1,] 7 8
##
## [[6]]
## start end
## [1,] 5 6
## [2,] 14 15
```
Now we get the positions of the two `us` in Marcus Aurelius. Doing this on the `winchester` vector
will give us the positions of the `CONTENT` strings, but this is not really important right now. What
matters is that you know how `str_locate()` and `str_locate_all()` work.
So now that we know what interests us in the 43rd element of `winchester`, let’s take a closer
look at it:
```
winchester[43]
```
As you can see, it’s a mess:
```
<TextLine HEIGHT=\"126.0\" WIDTH=\"1731.0\" HPOS=\"17160.0\" VPOS=\"21252.0\"><String HEIGHT=\"114.0\" WIDTH=\"354.0\" HPOS=\"17160.0\" VPOS=\"21264.0\" CONTENT=\"0tV\" WC=\"0.8095238\"/><SP WIDTH=\"131.0\" HPOS=\"17514.0\" VPOS=\"21264.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"111.0\" WIDTH=\"474.0\" HPOS=\"17646.0\" VPOS=\"21258.0\" CONTENT=\"BATES\" WC=\"1.0\"/><SP WIDTH=\"140.0\" HPOS=\"18120.0\" VPOS=\"21258.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"114.0\" WIDTH=\"630.0\" HPOS=\"18261.0\" VPOS=\"21252.0\" CONTENT=\"President\" WC=\"1.0\"><ALTERNATIVE>Prcideht</ALTERNATIVE><ALTERNATIVE>Pride</ALTERNATIVE></String></TextLine><TextLine HEIGHT=\"153.0\" WIDTH=\"1689.0\" HPOS=\"17145.0\" VPOS=\"21417.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"105.0\" WIDTH=\"258.0\" HPOS=\"17145.0\" VPOS=\"21439.0\" CONTENT=\"WM\" WC=\"0.82539684\"><TextLine HEIGHT=\"120.0\" WIDTH=\"2211.0\" HPOS=\"16788.0\" VPOS=\"21870.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"102.0\" HPOS=\"16788.0\" VPOS=\"21894.0\" CONTENT=\"It\" WC=\"1.0\"/><SP WIDTH=\"72.0\" HPOS=\"16890.0\" VPOS=\"21894.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"93.0\" HPOS=\"16962.0\" VPOS=\"21885.0\" CONTENT=\"is\" WC=\"1.0\"/><SP WIDTH=\"80.0\" HPOS=\"17055.0\" VPOS=\"21885.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"102.0\" WIDTH=\"417.0\" HPOS=\"17136.0\" VPOS=\"21879.0\" CONTENT=\"seldom\" WC=\"1.0\"/><SP WIDTH=\"80.0\" HPOS=\"17553.0\" VPOS=\"21879.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"267.0\" HPOS=\"17634.0\" VPOS=\"21873.0\" CONTENT=\"hard\" WC=\"1.0\"/><SP WIDTH=\"81.0\" HPOS=\"17901.0\" VPOS=\"21873.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"87.0\" WIDTH=\"111.0\" HPOS=\"17982.0\" VPOS=\"21879.0\" CONTENT=\"to\" WC=\"1.0\"/><SP WIDTH=\"81.0\" HPOS=\"18093.0\" VPOS=\"21879.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"219.0\" HPOS=\"18174.0\" VPOS=\"21870.0\" CONTENT=\"find\" WC=\"1.0\"/><SP WIDTH=\"77.0\" HPOS=\"18393.0\" VPOS=\"21870.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"69.0\" WIDTH=\"66.0\" HPOS=\"18471.0\" VPOS=\"21894.0\" CONTENT=\"a\" WC=\"1.0\"/><SP WIDTH=\"77.0\" HPOS=\"18537.0\" VPOS=\"21894.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"78.0\" WIDTH=\"384.0\" HPOS=\"18615.0\" VPOS=\"21888.0\" CONTENT=\"succes\" WC=\"0.82539684\"><ALTERNATIVE>success</ALTERNATIVE></String></TextLine><TextLine HEIGHT=\"126.0\" WIDTH=\"2316.0\" HPOS=\"16662.0\" VPOS=\"22008.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"75.0\" WIDTH=\"183.0\" HPOS=\"16662.0\" VPOS=\"22059.0\" CONTENT=\"sor\" WC=\"1.0\"><ALTERNATIVE>soar</ALTERNATIVE></String><SP WIDTH=\"72.0\" HPOS=\"16845.0\" VPOS=\"22059.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"90.0\" WIDTH=\"168.0\" HPOS=\"16917.0\" VPOS=\"22035.0\" CONTENT=\"for\" WC=\"1.0\"/><SP WIDTH=\"72.0\" HPOS=\"17085.0\" VPOS=\"22035.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"69.0\" WIDTH=\"267.0\" HPOS=\"17157.0\" VPOS=\"22050.0\" CONTENT=\"even\" WC=\"1.0\"><ALTERNATIVE>cen</ALTERNATIVE><ALTERNATIVE>cent</ALTERNATIVE></String><SP WIDTH=\"77.0\" HPOS=\"17434.0\" VPOS=\"22050.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"66.0\" WIDTH=\"63.0\" HPOS=\"17502.0\" VPOS=\"22044.0\"
```
The file was imported without any newlines. So we need to insert them ourselves, by splitting the
string in a clever way.
#### 4\.7\.3\.3 Splitting strings
There are two functions included in `{stringr}` to split strings, `str_split()` and `str_split_fixed()`.
Let’s go back to our ancient philosophers. Two of them, Seneca the Younger and Marcus Aurelius, have
something else in common besides being Roman Stoic philosophers: their names are composed of several
words. If we want to split their names at the space character, we can use `str_split()` like this:
```
ancient_philosophers %>%
str_split(" ")
```
```
## [[1]]
## [1] "aristotle"
##
## [[2]]
## [1] "plato"
##
## [[3]]
## [1] "epictetus"
##
## [[4]]
## [1] "seneca" "the" "younger"
##
## [[5]]
## [1] "epicurus"
##
## [[6]]
## [1] "marcus" "aurelius"
```
`str_split()` also has a `simplify = TRUE` option:
```
ancient_philosophers %>%
str_split(" ", simplify = TRUE)
```
```
## [,1] [,2] [,3]
## [1,] "aristotle" "" ""
## [2,] "plato" "" ""
## [3,] "epictetus" "" ""
## [4,] "seneca" "the" "younger"
## [5,] "epicurus" "" ""
## [6,] "marcus" "aurelius" ""
```
This time, the returned object is a matrix.
What about `str_split_fixed()`? The difference is that here you can specify the number of pieces
to return. For example, you could consider the name “Aurelius” to be the middle name of Marcus Aurelius,
and “the younger” to be the middle name of Seneca the Younger. This means that you would want
to split the name only at the first space character, and not at all of them. This is easily achieved
with `str_split_fixed()`:
```
ancient_philosophers %>%
str_split_fixed(" ", 2)
```
```
## [,1] [,2]
## [1,] "aristotle" ""
## [2,] "plato" ""
## [3,] "epictetus" ""
## [4,] "seneca" "the younger"
## [5,] "epicurus" ""
## [6,] "marcus" "aurelius"
```
This gives the expected result.
So how does this help in our case? Well, if you look at what the ALTO file looks like at the beginning
of this section, you will notice that every line ends with the “\>” character. So let’s split at
that character!
```
winchester_text <- winchester[43] %>%
str_split(">")
```
Let’s take a closer look at `winchester_text`:
```
str(winchester_text)
```
```
## List of 1
## $ : chr [1:19706] "</processingStepSettings" "<processingSoftware" "<softwareCreator" "iArchives</softwareCreator" ...
```
So this is a list of length one, and the first, and only, element of that list is an atomic vector
with 19706 elements. Since this is a list of only one element, we can simplify it by saving the
atomic vector in a variable:
```
winchester_text <- winchester_text[[1]]
```
Let’s now look at some lines:
```
winchester_text[1232:1245]
```
```
## [1] "<SP WIDTH=\"66.0\" HPOS=\"5763.0\" VPOS=\"9696.0\"/"
## [2] "<String STYLEREFS=\"ID7\" HEIGHT=\"108.0\" WIDTH=\"612.0\" HPOS=\"5829.0\" VPOS=\"9693.0\" CONTENT=\"Louisville\" WC=\"1.0\""
## [3] "<ALTERNATIVE"
## [4] "Loniile</ALTERNATIVE"
## [5] "<ALTERNATIVE"
## [6] "Lenities</ALTERNATIVE"
## [7] "</String"
## [8] "</TextLine"
## [9] "<TextLine HEIGHT=\"150.0\" WIDTH=\"2520.0\" HPOS=\"4032.0\" VPOS=\"9849.0\""
## [10] "<String STYLEREFS=\"ID7\" HEIGHT=\"108.0\" WIDTH=\"510.0\" HPOS=\"4032.0\" VPOS=\"9861.0\" CONTENT=\"Tobacco\" WC=\"1.0\"/"
## [11] "<SP WIDTH=\"113.0\" HPOS=\"4542.0\" VPOS=\"9861.0\"/"
## [12] "<String STYLEREFS=\"ID7\" HEIGHT=\"105.0\" WIDTH=\"696.0\" HPOS=\"4656.0\" VPOS=\"9861.0\" CONTENT=\"Warehouse\" WC=\"1.0\""
## [13] "<ALTERNATIVE"
## [14] "WHrchons</ALTERNATIVE"
```
This now looks easier to handle. We can narrow it down to only the lines that contain the string
we are interested in, “CONTENT”. First, let’s get the indices:
```
content_winchester_index <- winchester_text %>%
str_which("CONTENT")
```
How many lines contain the string “CONTENT”?
```
length(content_winchester_index)
```
```
## [1] 4462
```
As you can see, this reduces the amount of data we have to work with. Let us save this in a new
variable:
```
content_winchester <- winchester_text[content_winchester_index]
```
#### 4\.7\.3\.4 Matching strings
Matching strings is useful, but only in combination with regular expressions. As stated at the
beginning of this section, we are going to learn about regular expressions in Chapter 10, but in
order to make this section useful, we are going to learn the easiest, but perhaps the most useful
regular expression: `.*`.
Let’s go back to our ancient philosophers, and use `str_match()` and see what happens. Let’s match
the “us” string:
```
ancient_philosophers %>%
str_match("us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "us"
## [4,] NA
## [5,] "us"
## [6,] "us"
```
Not very useful, but what about the regular expression `.*`? How could it help?
```
ancient_philosophers %>%
str_match(".*us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "epictetus"
## [4,] NA
## [5,] "epicurus"
## [6,] "marcus aurelius"
```
That’s already very interesting! So how does `.*` work? To understand, let’s first start by using
`.` alone:
```
ancient_philosophers %>%
str_match(".us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "tus"
## [4,] NA
## [5,] "rus"
## [6,] "cus"
```
This also matched whatever symbol comes just before the “u” from “us”. What if we use two `.` instead?
```
ancient_philosophers %>%
str_match("..us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "etus"
## [4,] NA
## [5,] "urus"
## [6,] "rcus"
```
This time, we get the two symbols that immediately precede “us”. Instead of continuing like this
we now use the `*`, which matches zero or more of `.`. So by combining `*` and `.`, we can match
any symbol repeatedly, until there is nothing more to match. Note that there is also `+`, which works
similarly to `*`, but it matches one or more symbols.
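To make the difference between `*` and `+` concrete, here is a small sketch (my own example): the string “us” alone is matched by `.*us` but not by `.+us`, because `+` requires at least one symbol before “us”:
```
str_match(c("us", "bus"), ".*us")
```
```
##      [,1]
## [1,] "us"
## [2,] "bus"
```
```
str_match(c("us", "bus"), ".+us")
```
```
##      [,1]
## [1,] NA
## [2,] "bus"
```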
There is also a `str_match_all()`:
```
ancient_philosophers %>%
str_match_all(".*us")
```
```
## [[1]]
## [,1]
##
## [[2]]
## [,1]
##
## [[3]]
## [,1]
## [1,] "epictetus"
##
## [[4]]
## [,1]
##
## [[5]]
## [,1]
## [1,] "epicurus"
##
## [[6]]
## [,1]
## [1,] "marcus aurelius"
```
In this particular case it does not change the end result, but keep it in mind for cases like this one:
```
c("haha", "huhu") %>%
str_match("ha")
```
```
## [,1]
## [1,] "ha"
## [2,] NA
```
and:
```
c("haha", "huhu") %>%
str_match_all("ha")
```
```
## [[1]]
## [,1]
## [1,] "ha"
## [2,] "ha"
##
## [[2]]
## [,1]
```
What if we want to match names containing the letter “t”? Easy:
```
ancient_philosophers %>%
str_match(".*t.*")
```
```
## [,1]
## [1,] "aristotle"
## [2,] "plato"
## [3,] "epictetus"
## [4,] "seneca the younger"
## [5,] NA
## [6,] NA
```
So how does this help us with our historical newspaper? Let’s try to get the strings that come
after “CONTENT”:
```
winchester_content <- winchester_text %>%
str_match("CONTENT.*")
```
Let’s use our faithful `str()` function to take a look:
```
winchester_content %>%
str
```
```
## chr [1:19706, 1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA ...
```
Hum, there’s a lot of `NA` values! This is because a lot of the lines from the file did not contain the
string “CONTENT”, so there was no match possible. Let us remove all these `NA`s. Because the
result is a matrix, we cannot use the `filter()` function from `{dplyr}`, so we need to convert it
to a tibble first:
```
winchester_content <- winchester_content %>%
as.tibble() %>%
filter(!is.na(V1))
```
```
## Warning: `as.tibble()` was deprecated in tibble 2.0.0.
## Please use `as_tibble()` instead.
## The signature and semantics have changed, see `?as_tibble`.
```
```
## Warning: The `x` argument of `as_tibble.matrix()` must have unique column names if `.name_repair` is omitted as of tibble 2.0.0.
## Using compatibility `.name_repair`.
```
Because matrix columns do not have names, when a matrix gets converted into a tibble, the first column
gets automatically called `V1`. This is why I filter on this column. Let’s take a look at the data:
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## V1
## <chr>
## 1 "CONTENT=\"J\" WC=\"0.8095238\"/"
## 2 "CONTENT=\"a\" WC=\"0.8095238\"/"
## 3 "CONTENT=\"Ira\" WC=\"0.95238096\"/"
## 4 "CONTENT=\"mj\" WC=\"0.8095238\"/"
## 5 "CONTENT=\"iI\" WC=\"0.8095238\"/"
## 6 "CONTENT=\"tE1r\" WC=\"0.8095238\"/"
```
#### 4\.7\.3\.5 Searching and replacing strings
We are getting close to the final result. We still need to do some cleaning, however. Since our data
is inside a nice tibble, we might as well stick with it. So let’s first rename the column and
change all the strings to lowercase:
```
winchester_content <- winchester_content %>%
mutate(content = tolower(V1)) %>%
select(-V1)
```
Let’s take a look at the result:
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "content=\"j\" wc=\"0.8095238\"/"
## 2 "content=\"a\" wc=\"0.8095238\"/"
## 3 "content=\"ira\" wc=\"0.95238096\"/"
## 4 "content=\"mj\" wc=\"0.8095238\"/"
## 5 "content=\"ii\" wc=\"0.8095238\"/"
## 6 "content=\"te1r\" wc=\"0.8095238\"/"
```
The second part of the string, “wc\=….” is not really interesting. Let’s search and replace this
with an empty string, using `str_replace()`:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "wc.*", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "content=\"j\" "
## 2 "content=\"a\" "
## 3 "content=\"ira\" "
## 4 "content=\"mj\" "
## 5 "content=\"ii\" "
## 6 "content=\"te1r\" "
```
We needed to use the regular expression from before to replace “wc” and every character that follows.
The same can be used to remove “content\=”:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "content=", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "\"j\" "
## 2 "\"a\" "
## 3 "\"ira\" "
## 4 "\"mj\" "
## 5 "\"ii\" "
## 6 "\"te1r\" "
```
We are almost done, but some cleaning is still necessary:
#### 4\.7\.3\.6 Extracting or removing strings
Now, because I know the ALTO spec, I know how to find words that are split between two lines:
```
winchester_content %>%
filter(str_detect(content, "hyppart"))
```
```
## # A tibble: 64 × 1
## content
## <chr>
## 1 "\"aver\" subs_type=\"hyppart1\" subs_content=\"average\" "
## 2 "\"age\" subs_type=\"hyppart2\" subs_content=\"average\" "
## 3 "\"considera\" subs_type=\"hyppart1\" subs_content=\"consideration\" "
## 4 "\"tion\" subs_type=\"hyppart2\" subs_content=\"consideration\" "
## 5 "\"re\" subs_type=\"hyppart1\" subs_content=\"resigned\" "
## 6 "\"signed\" subs_type=\"hyppart2\" subs_content=\"resigned\" "
## 7 "\"install\" subs_type=\"hyppart1\" subs_content=\"installed\" "
## 8 "\"ed\" subs_type=\"hyppart2\" subs_content=\"installed\" "
## 9 "\"be\" subs_type=\"hyppart1\" subs_content=\"before\" "
## 10 "\"fore\" subs_type=\"hyppart2\" subs_content=\"before\" "
## # … with 54 more rows
```
For instance, the word “average” was split over two lines, the first part of the word, “aver” on the
first line, and the second part of the word, “age”, on the second line. We want to keep what comes
after “subs\_content”. Let’s extract the word “average” using `str_extract()`. However, because only
some words were split between two lines, we first need to detect where the string “hyppart1” is
located, and only then can we extract what comes after “subs\_content”. Thus, we need to combine
`str_detect()` to first detect the string, and then `str_extract_all()` to extract what comes after
“subs\_content”:
```
winchester_content <- winchester_content %>%
mutate(content = if_else(str_detect(content, "hyppart1"),
str_extract_all(content, "content=.*", simplify = TRUE),
content))
```
Let’s take a look at the result:
```
winchester_content %>%
filter(str_detect(content, "content"))
```
```
## # A tibble: 64 × 1
## content
## <chr>
## 1 "content=\"average\" "
## 2 "\"age\" subs_type=\"hyppart2\" subs_content=\"average\" "
## 3 "content=\"consideration\" "
## 4 "\"tion\" subs_type=\"hyppart2\" subs_content=\"consideration\" "
## 5 "content=\"resigned\" "
## 6 "\"signed\" subs_type=\"hyppart2\" subs_content=\"resigned\" "
## 7 "content=\"installed\" "
## 8 "\"ed\" subs_type=\"hyppart2\" subs_content=\"installed\" "
## 9 "content=\"before\" "
## 10 "\"fore\" subs_type=\"hyppart2\" subs_content=\"before\" "
## # … with 54 more rows
```
We still need to get rid of the string “content\=” and then of all the strings that contain “hyppart2”,
which are not needed now:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "content=", "")) %>%
mutate(content = if_else(str_detect(content, "hyppart2"), NA_character_, content))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "\"j\" "
## 2 "\"a\" "
## 3 "\"ira\" "
## 4 "\"mj\" "
## 5 "\"ii\" "
## 6 "\"te1r\" "
```
Almost done! We only need to remove the `"` characters:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace_all(content, "\"", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "j "
## 2 "a "
## 3 "ira "
## 4 "mj "
## 5 "ii "
## 6 "te1r "
```
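As an aside, `{stringr}` also provides `str_remove_all()`, which is a shorthand for replacing a pattern with the empty string; the previous step could thus also be written like this (a sketch, equivalent to the `str_replace_all()` call above):
```
winchester_content %>%
  mutate(content = str_remove_all(content, "\""))
```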
Let’s remove the leftover space characters with `str_trim()`:
```
winchester_content <- winchester_content %>%
mutate(content = str_trim(content))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 j
## 2 a
## 3 ira
## 4 mj
## 5 ii
## 6 te1r
```
To finish off this section, let’s remove stop words (words that do not add any meaning to a sentence,
such as “as”, “and”…) and words that are composed of 3 characters or fewer. You can find a dataset
with stopwords inside the `{stopwords}` package (the `anti_join()` below keeps only the words that are not in the stopwords list):
```
library(stopwords)
data(data_stopwords_stopwordsiso)
eng_stopwords <- tibble("content" = data_stopwords_stopwordsiso$en)
winchester_content <- winchester_content %>%
anti_join(eng_stopwords) %>%
filter(nchar(content) > 3)
```
```
## Joining, by = "content"
```
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 te1r
## 2 jilas
## 3 edition
## 4 winchester
## 5 news
## 6 injuries
```
That’s it for this section! You now know how to work with strings, but in Chapter 10 we are going
one step further by learning about regular expressions, which offer much more power.
### 4\.7\.4 Tidy data frames with `{tibble}`
We have already seen and used several functions from the `{tibble}` package. Let’s now go through
some more useful functions.
#### 4\.7\.4\.1 Creating tibbles
`tribble()` makes it easy to create a tibble row by row, manually:
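For instance, here is a minimal sketch that builds the same small dataset as the named-list example below:
```
tribble(~combustion, ~doors,
        "oil", 3,
        "diesel", 5,
        "oil", 5,
        "electric", 5)
```
```
## # A tibble: 4 × 2
##   combustion doors
##   <chr>      <dbl>
## 1 oil            3
## 2 diesel         5
## 3 oil            5
## 4 electric       5
```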
It is also possible to create a tibble from a named list:
```
as_tibble(list("combustion" = c("oil", "diesel", "oil", "electric"),
"doors" = c(3, 5, 5, 5)))
```
```
## # A tibble: 4 × 2
## combustion doors
## <chr> <dbl>
## 1 oil 3
## 2 diesel 5
## 3 oil 5
## 4 electric 5
```
You can also use `enframe()` to convert a list into a two-column tibble of names and values, which works even when the elements have different lengths:
```
enframe(list("combustion" = c(1,2), "doors" = c(1,2,4), "cylinders" = c(1,8,9,10)))
```
```
## # A tibble: 3 × 2
## name value
## <chr> <list>
## 1 combustion <dbl [2]>
## 2 doors <dbl [3]>
## 3 cylinders <dbl [4]>
```
### 4\.7\.1 Manipulate factor variables with `{forcats}`
Factor variables are very useful but not very easy to manipulate. `{forcats}` contains
functions that make working on factor variables painless. In my opinion, the four following functions, `fct_recode()`, `fct_relevel()`, `fct_reorder()` and `fct_relabel()`, are the ones you must
know, so that’s what I’ll be showing.
Remember in chapter 3 when I very quickly explained what `factor` variables were? In this section,
we are going to work a little bit with this type of variable. `factor`s are very useful, and the
`{forcats}` package includes some handy functions to work with them. First, let’s load the `{forcats}` package:
```
library(forcats)
```
As an example, we are going to work with the `gss_cat` dataset that is included in `{forcats}`. Let’s
load the data:
```
data(gss_cat)
head(gss_cat)
```
```
## # A tibble: 6 × 9
## year marital age race rincome partyid relig denom tvhours
## <int> <fct> <int> <fct> <fct> <fct> <fct> <fct> <int>
## 1 2000 Never married 26 White $8000 to 9999 Ind,near r… Prot… Sout… 12
## 2 2000 Divorced 48 White $8000 to 9999 Not str re… Prot… Bapt… NA
## 3 2000 Widowed 67 White Not applicable Independent Prot… No d… 2
## 4 2000 Never married 39 White Not applicable Ind,near r… Orth… Not … 4
## 5 2000 Divorced 25 White Not applicable Not str de… None Not … 1
## 6 2000 Married 25 White $20000 - 24999 Strong dem… Prot… Sout… NA
```
As you can see, `marital`, `race`, `rincome` and `partyid` are all factor variables. Let’s take a closer
look at `marital`:
```
str(gss_cat$marital)
```
```
## Factor w/ 6 levels "No answer","Never married",..: 2 4 5 2 4 6 2 4 6 6 ...
```
and let’s see `rincome`:
```
str(gss_cat$rincome)
```
```
## Factor w/ 16 levels "No answer","Don't know",..: 8 8 16 16 16 5 4 9 4 4 ...
```
`factor` variables have different levels and the `{forcats}` package includes functions that allow
you to recode, collapse and do all sorts of things with these levels. For example, using
`forcats::fct_recode()` you can recode levels:
```
gss_cat <- gss_cat %>%
mutate(marital = fct_recode(marital,
refuse = "No answer",
never_married = "Never married",
divorced = "Separated",
divorced = "Divorced",
widowed = "Widowed",
married = "Married"))
gss_cat %>%
tabyl(marital)
```
```
## marital n percent
## refuse 17 0.0007913234
## never_married 5416 0.2521063166
## divorced 4126 0.1920588372
## widowed 1807 0.0841130196
## married 10117 0.4709305032
```
Using `fct_recode()`, I was able to recode the levels and collapse `Separated` and `Divorced` to
a single category called `divorced`. As you can see, `refuse` and `widowed` each make up less than 10% of
the observations, so maybe you’d want to lump these categories together:
```
gss_cat <- gss_cat %>%
mutate(marital = fct_lump(marital, prop = 0.10, other_level = "other"))
gss_cat %>%
tabyl(marital)
```
```
## marital n percent
## never_married 5416 0.25210632
## divorced 4126 0.19205884
## married 10117 0.47093050
## other 1824 0.08490434
```
`fct_reorder()` is especially useful for plotting. We will explore plotting in the next chapter,
but to show you why `fct_reorder()` is so useful, I will create a barplot, first without
using `fct_reorder()` to re\-order the factors, then with reordering. Do not worry if you don’t
understand all the code for now:
```
gss_cat %>%
tabyl(marital) %>%
ggplot() +
geom_col(aes(y = n, x = marital)) +
coord_flip()
```
It would be much better if the categories were ordered by frequency. This is easy to do with
`fct_reorder()`:
```
gss_cat %>%
tabyl(marital) %>%
mutate(marital = fct_reorder(marital, n, .desc = FALSE)) %>%
ggplot() +
geom_col(aes(y = n, x = marital)) +
coord_flip()
```
Much better! In Chapter 6, we are going to learn about `{ggplot2}`.
The last family of functions I’d like to mention are the `fct_lump*()` functions. These make it possible
to lump several levels of a factor into a new *other* level:
```
gss_cat %>%
mutate(
# Description of the different functions taken from help(fct_lump)
denom_lowfreq = fct_lump_lowfreq(denom), # lumps together the least frequent levels, ensuring that "other" is still the smallest level.
denom_min = fct_lump_min(denom, min = 10), # lumps levels that appear fewer than min times.
denom_n = fct_lump_n(denom, n = 3), # lumps all levels except for the n most frequent (or least frequent if n < 0)
denom_prop = fct_lump_prop(denom, prop = 0.10) # lumps levels that appear fewer than prop * n times.
)
```
```
## # A tibble: 21,483 × 13
## year marital age race rincome partyid relig denom tvhours denom…¹ denom…²
## <int> <fct> <int> <fct> <fct> <fct> <fct> <fct> <int> <fct> <fct>
## 1 2000 never_… 26 White $8000 … Ind,ne… Prot… Sout… 12 Southe… Southe…
## 2 2000 divorc… 48 White $8000 … Not st… Prot… Bapt… NA Baptis… Baptis…
## 3 2000 other 67 White Not ap… Indepe… Prot… No d… 2 No den… No den…
## 4 2000 never_… 39 White Not ap… Ind,ne… Orth… Not … 4 Not ap… Not ap…
## 5 2000 divorc… 25 White Not ap… Not st… None Not … 1 Not ap… Not ap…
## 6 2000 married 25 White $20000… Strong… Prot… Sout… NA Southe… Southe…
## 7 2000 never_… 36 White $25000… Not st… Chri… Not … 3 Not ap… Not ap…
## 8 2000 divorc… 44 White $7000 … Ind,ne… Prot… Luth… NA Luther… Luther…
## 9 2000 married 44 White $25000… Not st… Prot… Other 0 Other Other
## 10 2000 married 47 White $25000… Strong… Prot… Sout… 3 Southe… Southe…
## # … with 21,473 more rows, 2 more variables: denom_n <fct>, denom_prop <fct>,
## # and abbreviated variable names ¹denom_lowfreq, ²denom_min
```
There are many others, so I’d advise you to go through the package’s function [reference](https://forcats.tidyverse.org/reference/index.html).
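`fct_relevel()` and `fct_relabel()` were listed at the start of this section but not demonstrated, so here is a minimal sketch of both, applied to the lumped `marital` factor created above (output omitted):
```
# fct_relevel() moves one or more levels to the front of the level order
gss_cat %>%
  mutate(marital = fct_relevel(marital, "married")) %>%
  pull(marital) %>%
  levels()

# fct_relabel() applies a function to each level's label
gss_cat %>%
  mutate(marital = fct_relabel(marital, toupper)) %>%
  pull(marital) %>%
  levels()
```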
### 4\.7\.2 Get your dates right with `{lubridate}`
`{lubridate}` is yet another tidyverse package, that makes dealing with dates or durations (and intervals) as
painless as possible. I do not use every function contained in the package daily, and as such will
only focus on some of the functions. However, if you have to deal with dates often,
you might want to explore the package thouroughly.
#### 4\.7\.2\.1 Defining dates, the tidy way
Let’s load new dataset, called *independence* from the Github repo of the book:
```
independence_path <- tempfile(fileext = "rds")
download.file(url = "https://github.com/b-rodrigues/modern_R/blob/master/datasets/independence.rds?raw=true",
destfile = independence_path)
independence <- readRDS(independence_path)
```
This dataset was scraped from the following Wikipedia [page](https://en.wikipedia.org/wiki/Decolonisation_of_Africa#Timeline).
It shows when African countries gained independence and from which colonial powers. In Chapter 10, I
will show you how to scrape Wikipedia pages using R. For now, let’s take a look at the contents
of the dataset:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ indep…² first…³ indep…⁴
## <chr> <chr> <chr> <chr> <chr> <chr>
## 1 Liberia Liberia United… 26 Jul… Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal O… United… 31 May… Louis … South …
## 3 Egypt Sultanate of Egypt United… 28 Feb… Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 10 Feb… Haile … -
## 5 Libya British Military Administration… United… 24 Dec… Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1 Janu… Ismail… -
## 7 Tunisia French Protectorate of Tunisia France 20 Mar… Muhamm… -
## 8 Morocco French Protectorate in Morocco … France… 2 Marc… Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 6 Marc… Kwame … Gold C…
## 10 Guinea French West Africa France 2 Octo… Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
as you can see, the date of independence is in a format that might make it difficult to answer questions
such as *Which African countries gained independence before 1960 ?* for two reasons. First of all,
the date uses the name of the month instead of the number of the month, and second of all the type of
the independence day column is *character* and not “date”. So our first task is to correctly define the column
as being of type date, while making sure that R understands that *January* is supposed to be “01”, and so
on. There are several helpful functions included in `{lubridate}` to convert columns to dates. For instance
if the column you want to convert is of the form “2012\-11\-21”, then you would use the function `ymd()`,
for “year\-month\-day”. If, however the column is “2012\-21\-11”, then you would use `ydm()`. There’s
a few of these helper functions, and they can handle a lot of different formats for dates. In our case,
having the name of the month instead of the number might seem quite problematic, but it turns out
that this is a case that `{lubridate}` handles painfully:
```
library(lubridate)
```
```
##
## Attaching package: 'lubridate'
```
```
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
```
```
independence <- independence %>%
mutate(independence_date = dmy(independence_date))
```
```
## Warning: 5 failed to parse.
```
Some dates failed to parse, for instance for Morocco. This is because these countries have several
independence dates; this means that the string to convert looks like:
```
"2 March 1956
7 April 1956
10 April 1958
4 January 1969"
```
which obviously cannot be converted by `{lubridate}` without further manipulation. I ignore these cases for
simplicity’s sake.
#### 4\.7\.2\.2 Data manipulation with dates
Let’s take a look at the data now:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ independ…² first…³ indep…⁴
## <chr> <chr> <chr> <date> <chr> <chr>
## 1 Liberia Liberia United… 1847-07-26 Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal… United… 1910-05-31 Louis … South …
## 3 Egypt Sultanate of Egypt United… 1922-02-28 Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 1947-02-10 Haile … -
## 5 Libya British Military Administrat… United… 1951-12-24 Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1956-01-01 Ismail… -
## 7 Tunisia French Protectorate of Tunis… France 1956-03-20 Muhamm… -
## 8 Morocco French Protectorate in Moroc… France… NA Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 1957-03-06 Kwame … Gold C…
## 10 Guinea French West Africa France 1958-10-02 Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
As you can see, we now have a date column in the right format. We can now answer questions such as
*Which countries gained independence before 1960?* quite easily, by using the functions `year()`,
`month()` and `day()`. Let’s see which countries gained independence before 1960:
```
independence %>%
filter(year(independence_date) <= 1960) %>%
pull(country)
```
```
## [1] "Liberia" "South Africa"
## [3] "Egypt" "Eritrea"
## [5] "Libya" "Sudan"
## [7] "Tunisia" "Ghana"
## [9] "Guinea" "Cameroon"
## [11] "Togo" "Mali"
## [13] "Madagascar" "Democratic Republic of the Congo"
## [15] "Benin" "Niger"
## [17] "Burkina Faso" "Ivory Coast"
## [19] "Chad" "Central African Republic"
## [21] "Republic of the Congo" "Gabon"
## [23] "Mauritania"
```
You guessed it, `year()` extracts the year of the date column and converts it as a *numeric* so that we can work
on it. This is the same for `month()` or `day()`. Let’s try to see if countries gained their independence on
Christmas Eve:
```
independence %>%
filter(month(independence_date) == 12,
day(independence_date) == 24) %>%
pull(country)
```
```
## [1] "Libya"
```
Seems like Libya was the only one! You can also operate on dates. For instance, let’s compute the difference between
two dates, using the `interval()` column:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since)
```
```
## # A tibble: 54 × 2
## country independent_since
## <chr> <Interval>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC
## 8 Morocco NA--NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC
## # … with 44 more rows
```
The `independent_since` column now contains an *interval* object that we can convert to years:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since) %>%
mutate(years_independent = as.numeric(independent_since, "years"))
```
```
## # A tibble: 54 × 3
## country independent_since years_independent
## <chr> <Interval> <dbl>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC 175.
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC 112.
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC 101.
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC 75.7
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC 70.8
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC 66.8
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC 66.6
## 8 Morocco NA--NA NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC 65.6
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC 64.1
## # … with 44 more rows
```
We can now see for how long the last country to gain independence has been independent.
Because the data is not tidy (in some cases, an African country was colonized by two powers,
see Libya), I will only focus on 4 European colonial powers: Belgium, France, Portugal and the United Kingdom:
```
independence %>%
filter(colonial_power %in% c("Belgium", "France", "Portugal", "United Kingdom")) %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
mutate(years_independent = as.numeric(independent_since, "years")) %>%
group_by(colonial_power) %>%
summarise(last_colony_independent_for = min(years_independent, na.rm = TRUE))
```
```
## # A tibble: 4 × 2
## colonial_power last_colony_independent_for
## <chr> <dbl>
## 1 Belgium 60.3
## 2 France 45.3
## 3 Portugal 47.0
## 4 United Kingdom 46.3
```
#### 4\.7\.2\.3 Arithmetic with dates
Adding or substracting days to dates is quite easy:
```
ymd("2018-12-31") + 16
```
```
## [1] "2019-01-16"
```
It is also possible to be more explicit and use `days()`:
```
ymd("2018-12-31") + days(16)
```
```
## [1] "2019-01-16"
```
To add years, you can use `years()`:
```
ymd("2018-12-31") + years(1)
```
```
## [1] "2019-12-31"
```
But you have to be careful with leap years:
```
ymd("2016-02-29") + years(1)
```
```
## [1] NA
```
Because 2017 is not a leap year, the above computation returns `NA`. The same goes for months with
a different number of days:
```
ymd("2018-12-31") + months(2)
```
```
## [1] NA
```
The way to solve these issues is to use the special `%m+%` infix operator:
```
ymd("2016-02-29") %m+% years(1)
```
```
## [1] "2017-02-28"
```
and for months:
```
ymd("2018-12-31") %m+% months(2)
```
```
## [1] "2019-02-28"
```
`{lubridate}` contains many more functions. If you often work with dates, duration or interval
data, `{lubridate}` is a package that you have to add to your toolbox.
#### 4\.7\.2\.1 Defining dates, the tidy way
Let’s load new dataset, called *independence* from the Github repo of the book:
```
independence_path <- tempfile(fileext = "rds")
download.file(url = "https://github.com/b-rodrigues/modern_R/blob/master/datasets/independence.rds?raw=true",
destfile = independence_path)
independence <- readRDS(independence_path)
```
This dataset was scraped from the following Wikipedia [page](https://en.wikipedia.org/wiki/Decolonisation_of_Africa#Timeline).
It shows when African countries gained independence and from which colonial powers. In Chapter 10, I
will show you how to scrape Wikipedia pages using R. For now, let’s take a look at the contents
of the dataset:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ indep…² first…³ indep…⁴
## <chr> <chr> <chr> <chr> <chr> <chr>
## 1 Liberia Liberia United… 26 Jul… Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal O… United… 31 May… Louis … South …
## 3 Egypt Sultanate of Egypt United… 28 Feb… Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 10 Feb… Haile … -
## 5 Libya British Military Administration… United… 24 Dec… Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1 Janu… Ismail… -
## 7 Tunisia French Protectorate of Tunisia France 20 Mar… Muhamm… -
## 8 Morocco French Protectorate in Morocco … France… 2 Marc… Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 6 Marc… Kwame … Gold C…
## 10 Guinea French West Africa France 2 Octo… Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
as you can see, the date of independence is in a format that might make it difficult to answer questions
such as *Which African countries gained independence before 1960 ?* for two reasons. First of all,
the date uses the name of the month instead of the number of the month, and second of all the type of
the independence day column is *character* and not “date”. So our first task is to correctly define the column
as being of type date, while making sure that R understands that *January* is supposed to be “01”, and so
on. There are several helpful functions included in `{lubridate}` to convert columns to dates. For instance
if the column you want to convert is of the form “2012\-11\-21”, then you would use the function `ymd()`,
for “year\-month\-day”. If, however the column is “2012\-21\-11”, then you would use `ydm()`. There’s
a few of these helper functions, and they can handle a lot of different formats for dates. In our case,
having the name of the month instead of the number might seem quite problematic, but it turns out
that this is a case that `{lubridate}` handles painfully:
```
library(lubridate)
```
```
##
## Attaching package: 'lubridate'
```
```
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
```
```
independence <- independence %>%
mutate(independence_date = dmy(independence_date))
```
```
## Warning: 5 failed to parse.
```
Some dates failed to parse, for instance for Morocco. This is because these countries have several
independence dates; this means that the string to convert looks like:
```
"2 March 1956
7 April 1956
10 April 1958
4 January 1969"
```
which obviously cannot be converted by `{lubridate}` without further manipulation. I ignore these cases for
simplicity’s sake.
#### 4\.7\.2\.2 Data manipulation with dates
Let’s take a look at the data now:
```
independence
```
```
## # A tibble: 54 × 6
## country colonial_name colon…¹ independ…² first…³ indep…⁴
## <chr> <chr> <chr> <date> <chr> <chr>
## 1 Liberia Liberia United… 1847-07-26 Joseph… Liberi…
## 2 South Africa Cape Colony Colony of Natal… United… 1910-05-31 Louis … South …
## 3 Egypt Sultanate of Egypt United… 1922-02-28 Fuad I Egypti…
## 4 Eritrea Italian Eritrea Italy 1947-02-10 Haile … -
## 5 Libya British Military Administrat… United… 1951-12-24 Idris -
## 6 Sudan Anglo-Egyptian Sudan United… 1956-01-01 Ismail… -
## 7 Tunisia French Protectorate of Tunis… France 1956-03-20 Muhamm… -
## 8 Morocco French Protectorate in Moroc… France… NA Mohamm… Ifni W…
## 9 Ghana Gold Coast United… 1957-03-06 Kwame … Gold C…
## 10 Guinea French West Africa France 1958-10-02 Ahmed … Guinea…
## # … with 44 more rows, and abbreviated variable names ¹colonial_power,
## # ²independence_date, ³first_head_of_state, ⁴independence_won_through
```
As you can see, we now have a date column in the right format. We can now answer questions such as
*Which countries gained independence before 1960?* quite easily, by using the functions `year()`,
`month()` and `day()`. Let’s see which countries gained independence before 1960:
```
independence %>%
filter(year(independence_date) <= 1960) %>%
pull(country)
```
```
## [1] "Liberia" "South Africa"
## [3] "Egypt" "Eritrea"
## [5] "Libya" "Sudan"
## [7] "Tunisia" "Ghana"
## [9] "Guinea" "Cameroon"
## [11] "Togo" "Mali"
## [13] "Madagascar" "Democratic Republic of the Congo"
## [15] "Benin" "Niger"
## [17] "Burkina Faso" "Ivory Coast"
## [19] "Chad" "Central African Republic"
## [21] "Republic of the Congo" "Gabon"
## [23] "Mauritania"
```
You guessed it, `year()` extracts the year of the date column and converts it as a *numeric* so that we can work
on it. This is the same for `month()` or `day()`. Let’s try to see if countries gained their independence on
Christmas Eve:
```
independence %>%
filter(month(independence_date) == 12,
day(independence_date) == 24) %>%
pull(country)
```
```
## [1] "Libya"
```
Seems like Libya was the only one! You can also operate on dates. For instance, let’s compute the difference between
two dates, using the `interval()` column:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since)
```
```
## # A tibble: 54 × 2
## country independent_since
## <chr> <Interval>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC
## 8 Morocco NA--NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC
## # … with 44 more rows
```
The `independent_since` column now contains an *interval* object that we can convert to years:
```
independence %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
select(country, independent_since) %>%
mutate(years_independent = as.numeric(independent_since, "years"))
```
```
## # A tibble: 54 × 3
## country independent_since years_independent
## <chr> <Interval> <dbl>
## 1 Liberia 1847-07-26 UTC--2022-10-24 UTC 175.
## 2 South Africa 1910-05-31 UTC--2022-10-24 UTC 112.
## 3 Egypt 1922-02-28 UTC--2022-10-24 UTC 101.
## 4 Eritrea 1947-02-10 UTC--2022-10-24 UTC 75.7
## 5 Libya 1951-12-24 UTC--2022-10-24 UTC 70.8
## 6 Sudan 1956-01-01 UTC--2022-10-24 UTC 66.8
## 7 Tunisia 1956-03-20 UTC--2022-10-24 UTC 66.6
## 8 Morocco NA--NA NA
## 9 Ghana 1957-03-06 UTC--2022-10-24 UTC 65.6
## 10 Guinea 1958-10-02 UTC--2022-10-24 UTC 64.1
## # … with 44 more rows
```
We can now see for how long the last country to gain independence has been independent.
Because the data is not tidy (in some cases, an African country was colonized by two powers,
see Libya), I will only focus on 4 European colonial powers: Belgium, France, Portugal and the United Kingdom:
```
independence %>%
filter(colonial_power %in% c("Belgium", "France", "Portugal", "United Kingdom")) %>%
mutate(today = lubridate::today()) %>%
mutate(independent_since = interval(independence_date, today)) %>%
mutate(years_independent = as.numeric(independent_since, "years")) %>%
group_by(colonial_power) %>%
summarise(last_colony_independent_for = min(years_independent, na.rm = TRUE))
```
```
## # A tibble: 4 × 2
## colonial_power last_colony_independent_for
## <chr> <dbl>
## 1 Belgium 60.3
## 2 France 45.3
## 3 Portugal 47.0
## 4 United Kingdom 46.3
```
#### 4\.7\.2\.3 Arithmetic with dates
Adding or substracting days to dates is quite easy:
```
ymd("2018-12-31") + 16
```
```
## [1] "2019-01-16"
```
It is also possible to be more explicit and use `days()`:
```
ymd("2018-12-31") + days(16)
```
```
## [1] "2019-01-16"
```
To add years, you can use `years()`:
```
ymd("2018-12-31") + years(1)
```
```
## [1] "2019-12-31"
```
But you have to be careful with leap years:
```
ymd("2016-02-29") + years(1)
```
```
## [1] NA
```
Because 2017 is not a leap year, the above computation returns `NA`. The same goes for months with
a different number of days:
```
ymd("2018-12-31") + months(2)
```
```
## [1] NA
```
The way to solve these issues is to use the special `%m+%` infix operator:
```
ymd("2016-02-29") %m+% years(1)
```
```
## [1] "2017-02-28"
```
and for months:
```
ymd("2018-12-31") %m+% months(2)
```
```
## [1] "2019-02-28"
```
`{lubridate}` contains many more functions. If you often work with dates, duration or interval
data, `{lubridate}` is a package that you have to add to your toolbox.
### 4\.7\.3 Manipulate strings with `{stringr}`
`{stringr}` contains functions to manipulate strings. In Chapter 10, I will teach you about regular
expressions, but the functions contained in `{stringr}` allow you to already do a lot of work on
strings, without needing to be a regular expression expert.
I will discuss the most common string operations: detecting, locating, matching, searching and
replacing, and exctracting/removing strings.
To introduce these operations, let us use an ALTO file of an issue of *The Winchester News* from
October 31, 1910, which you can find on this
[link](https://gist.githubusercontent.com/b-rodrigues/5139560e7d0f2ecebe5da1df3629e015/raw/e3031d894ffb97217ddbad1ade1b307c9937d2c8/gistfile1.txt) (to see
how the newspaper looked like,
[click here](https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/)). I re\-hosted
the file on a public gist for archiving purposes. While working on the book, the original site went
down several times…
ALTO is an XML schema for the description of text OCR and layout information of pages for digitzed
material, such as newspapers (source: [ALTO Wikipedia page](https://en.wikipedia.org/wiki/ALTO_(XML))).
For more details, you can read my
[blogpost](https://www.brodrigues.co/blog/2019-01-13-newspapers_mets_alto/)
on the matter, but for our current purposes, it is enough to know that the file contains the text
of newspaper articles. The file looks like this:
```
<TextLine HEIGHT="138.0" WIDTH="2434.0" HPOS="4056.0" VPOS="5814.0">
<String STYLEREFS="ID7" HEIGHT="108.0" WIDTH="393.0" HPOS="4056.0" VPOS="5838.0" CONTENT="timore" WC="0.82539684">
<ALTERNATIVE>timole</ALTERNATIVE>
<ALTERNATIVE>tlnldre</ALTERNATIVE>
<ALTERNATIVE>timor</ALTERNATIVE>
<ALTERNATIVE>insole</ALTERNATIVE>
<ALTERNATIVE>landed</ALTERNATIVE>
</String>
<SP WIDTH="74.0" HPOS="4449.0" VPOS="5838.0"/>
<String STYLEREFS="ID7" HEIGHT="105.0" WIDTH="432.0" HPOS="4524.0" VPOS="5847.0" CONTENT="market" WC="0.95238096"/>
<SP WIDTH="116.0" HPOS="4956.0" VPOS="5847.0"/>
<String STYLEREFS="ID7" HEIGHT="69.0" WIDTH="138.0" HPOS="5073.0" VPOS="5883.0" CONTENT="as" WC="0.96825397"/>
<SP WIDTH="74.0" HPOS="5211.0" VPOS="5883.0"/>
<String STYLEREFS="ID7" HEIGHT="69.0" WIDTH="285.0" HPOS="5286.0" VPOS="5877.0" CONTENT="were" WC="1.0">
<ALTERNATIVE>verc</ALTERNATIVE>
<ALTERNATIVE>veer</ALTERNATIVE>
</String>
<SP WIDTH="68.0" HPOS="5571.0" VPOS="5877.0"/>
<String STYLEREFS="ID7" HEIGHT="111.0" WIDTH="147.0" HPOS="5640.0" VPOS="5838.0" CONTENT="all" WC="1.0"/>
<SP WIDTH="83.0" HPOS="5787.0" VPOS="5838.0"/>
<String STYLEREFS="ID7" HEIGHT="111.0" WIDTH="183.0" HPOS="5871.0" VPOS="5835.0" CONTENT="the" WC="0.95238096">
<ALTERNATIVE>tll</ALTERNATIVE>
<ALTERNATIVE>Cu</ALTERNATIVE>
<ALTERNATIVE>tall</ALTERNATIVE>
</String>
<SP WIDTH="75.0" HPOS="6054.0" VPOS="5835.0"/>
<String STYLEREFS="ID3" HEIGHT="132.0" WIDTH="351.0" HPOS="6129.0" VPOS="5814.0" CONTENT="cattle" WC="0.95238096"/>
</TextLine>
```
We are interested in the strings after `CONTENT=`. We are going to use functions from the `{stringr}`
package to get the strings after `CONTENT=`. In Chapter 10, we are going to explore this file
again, but using complex regular expressions to get all the content in one go.
#### 4\.7\.3\.1 Getting text data into RStudio
First of all, let us read in the file:
```
winchester <- read_lines("https://gist.githubusercontent.com/b-rodrigues/5139560e7d0f2ecebe5da1df3629e015/raw/e3031d894ffb97217ddbad1ade1b307c9937d2c8/gistfile1.txt")
```
Even though the file is an XML file, I still read it in using `read_lines()` and not `read_xml()`
from the `{xml2}` package. This is for the purposes of the current exercise, and also because I
always have trouble with XML files, and prefer to treat them as simple text files, and use regular
expressions to get what I need.
Now that the ALTO file is read in and saved in the `winchester` variable, you might want to print
the whole thing in the console. Before that, take a look at the structure:
```
str(winchester)
```
```
## chr [1:43] "" ...
```
So the `winchester` variable is a character atomic vector with 43 elements. So first, we need to
understand what these elements are. Let’s start with the first one:
```
winchester[1]
```
```
## [1] ""
```
Ok, so it seems like the first element is part of the header of the file. What about the second one?
```
winchester[2]
```
```
## [1] "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"><base href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\"><style>body{margin-left:0;margin-right:0;margin-top:0}#bN015htcoyT__google-cache-hdr{background:#f5f5f5;font:13px arial,sans-serif;text-align:left;color:#202020;border:0;margin:0;border-bottom:1px solid #cecece;line-height:16px;padding:16px 28px 24px 28px}#bN015htcoyT__google-cache-hdr *{display:inline;font:inherit;text-align:inherit;color:inherit;line-height:inherit;background:none;border:0;margin:0;padding:0;letter-spacing:0}#bN015htcoyT__google-cache-hdr a{text-decoration:none;color:#1a0dab}#bN015htcoyT__google-cache-hdr a:hover{text-decoration:underline}#bN015htcoyT__google-cache-hdr a:visited{color:#609}#bN015htcoyT__google-cache-hdr div{display:block;margin-top:4px}#bN015htcoyT__google-cache-hdr b{font-weight:bold;display:inline-block;direction:ltr}</style><div id=\"bN015htcoyT__google-cache-hdr\"><div><span>This is Google's cache of <a href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\">https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml</a>.</span> <span>It is a snapshot of the page as it appeared on 21 Jan 2019 05:18:18 GMT.</span> <span>The <a href=\"https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml\">current page</a> could have changed in the meantime.</span> <a href=\"http://support.google.com/websearch/bin/answer.py?hl=en&p=cached&answer=1687222\"><span>Learn more</span>.</a></div><div><span style=\"display:inline-block;margin-top:8px;margin-right:104px;white-space:nowrap\"><span style=\"margin-right:28px\"><span style=\"font-weight:bold\">Full version</span></span><span style=\"margin-right:28px\"><a href=\"http://webcache.googleusercontent.com/search?q=cache:2BVPV8QGj3oJ:https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml&hl=en&gl=lu&strip=1&vwsrc=0\"><span>Text-only version</span></a></span><span style=\"margin-right:28px\"><a href=\"http://webcache.googleusercontent.com/search?q=cache:2BVPV8QGj3oJ:https://chroniclingamerica.loc.gov/lccn/sn86069133/1910-10-31/ed-1/seq-1/ocr.xml&hl=en&gl=lu&strip=0&vwsrc=1\"><span>View source</span></a></span></span></div><span style=\"display:inline-block;margin-top:8px;color:#717171\"><span>Tip: To quickly find your search term on this page, press <b>Ctrl+F</b> or <b>⌘-F</b> (Mac) and use the find bar.</span></span></div><div style=\"position:relative;\"><?xml version=\"1.0\" encoding=\"UTF-8\"?>"
```
Same. So where is the content? The file is very large, so if you print it in the console, it will
take quite some time to print, and you will not really be able to make out anything. The best
way would be to try to detect the string `CONTENT` and work from there.
#### 4\.7\.3\.2 Detecting, getting the position and locating strings
When confronted with an atomic vector of strings, you might want to know inside which elements you
can find certain strings. For example, to know which elements of `winchester` contain the string
`CONTENT`, use `str_detect()`:
```
winchester %>%
str_detect("CONTENT")
```
```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [25] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [37] FALSE FALSE FALSE FALSE FALSE FALSE TRUE
```
This returns a logical atomic vector of the same length as `winchester`: the result equals `FALSE`
where the string `CONTENT` is nowhere to be found, and `TRUE` where it is. Here it is easy to
see that the last element contains the string `CONTENT`. But what if instead of having 43 elements,
the vector had 24192 elements? And hundreds would contain the string `CONTENT`? It would be easier
to instead have the indices of the vector where one can find the word `CONTENT`. This is possible
with `str_which()`:
```
winchester %>%
str_which("CONTENT")
```
```
## [1] 43
```
Here, the result is 43, meaning that the 43rd element of `winchester` contains the string `CONTENT`
somewhere. If we need more precision, we can use `str_locate()` and `str_locate_all()`. To explain
how both these functions work, let’s create a very small example:
```
ancient_philosophers <- c("aristotle", "plato", "epictetus", "seneca the younger", "epicurus", "marcus aurelius")
```
Now suppose I am interested in philosophers whose name ends in `us`. Let us use `str_locate()` first:
```
ancient_philosophers %>%
str_locate("us")
```
```
## start end
## [1,] NA NA
## [2,] NA NA
## [3,] 8 9
## [4,] NA NA
## [5,] 7 8
## [6,] 5 6
```
You can interpret the result as follows: each row corresponds to one element of the vector, so the
3rd, 5th and 6th philosophers have `us` somewhere in their name.
The result also has two columns, `start` and `end`, which give the position of the string. So the
string `us` can be found starting at position 8 of the 3rd element of the vector, and ending at position
9\. The same goes for the other philosophers. However, consider Marcus Aurelius. He has two names, both
ending with `us`, yet `str_locate()` only shows the position of the `us` in `Marcus`.
To get both `us` strings, you need to use `str_locate_all()`:
```
ancient_philosophers %>%
str_locate_all("us")
```
```
## [[1]]
## start end
##
## [[2]]
## start end
##
## [[3]]
## start end
## [1,] 8 9
##
## [[4]]
## start end
##
## [[5]]
## start end
## [1,] 7 8
##
## [[6]]
## start end
## [1,] 5 6
## [2,] 14 15
```
Now we get the position of the two `us` in Marcus Aurelius. Doing this on the `winchester` vector
would give us the position of the `CONTENT` string, but this is not really important right now. What
matters is that you know how `str_locate()` and `str_locate_all()` work.
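As an aside, the `start` and `end` positions returned by `str_locate()` pair naturally with `str_sub()`, which extracts the substring between two positions; a quick sketch:
```
# extract each located match by its position; rows with NA positions give NA
pos <- str_locate(ancient_philosophers, "us")
str_sub(ancient_philosophers, pos[, "start"], pos[, "end"])
```
```
## [1] NA   NA   "us" NA   "us" "us"
```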
So now that we know what interests us in the 43rd element of `winchester`, let’s take a closer
look at it:
```
winchester[43]
```
As you can see, it’s a mess:
```
<TextLine HEIGHT=\"126.0\" WIDTH=\"1731.0\" HPOS=\"17160.0\" VPOS=\"21252.0\"><String HEIGHT=\"114.0\" WIDTH=\"354.0\" HPOS=\"17160.0\" VPOS=\"21264.0\" CONTENT=\"0tV\" WC=\"0.8095238\"/><SP WIDTH=\"131.0\" HPOS=\"17514.0\" VPOS=\"21264.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"111.0\" WIDTH=\"474.0\" HPOS=\"17646.0\" VPOS=\"21258.0\" CONTENT=\"BATES\" WC=\"1.0\"/><SP WIDTH=\"140.0\" HPOS=\"18120.0\" VPOS=\"21258.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"114.0\" WIDTH=\"630.0\" HPOS=\"18261.0\" VPOS=\"21252.0\" CONTENT=\"President\" WC=\"1.0\"><ALTERNATIVE>Prcideht</ALTERNATIVE><ALTERNATIVE>Pride</ALTERNATIVE></String></TextLine><TextLine HEIGHT=\"153.0\" WIDTH=\"1689.0\" HPOS=\"17145.0\" VPOS=\"21417.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"105.0\" WIDTH=\"258.0\" HPOS=\"17145.0\" VPOS=\"21439.0\" CONTENT=\"WM\" WC=\"0.82539684\"><TextLine HEIGHT=\"120.0\" WIDTH=\"2211.0\" HPOS=\"16788.0\" VPOS=\"21870.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"102.0\" HPOS=\"16788.0\" VPOS=\"21894.0\" CONTENT=\"It\" WC=\"1.0\"/><SP WIDTH=\"72.0\" HPOS=\"16890.0\" VPOS=\"21894.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"93.0\" HPOS=\"16962.0\" VPOS=\"21885.0\" CONTENT=\"is\" WC=\"1.0\"/><SP WIDTH=\"80.0\" HPOS=\"17055.0\" VPOS=\"21885.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"102.0\" WIDTH=\"417.0\" HPOS=\"17136.0\" VPOS=\"21879.0\" CONTENT=\"seldom\" WC=\"1.0\"/><SP WIDTH=\"80.0\" HPOS=\"17553.0\" VPOS=\"21879.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"267.0\" HPOS=\"17634.0\" VPOS=\"21873.0\" CONTENT=\"hard\" WC=\"1.0\"/><SP WIDTH=\"81.0\" HPOS=\"17901.0\" VPOS=\"21873.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"87.0\" WIDTH=\"111.0\" HPOS=\"17982.0\" VPOS=\"21879.0\" CONTENT=\"to\" WC=\"1.0\"/><SP WIDTH=\"81.0\" HPOS=\"18093.0\" VPOS=\"21879.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"96.0\" WIDTH=\"219.0\" HPOS=\"18174.0\" VPOS=\"21870.0\" CONTENT=\"find\" WC=\"1.0\"/><SP WIDTH=\"77.0\" HPOS=\"18393.0\" VPOS=\"21870.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"69.0\" WIDTH=\"66.0\" HPOS=\"18471.0\" VPOS=\"21894.0\" CONTENT=\"a\" WC=\"1.0\"/><SP WIDTH=\"77.0\" HPOS=\"18537.0\" VPOS=\"21894.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"78.0\" WIDTH=\"384.0\" HPOS=\"18615.0\" VPOS=\"21888.0\" CONTENT=\"succes\" WC=\"0.82539684\"><ALTERNATIVE>success</ALTERNATIVE></String></TextLine><TextLine HEIGHT=\"126.0\" WIDTH=\"2316.0\" HPOS=\"16662.0\" VPOS=\"22008.0\"><String STYLEREFS=\"ID7\" HEIGHT=\"75.0\" WIDTH=\"183.0\" HPOS=\"16662.0\" VPOS=\"22059.0\" CONTENT=\"sor\" WC=\"1.0\"><ALTERNATIVE>soar</ALTERNATIVE></String><SP WIDTH=\"72.0\" HPOS=\"16845.0\" VPOS=\"22059.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"90.0\" WIDTH=\"168.0\" HPOS=\"16917.0\" VPOS=\"22035.0\" CONTENT=\"for\" WC=\"1.0\"/><SP WIDTH=\"72.0\" HPOS=\"17085.0\" VPOS=\"22035.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"69.0\" WIDTH=\"267.0\" HPOS=\"17157.0\" VPOS=\"22050.0\" CONTENT=\"even\" WC=\"1.0\"><ALTERNATIVE>cen</ALTERNATIVE><ALTERNATIVE>cent</ALTERNATIVE></String><SP WIDTH=\"77.0\" HPOS=\"17434.0\" VPOS=\"22050.0\"/><String STYLEREFS=\"ID7\" HEIGHT=\"66.0\" WIDTH=\"63.0\" HPOS=\"17502.0\" VPOS=\"22044.0\"
```
The file was imported without any newlines. So we need to insert them ourselves, by splitting the
string in a clever way.
#### 4\.7\.3\.3 Splitting strings
There are two functions included in `{stringr}` to split strings, `str_split()` and `str_split_fixed()`.
Let’s go back to our ancient philosophers. Two of them, Seneca the Younger and Marcus Aurelius,
have something in common besides both being Roman Stoic philosophers: their names are composed of several
words. If we want to split their names at the space character, we can use `str_split()` like this:
```
ancient_philosophers %>%
str_split(" ")
```
```
## [[1]]
## [1] "aristotle"
##
## [[2]]
## [1] "plato"
##
## [[3]]
## [1] "epictetus"
##
## [[4]]
## [1] "seneca" "the" "younger"
##
## [[5]]
## [1] "epicurus"
##
## [[6]]
## [1] "marcus" "aurelius"
```
`str_split()` also has a `simplify = TRUE` option:
```
ancient_philosophers %>%
str_split(" ", simplify = TRUE)
```
```
## [,1] [,2] [,3]
## [1,] "aristotle" "" ""
## [2,] "plato" "" ""
## [3,] "epictetus" "" ""
## [4,] "seneca" "the" "younger"
## [5,] "epicurus" "" ""
## [6,] "marcus" "aurelius" ""
```
This time, the returned object is a matrix.
What about `str_split_fixed()`? The difference is that here you can specify the number of pieces
to return. For example, you could consider the name “Aurelius” to be the middle name of Marcus Aurelius,
and “the younger” to be the middle name of Seneca the Younger. This means that you would want
to split the name only at the first space character, and not at all of them. This is easily achieved
with `str_split_fixed()`:
```
ancient_philosophers %>%
str_split_fixed(" ", 2)
```
```
## [,1] [,2]
## [1,] "aristotle" ""
## [2,] "plato" ""
## [3,] "epictetus" ""
## [4,] "seneca" "the younger"
## [5,] "epicurus" ""
## [6,] "marcus" "aurelius"
```
This gives the expected result.
So how does this help in our case? Well, if you look at the ALTO file shown at the beginning
of this section, you will notice that every line ends with the “\>” character. So let’s split at
that character!
```
winchester_text <- winchester[43] %>%
str_split(">")
```
Let’s take a closer look at `winchester_text`:
```
str(winchester_text)
```
```
## List of 1
## $ : chr [1:19706] "</processingStepSettings" "<processingSoftware" "<softwareCreator" "iArchives</softwareCreator" ...
```
So this is a list of length one, and the first, and only, element of that list is an atomic vector
with 19706 elements. Since this is a list of only one element, we can simplify it by saving the
atomic vector in a variable:
```
winchester_text <- winchester_text[[1]]
```
Let’s now look at some lines:
```
winchester_text[1232:1245]
```
```
## [1] "<SP WIDTH=\"66.0\" HPOS=\"5763.0\" VPOS=\"9696.0\"/"
## [2] "<String STYLEREFS=\"ID7\" HEIGHT=\"108.0\" WIDTH=\"612.0\" HPOS=\"5829.0\" VPOS=\"9693.0\" CONTENT=\"Louisville\" WC=\"1.0\""
## [3] "<ALTERNATIVE"
## [4] "Loniile</ALTERNATIVE"
## [5] "<ALTERNATIVE"
## [6] "Lenities</ALTERNATIVE"
## [7] "</String"
## [8] "</TextLine"
## [9] "<TextLine HEIGHT=\"150.0\" WIDTH=\"2520.0\" HPOS=\"4032.0\" VPOS=\"9849.0\""
## [10] "<String STYLEREFS=\"ID7\" HEIGHT=\"108.0\" WIDTH=\"510.0\" HPOS=\"4032.0\" VPOS=\"9861.0\" CONTENT=\"Tobacco\" WC=\"1.0\"/"
## [11] "<SP WIDTH=\"113.0\" HPOS=\"4542.0\" VPOS=\"9861.0\"/"
## [12] "<String STYLEREFS=\"ID7\" HEIGHT=\"105.0\" WIDTH=\"696.0\" HPOS=\"4656.0\" VPOS=\"9861.0\" CONTENT=\"Warehouse\" WC=\"1.0\""
## [13] "<ALTERNATIVE"
## [14] "WHrchons</ALTERNATIVE"
```
This now looks easier to handle. We can narrow it down to only those lines that contain the string
we are interested in, “CONTENT”. First, let’s get the indices:
```
content_winchester_index <- winchester_text %>%
str_which("CONTENT")
```
How many lines contain the string “CONTENT”?
```
length(content_winchester_index)
```
```
## [1] 4462
```
As you can see, this reduces the amount of data we have to work with. Let us save this in a new
variable:
```
content_winchester <- winchester_text[content_winchester_index]
```
#### 4\.7\.3\.4 Matching strings
Matching strings is useful, but only in combination with regular expressions. As stated at the
beginning of this section, we are going to learn about regular expressions in Chapter 10, but in
order to make this section useful, we are going to learn the easiest, but perhaps the most useful
regular expression: `.*`.
Let’s go back to our ancient philosophers, use `str_match()`, and see what happens. Let’s match
the “us” string:
```
ancient_philosophers %>%
str_match("us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "us"
## [4,] NA
## [5,] "us"
## [6,] "us"
```
Not very useful, but what about the regular expression `.*`? How could it help?
```
ancient_philosophers %>%
str_match(".*us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "epictetus"
## [4,] NA
## [5,] "epicurus"
## [6,] "marcus aurelius"
```
That’s already very interesting! So how does `.*` work? To understand, let’s first start by using
`.` alone:
```
ancient_philosophers %>%
str_match(".us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "tus"
## [4,] NA
## [5,] "rus"
## [6,] "cus"
```
This matched “us” as well as whatever single symbol comes just before the “u”. What if we use two `.` instead?
```
ancient_philosophers %>%
str_match("..us")
```
```
## [,1]
## [1,] NA
## [2,] NA
## [3,] "etus"
## [4,] NA
## [5,] "urus"
## [6,] "rcus"
```
This time, we get the two symbols that immediately precede “us”. Instead of continuing like this,
we now use `*`, which matches zero or more occurrences of the preceding `.`. So by combining `*` and `.`, we can match
any symbol repeatedly, until there is nothing more to match. Note that there is also `+`, which works
similarly to `*`, but matches one or more symbols.
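A quick sketch on two made\-up strings makes the difference between `*` and `+` visible:
```
# ".*us" accepts zero characters before "us"; ".+us" requires at least one
str_match(c("us", "aus"), ".*us")
```
```
##      [,1]
## [1,] "us"
## [2,] "aus"
```
```
str_match(c("us", "aus"), ".+us")
```
```
##      [,1]
## [1,] NA
## [2,] "aus"
```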
There is also a `str_match_all()`:
```
ancient_philosophers %>%
str_match_all(".*us")
```
```
## [[1]]
## [,1]
##
## [[2]]
## [,1]
##
## [[3]]
## [,1]
## [1,] "epictetus"
##
## [[4]]
## [,1]
##
## [[5]]
## [,1]
## [1,] "epicurus"
##
## [[6]]
## [,1]
## [1,] "marcus aurelius"
```
In this particular case it does not change the end result, but keep it in mind for cases like this one:
```
c("haha", "huhu") %>%
str_match("ha")
```
```
## [,1]
## [1,] "ha"
## [2,] NA
```
and:
```
c("haha", "huhu") %>%
str_match_all("ha")
```
```
## [[1]]
## [,1]
## [1,] "ha"
## [2,] "ha"
##
## [[2]]
## [,1]
```
What if we want to match names containing the letter “t”? Easy:
```
ancient_philosophers %>%
str_match(".*t.*")
```
```
## [,1]
## [1,] "aristotle"
## [2,] "plato"
## [3,] "epictetus"
## [4,] "seneca the younger"
## [5,] NA
## [6,] NA
```
So how does this help us with our historical newspaper? Let’s try to get the strings that come
after “CONTENT”:
```
winchester_content <- winchester_text %>%
str_match("CONTENT.*")
```
Let’s use our faithful `str()` function to take a look:
```
winchester_content %>%
str
```
```
## chr [1:19706, 1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA ...
```
Hmm, there are a lot of `NA` values! This is because a lot of the lines from the file did not contain the
string “CONTENT”, so there is no match possible. Let us remove all these `NA`s. Because the
result is a matrix, we cannot use the `filter()` function from `{dplyr}`. So we need to convert it
to a tibble first:
```
winchester_content <- winchester_content %>%
as.tibble() %>%
filter(!is.na(V1))
```
```
## Warning: `as.tibble()` was deprecated in tibble 2.0.0.
## Please use `as_tibble()` instead.
## The signature and semantics have changed, see `?as_tibble`.
```
```
## Warning: The `x` argument of `as_tibble.matrix()` must have unique column names if `.name_repair` is omitted as of tibble 2.0.0.
## Using compatibility `.name_repair`.
```
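As the warnings indicate, `as.tibble()` has been deprecated since tibble 2.0.0. As an aside, here is a sketch of how this step could be written today, assuming `winchester_content` is still the matrix returned by `str_match()`; naming the matrix column before converting avoids both warnings:
```
# name the single matrix column explicitly, then convert with as_tibble()
colnames(winchester_content) <- "V1"
winchester_content <- winchester_content %>%
  as_tibble() %>%
  filter(!is.na(V1))
```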
Because matrix columns do not have names, when a matrix gets converted into a tibble, the first column
gets automatically called `V1`. This is why I filter on this column. Let’s take a look at the data:
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## V1
## <chr>
## 1 "CONTENT=\"J\" WC=\"0.8095238\"/"
## 2 "CONTENT=\"a\" WC=\"0.8095238\"/"
## 3 "CONTENT=\"Ira\" WC=\"0.95238096\"/"
## 4 "CONTENT=\"mj\" WC=\"0.8095238\"/"
## 5 "CONTENT=\"iI\" WC=\"0.8095238\"/"
## 6 "CONTENT=\"tE1r\" WC=\"0.8095238\"/"
```
#### 4\.7\.3\.5 Searching and replacing strings
We are getting close to the final result. We still need to do some cleaning, however. Since our data
is inside a nice tibble, we might as well stick with it. So let’s first rename the column and
change all the strings to lowercase:
```
winchester_content <- winchester_content %>%
mutate(content = tolower(V1)) %>%
select(-V1)
```
Let’s take a look at the result:
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "content=\"j\" wc=\"0.8095238\"/"
## 2 "content=\"a\" wc=\"0.8095238\"/"
## 3 "content=\"ira\" wc=\"0.95238096\"/"
## 4 "content=\"mj\" wc=\"0.8095238\"/"
## 5 "content=\"ii\" wc=\"0.8095238\"/"
## 6 "content=\"te1r\" wc=\"0.8095238\"/"
```
The second part of the string, “wc\=….” is not really interesting. Let’s search and replace this
with an empty string, using `str_replace()`:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "wc.*", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "content=\"j\" "
## 2 "content=\"a\" "
## 3 "content=\"ira\" "
## 4 "content=\"mj\" "
## 5 "content=\"ii\" "
## 6 "content=\"te1r\" "
```
We needed the regular expression from before so that “wc” and every character that follows would be replaced.
The same approach can be used to remove “content\=”:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "content=", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "\"j\" "
## 2 "\"a\" "
## 3 "\"ira\" "
## 4 "\"mj\" "
## 5 "\"ii\" "
## 6 "\"te1r\" "
```
We are almost done, but some cleaning is still necessary:
#### 4\.7\.3\.6 Extracting or removing strings
Now, because I know the ALTO spec, I know how to find words that are split between two lines:
```
winchester_content %>%
filter(str_detect(content, "hyppart"))
```
```
## # A tibble: 64 × 1
## content
## <chr>
## 1 "\"aver\" subs_type=\"hyppart1\" subs_content=\"average\" "
## 2 "\"age\" subs_type=\"hyppart2\" subs_content=\"average\" "
## 3 "\"considera\" subs_type=\"hyppart1\" subs_content=\"consideration\" "
## 4 "\"tion\" subs_type=\"hyppart2\" subs_content=\"consideration\" "
## 5 "\"re\" subs_type=\"hyppart1\" subs_content=\"resigned\" "
## 6 "\"signed\" subs_type=\"hyppart2\" subs_content=\"resigned\" "
## 7 "\"install\" subs_type=\"hyppart1\" subs_content=\"installed\" "
## 8 "\"ed\" subs_type=\"hyppart2\" subs_content=\"installed\" "
## 9 "\"be\" subs_type=\"hyppart1\" subs_content=\"before\" "
## 10 "\"fore\" subs_type=\"hyppart2\" subs_content=\"before\" "
## # … with 54 more rows
```
For instance, the word “average” was split over two lines, the first part of the word, “aver” on the
first line, and the second part of the word, “age”, on the second line. We want to keep what comes
after “subs\_content”. Let’s extract the word “average” using `str_extract()`. However, because only
some words were split between two lines, we first need to detect where the string “hyppart1” is
located, and only then can we extract what comes after “subs\_content”. Thus, we need to combine
`str_detect()` to first detect the string, and then `str_extract()` to extract what comes after
“subs\_content”:
```
winchester_content <- winchester_content %>%
mutate(content = if_else(str_detect(content, "hyppart1"),
str_extract_all(content, "content=.*", simplify = TRUE),
content))
```
Let’s take a look at the result:
```
winchester_content %>%
filter(str_detect(content, "content"))
```
```
## # A tibble: 64 × 1
## content
## <chr>
## 1 "content=\"average\" "
## 2 "\"age\" subs_type=\"hyppart2\" subs_content=\"average\" "
## 3 "content=\"consideration\" "
## 4 "\"tion\" subs_type=\"hyppart2\" subs_content=\"consideration\" "
## 5 "content=\"resigned\" "
## 6 "\"signed\" subs_type=\"hyppart2\" subs_content=\"resigned\" "
## 7 "content=\"installed\" "
## 8 "\"ed\" subs_type=\"hyppart2\" subs_content=\"installed\" "
## 9 "content=\"before\" "
## 10 "\"fore\" subs_type=\"hyppart2\" subs_content=\"before\" "
## # … with 54 more rows
```
We still need to get rid of the string “content\=” and then of all the strings that contain “hyppart2”,
which are not needed now:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace(content, "content=", "")) %>%
mutate(content = if_else(str_detect(content, "hyppart2"), NA_character_, content))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "\"j\" "
## 2 "\"a\" "
## 3 "\"ira\" "
## 4 "\"mj\" "
## 5 "\"ii\" "
## 6 "\"te1r\" "
```
Almost done! We only need to remove the `"` characters:
```
winchester_content <- winchester_content %>%
mutate(content = str_replace_all(content, "\"", ""))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 "j "
## 2 "a "
## 3 "ira "
## 4 "mj "
## 5 "ii "
## 6 "te1r "
```
Let’s remove the leading and trailing space characters with `str_trim()`:
```
winchester_content <- winchester_content %>%
mutate(content = str_trim(content))
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 j
## 2 a
## 3 ira
## 4 mj
## 5 ii
## 6 te1r
```
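Related to `str_trim()`: `str_squish()` also collapses repeated whitespace inside a string, which can be handy with OCR output; a quick sketch:
```
str_squish("  the   Winchester   news  ")
```
```
## [1] "the Winchester news"
```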
To finish off this section, let’s remove stop words (words that do not add any meaning to a sentence,
such as “as”, “and”…) and words that are composed of 3 characters or fewer. You can find a dataset
with stopwords inside the `{stopwords}` package:
```
library(stopwords)
data(data_stopwords_stopwordsiso)
eng_stopwords <- tibble("content" = data_stopwords_stopwordsiso$en)
winchester_content <- winchester_content %>%
anti_join(eng_stopwords) %>%
filter(nchar(content) > 3)
```
```
## Joining, by = "content"
```
```
head(winchester_content)
```
```
## # A tibble: 6 × 1
## content
## <chr>
## 1 te1r
## 2 jilas
## 3 edition
## 4 winchester
## 5 news
## 6 injuries
```
That’s it for this section! You now know how to work with strings, but in Chapter 10 we are going
one step further by learning about regular expressions, which offer much more power.
### 4\.7\.4 Tidy data frames with `{tibble}`
We have already seen and used several functions from the `{tibble}` package. Let’s now go through
some more useful functions.
#### 4\.7\.4\.1 Creating tibbles
`tribble()` makes it easy to create a tibble row by row, manually:
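A minimal sketch, with made\-up car data mirroring the named\-list example below:
```
tribble(
  ~combustion, ~doors, # made-up car data
  "oil",       3,
  "diesel",    5,
  "oil",       5,
  "electric",  5
)
```
```
## # A tibble: 4 × 2
##   combustion doors
##   <chr>      <dbl>
## 1 oil            3
## 2 diesel         5
## 3 oil            5
## 4 electric       5
```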
It is also possible to create a tibble from a named list:
```
as_tibble(list("combustion" = c("oil", "diesel", "oil", "electric"),
"doors" = c(3, 5, 5, 5)))
```
```
## # A tibble: 4 × 2
## combustion doors
## <chr> <dbl>
## 1 oil 3
## 2 diesel 5
## 3 oil 5
## 4 electric 5
```
`enframe()` converts a named list into a two\-column tibble, with the list’s names in `name` and its elements nested in `value`:
```
enframe(list("combustion" = c(1,2), "doors" = c(1,2,4), "cylinders" = c(1,8,9,10)))
```
```
## # A tibble: 3 × 2
## name value
## <chr> <list>
## 1 combustion <dbl [2]>
## 2 doors <dbl [3]>
## 3 cylinders <dbl [4]>
```
4\.8 List\-columns
------------------
To learn about list\-columns, let’s first focus on a single character of the `starwars` dataset:
```
data(starwars)
```
```
starwars %>%
filter(name == "Luke Skywalker") %>%
glimpse()
```
```
## Rows: 1
## Columns: 14
## $ name <chr> "Luke Skywalker"
## $ height <int> 172
## $ mass <dbl> 77
## $ hair_color <chr> "blond"
## $ skin_color <chr> "fair"
## $ eye_color <chr> "blue"
## $ birth_year <dbl> 19
## $ sex <chr> "male"
## $ gender <chr> "masculine"
## $ homeworld <chr> "Tatooine"
## $ species <chr> "Human"
## $ films <list> <"The Empire Strikes Back", "Revenge of the Sith", "Return …
## $ vehicles <list> <"Snowspeeder", "Imperial Speeder Bike">
## $ starships <list> <"X-wing", "Imperial shuttle">
```
We see that the columns `films`, `vehicles` and `starships` (at the bottom) are all lists, and in
the case of `films`, it lists all the films where Luke Skywalker has appeared. What if you want to
take a closer look at films where Luke Skywalker appeared?
```
starwars %>%
filter(name == "Luke Skywalker") %>%
pull(films)
```
```
## [[1]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
## [5] "The Force Awakens"
```
`pull()` is a `{dplyr}` function that extracts (pulls) the column you’re interested in. It is quite
useful when you want to inspect a column. Instead of looking only at Luke Skywalker’s films,
let’s pull the complete `films` column instead:
```
starwars %>%
head() %>% # let's just look at the first six rows
pull(films)
```
```
## [[1]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
## [5] "The Force Awakens"
##
## [[2]]
## [1] "The Empire Strikes Back" "Attack of the Clones"
## [3] "The Phantom Menace" "Revenge of the Sith"
## [5] "Return of the Jedi" "A New Hope"
##
## [[3]]
## [1] "The Empire Strikes Back" "Attack of the Clones"
## [3] "The Phantom Menace" "Revenge of the Sith"
## [5] "Return of the Jedi" "A New Hope"
## [7] "The Force Awakens"
##
## [[4]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
##
## [[5]]
## [1] "The Empire Strikes Back" "Revenge of the Sith"
## [3] "Return of the Jedi" "A New Hope"
## [5] "The Force Awakens"
##
## [[6]]
## [1] "Attack of the Clones" "Revenge of the Sith" "A New Hope"
```
Let’s stop here a moment. As you see, the `films` column contains several items in it. How is it
possible that a single *cell* contains more than one film? This is because what is actually
contained in the cell is not several separate character values, but a single atomic vector that
happens to have several elements. It is still only one vector. Zooming in on the data frame helps
to understand: the first two columns, `name` and `sex`, are what you’re used to seeing, just one
element defining the character’s `name` and `sex` respectively. The `films` column also contains
only one element for each character; it just so happens that this element is a complete vector of
characters. Because what is inside the *cells* of a list\-column can be very different things (as
lists can contain anything), you have to think a bit about it in order to extract insights from
such columns. List\-columns may seem arcane, but they are extremely powerful once you master them.
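To make this more concrete, here is a tiny sketch (with made\-up data) of a hand\-built list\-column:
```
tibble(
  name = c("a", "b"),
  stuff = list(1:3, letters[1:5]) # each cell holds a whole vector
)
```
```
## # A tibble: 2 × 2
##   name  stuff
##   <chr> <list>
## 1 a     <int [3]>
## 2 b     <chr [5]>
```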
As an example, suppose we want to create a numerical variable which counts the number of movies
in which the characters have appeared. For this we need to compute the length of the list, or count
the number of elements this list has. Let’s try with `length()`, a base R function:
```
starwars %>%
filter(name == "Luke Skywalker") %>%
pull(films) %>%
length()
```
```
## [1] 1
```
This might be surprising, but remember that a list with only one element has a length of 1:
```
length(
  list(words) # this creates a list with one element; that element is a character vector of 980 words
)
```
```
## [1] 1
```
Even though `words` contains a vector of 980 words, if we put this very long vector inside the
first element of a list, `length(list(words))` will compute the length of the list, not of the
vector inside it. Let’s see what happens if we create a more complex list:
```
numbers <- seq(1, 5)
length(
  list(words, # the first element is the character vector of 980 words
       numbers) # the second element contains the numbers 1 through 5
)
```
```
## [1] 2
```
`list(words, numbers)` is now a list of two elements, `words` and `numbers`. If we want to compute
the length of `words` and `numbers`, we need to learn about another powerful concept called
*higher\-order functions*. We are going to learn about this in greater detail in Chapter 8\. For now,
let’s use the fact that our list `films` is contained inside a data frame, and use a convenience
function included in `{dplyr}` to handle situations like this:
```
starwars <- starwars %>%
rowwise() %>% # <- Apply the next steps for each row individually
mutate(n_films = length(films))
```
`dplyr::rowwise()` is useful when working with list\-columns because whatever instructions follow
get run once per row, so a function such as `length()` sees the single vector contained in each row’s cell.
Let’s take a look at the characters and the number of films they have appeared in:
```
starwars %>%
select(name, films, n_films)
```
```
## # A tibble: 87 × 3
## # Rowwise:
## name films n_films
## <chr> <list> <int>
## 1 Luke Skywalker <chr [5]> 5
## 2 C-3PO <chr [6]> 6
## 3 R2-D2 <chr [7]> 7
## 4 Darth Vader <chr [4]> 4
## 5 Leia Organa <chr [5]> 5
## 6 Owen Lars <chr [3]> 3
## 7 Beru Whitesun lars <chr [3]> 3
## 8 R5-D4 <chr [1]> 1
## 9 Biggs Darklighter <chr [1]> 1
## 10 Obi-Wan Kenobi <chr [6]> 6
## # … with 77 more rows
```
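As a preview of the higher\-order functions we will meet in Chapter 8, the same column can be computed without `rowwise()` by mapping `length()` over the list\-column with `purrr::map_int()`. This is just a sketch; note the `ungroup()`, which drops the rowwise grouping we added above:
```
starwars %>%
  ungroup() %>% # drop the rowwise grouping from before
  mutate(n_films = map_int(films, length)) %>%
  select(name, n_films)
```
Both approaches produce exactly the same `n_films` column.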
Now we can, for example, create a factor variable that groups characters by asking whether they appeared only in
1 movie, or more:
```
starwars <- starwars %>%
mutate(more_1 = case_when(n_films == 1 ~ "Exactly one movie",
                                     n_films >= 2 ~ "More than 1 movie"))
```
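A quick way to check the result is to tabulate the new variable with `dplyr::count()` (output not shown; again, `ungroup()` is needed because `starwars` is still rowwise at this point):
```
starwars %>%
  ungroup() %>%
  count(more_1)
```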
You can also create list\-columns with your own datasets, by using `tidyr::nest()`. Remember the
fake `survey_data` I created to illustrate `pivot_longer()` and `pivot_wider()`? Let’s go back to that dataset
again:
```
survey_data <- tribble(
~id, ~variable, ~value,
1, "var1", 1,
1, "var2", 0.2,
NA, "var3", 0.3,
2, "var1", 1.4,
2, "var2", 1.9,
2, "var3", 4.1,
3, "var1", 0.1,
3, "var2", 2.8,
3, "var3", 8.9,
4, "var1", 1.7,
NA, "var2", 1.9,
4, "var3", 7.6
)
print(survey_data)
```
```
## # A tibble: 12 × 3
## id variable value
## <dbl> <chr> <dbl>
## 1 1 var1 1
## 2 1 var2 0.2
## 3 NA var3 0.3
## 4 2 var1 1.4
## 5 2 var2 1.9
## 6 2 var3 4.1
## 7 3 var1 0.1
## 8 3 var2 2.8
## 9 3 var3 8.9
## 10 4 var1 1.7
## 11 NA var2 1.9
## 12 4 var3 7.6
```
```
nested_data <- survey_data %>%
group_by(id) %>%
nest()
glimpse(nested_data)
```
```
## Rows: 5
## Columns: 2
## Groups: id [5]
## $ id <dbl> 1, NA, 2, 3, 4
## $ data <list> [<tbl_df[2 x 2]>], [<tbl_df[2 x 2]>], [<tbl_df[3 x 2]>], [<tbl_df…
```
This creates a new tibble with columns `id` and `data`. `data` is a list\-column that contains
tibbles; each tibble holds the `variable` and `value` pairs for one individual:
```
nested_data %>%
filter(id == "1") %>%
pull(data)
```
```
## [[1]]
## # A tibble: 2 × 2
## variable value
## <chr> <dbl>
## 1 var1 1
## 2 var2 0.2
```
As you can see, for individual 1, the column `data` contains a 2x2 tibble with columns `variable` and
`value`. Because `group_by()` followed by `nest()` is so useful, there is a wrapper around these two functions
called `group_nest()`:
```
survey_data %>%
group_nest(id)
```
```
## # A tibble: 5 × 2
## id data
## <dbl> <list<tibble[,2]>>
## 1 1 [2 × 2]
## 2 2 [3 × 2]
## 3 3 [3 × 2]
## 4 4 [2 × 2]
## 5 NA [2 × 2]
```
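Nesting can also be undone: `tidyr::unnest()` expands the list\-column back into regular rows, recovering a data frame with the same columns as the original `survey_data`:
```
nested_data %>%
  unnest(data)
```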
You might be wondering why this is useful, because this seems to introduce an unnecessary
layer of complexity. The usefulness of list\-columns will become apparent in the next chapters,
where we are going to learn how to repeat actions over, say, individuals. So if you’ve reached
the end of this section and still didn’t really grok list\-columns, go take some fresh air and
come back to this section again later on.
4\.9 Going beyond descriptive statistics and data manipulation
--------------------------------------------------------------
The `{tidyverse}` collection of packages can do much more than simply data manipulation and
descriptive statistics. You can use the principles we have covered and the functions you now know
to do much more. For instance, you can use a few `{tidyverse}` functions to do Monte Carlo simulations,
for example to estimate \\(\\pi\\).
Draw the unit circle inside the unit square; the ratio of the area of the circle to the area of the
square will be \\(\\pi/4\\). Now shoot K arrows at the square; roughly \\(K\*\\pi/4\\) of them should fall
inside the circle. So if you shoot N arrows at the square, and M fall inside the circle, you have
the approximate relationship \\(M \= N\*\\pi/4\\). You can thus estimate \\(\\pi\\) like so: \\(\\pi \= 4\*M/N\\).
The more arrows N you throw at the square, the better the approximation of \\(\\pi\\) you’ll have. Let’s
try to do this with a tidy Monte Carlo simulation. First, let’s randomly pick some points inside
the unit square:
```
library(tidyverse)
n <- 5000
set.seed(2019)
points <- tibble("x" = runif(n), "y" = runif(n))
```
Now, to know if a point is inside the unit circle, we need to check whether \\(x^2 \+ y^2 \< 1\\). Let’s
add a new column to the `points` tibble, called `inside` equal to 1 if the point is inside the
unit circle and 0 if not:
```
points <- points %>%
mutate(inside = map2_dbl(.x = x, .y = y, ~ifelse(.x**2 + .y**2 < 1, 1, 0))) %>%
rowid_to_column("N")
```
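As an aside, because `x**2 + y**2 < 1` is already a vectorised comparison, the same column can be created without `map2_dbl()`. This sketch adds it under the hypothetical name `inside_vec` so you can compare the two:
```
points %>%
  mutate(inside_vec = as.numeric(x**2 + y**2 < 1)) # identical to the inside column
```
The `map2_dbl()` version is worth knowing anyway, because it also works with functions that are not vectorised.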
Let’s take a look at `points`:
```
points
```
```
## # A tibble: 5,000 × 4
## N x y inside
## <int> <dbl> <dbl> <dbl>
## 1 1 0.770 0.984 0
## 2 2 0.713 0.0107 1
## 3 3 0.303 0.133 1
## 4 4 0.618 0.0378 1
## 5 5 0.0505 0.677 1
## 6 6 0.0432 0.0846 1
## 7 7 0.820 0.727 0
## 8 8 0.00961 0.0758 1
## 9 9 0.102 0.373 1
## 10 10 0.609 0.676 1
## # … with 4,990 more rows
```
Now, we can compute the estimate
of \\(\\pi\\) at each row by computing the cumulative sum of the 1’s in the `inside` column and dividing
that by the current value of the `N` column:
```
points <- points %>%
mutate(estimate = 4*cumsum(inside)/N)
```
`cumsum(inside)` is the `M` from the formula. Now, we can finish by plotting the result:
```
ggplot(points) +
geom_line(aes(y = estimate, x = N)) +
geom_hline(yintercept = pi)
```
In the next chapter, we are going to learn all about `{ggplot2}`, the package I used in the lines
above to create this plot.
As the number of tries grows, the estimation of \\(\\pi\\) gets better.
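If you only want the final estimate rather than the whole path, you can pull the last row (the exact value you get depends on the seed):
```
points %>%
  slice_tail(n = 1) %>%
  pull(estimate)
```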
Using a data frame as a structure to hold our simulated points and the results makes it very easy
to avoid loops, and thus write code that is more concise and easier to follow.
If you studied a quantitative field in university, you might have done a similar exercise at the
time, very likely by defining a matrix to hold your points, and an empty vector to hold whether a
particular point was inside the unit circle. Then you wrote a loop that computed whether each
point was inside the unit circle, saved this result in the previously defined vector, and then
computed the estimate of \\(\\pi\\). Again, I take this opportunity here to stress that there is nothing
wrong with this approach per se, but R is better suited for a workflow where lists or data frames
are the central objects and where the analyst operates over them with functional programming techniques.
4\.10 Exercises
---------------
### Exercise 1
* Combine `mutate()` with `across()` to exponentiate every column of type `double` of the `gasoline` dataset.
To obtain the `gasoline` dataset, run the following lines:
```
data(Gasoline, package = "plm")
gasoline <- as_tibble(Gasoline)
gasoline <- gasoline %>%
mutate(country = tolower(country))
```
* Exponentiate columns starting with the character `"l"` of the `gasoline` dataset.
* Convert all columns’ classes into the character class.
### Exercise 2
Load the `LaborSupply` dataset from the `{Ecdat}` package and answer the following questions:
* Compute the average annual hours worked by year (plus standard deviation)
* What age group worked the most hours in the year 1982?
* Create a variable, `n_years` that equals the number of years an individual stays in the panel. Is the panel balanced?
* Which are the individuals that do not have any kids during the whole period? Create a variable, `no_kids`, that flags these individuals (1 \= no kids, 0 \= kids)
* Using the `no_kids` variable from before compute the average wage, standard deviation and number of observations in each group for the year 1980 (no kids group vs kids group).
* Create the lagged logarithm of hours worked and wages. Remember that this is a panel.
### Exercise 3
* What does the following code do? Copy and paste it in an R interpreter to find out!
```
LaborSupply %>%
group_by(id) %>%
mutate(across(starts_with("l"), tibble::lst(lag, lead)))
```
* Using `summarise()` and `across()`, compute the mean, standard deviation and number of individuals of `lnhr` and `lnwg` for each individual.
Chapter 5 Graphs
================
By default, it is possible to make a lot of graphs with R without needing any external
packages. However, in this chapter, we are going to learn how to make graphs using `{ggplot2}`, which
is a very powerful package that produces amazing graphs. There is an entry cost to `{ggplot2}`, as it
works very differently from what you might expect, especially if you already know how to make plots
with the basic R functions. But the resulting graphs are well worth the effort, and once
you know more about `{ggplot2}` you will see that in a lot of situations it is actually faster
and easier. Another advantage is that making plots with `{ggplot2}` is consistent, so you do not need
to learn anything specific to make, say, density plots. There are a lot of extensions to `{ggplot2}`,
such as `{ggridges}` to create so\-called ridge plots and `{gganimate}` to create animated plots. By
the end of this chapter you will know how to make basic plots with `{ggplot2}` and also how to use these
two extensions.
5\.1 Resources
--------------
Before showing some examples and the general functionality of `{ggplot2}`, I list here some online
resources that I keep coming back to:
* [Data Visualization for Social Science](http://socviz.co/)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/)
* [R graph gallery](http://www.r-graph-gallery.com/portfolio/ggplot2-package/)
* [Tufte in R](http://motioninsocial.com/tufte/)
* [ggplot2 extensions](https://exts.ggplot2.tidyverse.org/)
* [ggthemes function reference](https://jrnold.github.io/ggthemes/reference/index.html)
* [ggplot2 cheatsheet](https://raw.githubusercontent.com/rstudio/cheatsheets/main/data-visualization.pdf)
When I first started using `{ggplot2}`, I had a cookbook approach to it; I tried finding examples
online that looked like what I needed, copied and pasted the code and then adapted it to my case. The above resources
are the ones I consulted and keep consulting in these situations (I also go back to past code I’ve written, of
course). Don’t hesitate to skim these resources for inspiration and to learn more about some
extensions to `{ggplot2}`. In the next subsections I am going to show you how to draw the most common
plots, as well as show you how to customize your plots with `{ggthemes}`, a package that contains pre\-defined
themes for `{ggplot2}`.
5\.2 Examples
-------------
I think that the best way to learn how to use `{ggplot2}` is to jump right into it. Let’s first start with
barplots.
### 5\.2\.1 Barplots
To follow the examples below, load the following libraries:
```
library(ggplot2)
library(ggthemes)
```
`{ggplot2}` is an implementation of the *Grammar of Graphics* by Wilkinson ([2006](#ref-wilkinson2006)), but you don’t need
to read the book to start using it. If we go back to the Star Wars data (contained in `dplyr`),
and wish to draw a barplot of the gender, the following lines are enough:
```
ggplot(starwars, aes(gender)) +
geom_bar()
```
The first argument of the function is the data (called `starwars` in this example), and then the
function `aes()`. This function is where you list the variables that you want to map to the aesthetics
of the *geom* functions. On the second line, you see that we use the `geom_bar()` function. This
function creates a barplot of the `gender` variable.
You can get different kinds of plots by using different `geom_*()` functions. You can also provide the
`aes()` argument to the `geom_*()` function directly:
```
ggplot(starwars) +
geom_bar(aes(gender))
```
The difference between these two approaches is that when you specify the aesthetics in the `ggplot()` function,
all the `geom_*()` functions that follow will inherit these aesthetics. This is useful if you want to avoid
writing the same code over and over again, but can be problematic if you need to specify different aesthetics
to different `geom_*()` functions. This will become clear in a later example.
You can add options to your plots, for instance, you can change the coordinate system in your barplot:
```
ggplot(starwars, aes(gender)) +
geom_bar() +
coord_flip()
```
This is the basic recipe to create plots using `{ggplot2}`: start with a call to `ggplot()` where you specify
the data you want to plot, and optionally the aesthetics. Then, use the `geom_*()` function you need; if you
did not specify the aesthetics in the call to the `ggplot()` function, do it here. Then, you can add different
options, such as changing the coordinate system, changing the theme, the colour palette used, changing the
position of the legend and much, much more. This chapter will only give you an overview of the capabilities
of `{ggplot2}`.
### 5\.2\.2 Scatter plots
Scatter plots are very useful, especially if you are trying to figure out the relationship between two variables.
For instance, let’s make a scatter plot of height vs weight of Star Wars characters:
```
ggplot(starwars) +
geom_point(aes(height, mass))
```
As you can see there is an outlier: a very heavy character! Star Wars fans already guessed it, it’s Jabba the Hutt.
To make the plot easier to read, let’s remove this outlier:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass))
```
There is a positive correlation between height and mass, which we can visualise by adding `geom_smooth()` with the option `method = "lm"`:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot(aes(height, mass)) +
geom_point(aes(height, mass)) +
geom_smooth(method = "lm")
```
```
## `geom_smooth()` using formula 'y ~ x'
```
I’ve moved the `aes(height, mass)` up to the `ggplot()` function because both `geom_point()` and `geom_smooth()`
need them, and as explained in the beginning of this section, the aesthetics listed in `ggplot()` get passed down
to the other geoms.
If you omit `method = "lm"`, you get a non\-parametric curve:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot(aes(height, mass)) +
geom_point(aes(height, mass)) +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### 5\.2\.3 Density
Use `geom_density()` to get density plots:
```
ggplot(starwars, aes(height)) +
geom_density()
```
```
## Warning: Removed 6 rows containing non-finite values (stat_density).
```
Let’s go into more detail now; what if you would like to plot the densities for feminines and masculines
only (removing the droids from the data first)? This can be done by first filtering the data using
`dplyr` and then separating the dataset by gender:
```
starwars %>%
filter(gender %in% c("feminine", "masculine"))
```
The above lines do the filtering: only keep rows where `gender` is in the vector `c("feminine", "masculine")`.
This is much easier than having to write `gender == "feminine" | gender == "masculine"`. Then, we pipe
this dataset to `ggplot()`:
```
starwars %>%
filter(gender %in% c("feminine", "masculine")) %>%
ggplot(aes(height, fill = gender)) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
Let’s take a closer look at the `aes()` function: I’ve added `fill = gender`. This means that
there will be one density plot for each gender in the data, and each will be coloured accordingly.
This is where `{ggplot2}` might be confusing; there is no need to write explicitly (even if it is
possible) that you want the *feminine* density to be red and the *masculine* density to be blue. You just
map the variable `gender` to this particular aesthetic. You conclude the plot by adding
`geom_density()`, which in this case is the plot you want. We will see later how to change the
colours of your plot.
An alternative way to write this code is first to save the filtered data in a variable, and define
the aesthetics inside the `geom_density()` function:
```
filtered_data <- starwars %>%
filter(gender %in% c("feminine", "masculine"))
ggplot(filtered_data) +
geom_density(aes(height, fill = gender))
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
### 5\.2\.4 Line plots
For the line plots, we are going to use official unemployment data (the same as in the previous
chapter, but with all the available years). Get it from
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets/unemployment/all)
(downloaded from the website of the [Luxembourgish national statistical institute](https://lustat.statec.lu/vis?pg=0&df%5Bds%5D=release&df%5Bid%5D=DF_X026&df%5Bag%5D=LU1&df%5Bvs%5D=1.0&pd=2021%2C&dq=..A&ly%5Brw%5D=SPECIFICATION&ly%5Bcl%5D=VARIABLE&lc=en)).
Let’s plot the unemployment for the canton of Luxembourg only:
```
# import() comes from the {rio} package
unemp_lux_data <- import("datasets/unemployment/all/unemployment_lux_all.csv")
unemp_lux_data %>%
filter(division == "Luxembourg") %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = 1)) +
geom_line()
```
Because line plots are 2D, you need to specify the y and x axes. There is also another option you
need to add, `group = 1`. This is to tell `aes()` that the dots have to be connected with a single
line. What if you want to plot more than one commune?
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette")) %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) +
geom_line()
```
This time, I’ve specified `group = division`, which means that there will be one line per commune, as many
as there are in the variable `division`. I do the same for colours. I think the next example
illustrates how `{ggplot2}` is actually brilliant; if you need to add a third commune, there is no
need to specify anything else; no need to add anything to the legend, no need to specify a third
colour etc:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) +
geom_line()
```
The three communes get mapped to the colour aesthetic so whatever the number of communes, as long
as there are enough colours, the communes will each get mapped to one of these colours.
### 5\.2\.5 Facets
In some cases you have a factor variable that separates the data you wish to plot into different
categories. If you want to have one plot per category, you can use the `facet_grid()` function.
Careful though, this function does not take a variable as an argument, but a formula, hence the `~`
symbol in the code below:
```
starwars %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(. ~ human) + #<--- this is a formula
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
I first created a factor variable that specifies if a Star Wars character is human or not, and then
used it for faceting. By changing the formula, you change how the faceting is done:
```
starwars %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(human ~ .) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
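As an aside, `facet_wrap()` takes a similar formula but lays the panels out automatically, which is convenient for a single faceting variable; this sketch produces panels much like the first example:
```
starwars %>%
  mutate(human = case_when(species == "Human" ~ "Human",
                           species != "Human" ~ "Not Human")) %>%
  filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
  ggplot(aes(height, fill = gender)) +
  facet_wrap(~human) +
  geom_density()
```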
Recall the categorical variable `more_1` that we computed in the previous chapter? Let’s use it as
a faceting variable:
```
starwars %>%
rowwise() %>%
mutate(n_films = length(films)) %>%
mutate(more_1 = case_when(n_films == 1 ~ "Exactly one movie",
n_films != 1 ~ "More than 1 movie")) %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(human ~ more_1) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
### 5\.2\.6 Pie Charts
I am not a huge fan of pie charts, but sometimes this is what you have to do. So let’s see how you
can create pie charts.
First, let’s create a mock dataset with the function `tibble::tribble()` which allows you to create a
dataset line by line:
```
test_data <- tribble(
~id, ~var1, ~var2, ~var3, ~var4, ~var5,
"a", 26.5, 38, 30, 32, 34,
"b", 30, 30, 28, 32, 30,
"c", 34, 32, 30, 28, 26.5
)
```
This data is in the wide format though; we need to have it in the long format
for it to work with `{ggplot2}`. For this, let’s use `tidyr::gather()` as seen in the previous chapter:
```
test_data_long <- test_data %>%
gather(variable, value, starts_with("var"))
```
Now, let’s plot this data, first by creating 3 bar plots:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity")
```
In the code above, I introduce a new option, called `stat = "identity"`. By default, `geom_bar()` counts
the number of observations of each category that is plotted, which is a statistical transformation.
By adding `stat = "identity"`, I force the statistical transformation to be the identity function, and
thus plot the data as is.
To create the pie chart, we first need to compute the share of each variable (`var1`, `var2`, etc.) within each `id`.
To do this, we first group by `id`, then compute the total. Then we use a new function, `ungroup()`.
After using `ungroup()`, all the computations are done on the whole dataset instead of by group, which
is what we need to compute the share:
```
test_data_long <- test_data_long %>%
group_by(id) %>%
mutate(total = sum(value)) %>%
ungroup() %>%
mutate(share = value/total)
```
Let’s take a look to see if this is what we wanted:
```
print(test_data_long)
```
```
## # A tibble: 15 × 5
## id variable value total share
## <chr> <chr> <dbl> <dbl> <dbl>
## 1 a var1 26.5 160. 0.165
## 2 b var1 30 150 0.2
## 3 c var1 34 150. 0.226
## 4 a var2 38 160. 0.237
## 5 b var2 30 150 0.2
## 6 c var2 32 150. 0.213
## 7 a var3 30 160. 0.187
## 8 b var3 28 150 0.187
## 9 c var3 30 150. 0.199
## 10 a var4 32 160. 0.199
## 11 b var4 32 150 0.213
## 12 c var4 28 150. 0.186
## 13 a var5 34 160. 0.212
## 14 b var5 30 150 0.2
## 15 c var5 26.5 150. 0.176
```
If you didn’t understand what `ungroup()` did, rerun the last few lines without it and compare the
output.
To plot the pie chart, we create a barplot again, but specify polar coordinates:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(y = share, x = "", fill = variable), stat = "identity") +
theme() +
coord_polar("y", start = 0)
```
As you can see, this typical pie chart is not very easy to read; compared to the barplots above, it
is hard to tell whether `a` has a higher share than `b` or `c`. You can change the look of the
pie chart, for example by specifying `variable` as the `x`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(y = share, x = variable, fill = variable), stat = "identity") +
theme() +
coord_polar("x", start = 0)
```
But as a general rule, avoid pie charts if possible. I find that pie charts are only interesting if
you need to show proportions that are hugely unequal, to really emphasize the difference between
said proportions.
### 5\.2\.7 Adding text to plots
Sometimes you might want to add some text to your plots. This is possible with `geom_text()`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value))
```
You can put anything after `label =`, but in general what you want are the values, so that’s what
I put there. But you can also refine it; imagine the values are actually in euros:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = paste(value, "€")))
```
You can also achieve something similar with `geom_label()`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_label(aes(variable, value + 1.5, label = paste(value, "€")))
```
5\.3 Customization
------------------
Every plot you’ve seen until now was made with the default look of `{ggplot2}`. If you want to change
the look, you can apply a theme, and a colour scheme. Let’s take a look at themes first by using the
ones found in the package `ggthemes`. But first, let’s learn how to change the names of the axes
and how to title a plot.
### 5\.3\.1 Changing titles, axes labels, options, mixing geoms and changing themes
The name of this subsection is quite long, but this is because everything is kind of linked. Let’s
start by learning what the `labs()` function does. To change the title of the plot, and of the axes,
you need to pass the names to the `labs()` function:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
What if you want to make the lines thicker?
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line(size = 2)
```
Each `geom_*()` function has its own options. Notice that the `size = 2` argument is not inside
an `aes()` function. This is because I do not want to map a variable of the data to the size
of the line; in other words, I do not want to make the size of the line proportional to a certain
variable in the data. Recall the scatter plot we did earlier, where we showed that height and mass of
Star Wars characters increase together? Let’s take this plot again, but make the size of the dots proportional
to the birth year of the character:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year))
```
Making the size proportional to the birth year (the age would have been more informative) allows
us to see a third dimension. It is also possible to “see” a fourth dimension, the gender for instance,
by changing the colour of the dots:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender))
```
As I promised above, we are now going to learn how to add a regression line to this scatter plot:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender)) +
geom_smooth(aes(height, mass), method = "lm")
```
```
## `geom_smooth()` using formula 'y ~ x'
```
`geom_smooth()` adds a regression line, but only if you specify `method = "lm"` (“lm” stands for
“linear model”). What happens if you remove this option?
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender)) +
geom_smooth(aes(height, mass))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
By default, `geom_smooth()` does a non\-parametric regression called LOESS (locally estimated scatterplot smoothing),
which is more flexible. It is also possible to have one regression line by gender:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender)) +
geom_smooth(aes(height, mass, colour = gender))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Because there are only a few observations for feminines, and because of the `NA`s, the regression lines are not very informative,
but this was only an example to show you some options of `geom_smooth()`.
Let’s go back to the unemployment line plots. For now, let’s keep the base `{ggplot2}` theme, but
modify it a bit. For example, the legend placement is actually a feature of the theme. This means
that if you want to change where the legend is placed you need to modify this feature from the
theme. This is done with the function `theme()`:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom") +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
What I also like to do is remove the title of the legend, because it is often superfluous:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
The legend title has to be an `element_text` object. `element_text` objects are used with `theme()` to
specify how text should be displayed. `element_blank()` draws nothing and assigns no space (not
even blank space). If you want to keep the legend title but change it, you need to use `element_text()`:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom", legend.title = element_text(colour = "red")) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
If you want to change the word “division” to something else, you can do so by providing the `colour` argument
to the `labs()` function:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom") +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate", colour = "Administrative division") +
geom_line()
```
You could modify every feature of the theme like that, but there are built\-in themes that you can use:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_minimal() +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
For example, in the code above I have used `theme_minimal()`, which I like quite a lot. You can also
use themes from the `{ggthemes}` package, which even contains a Stata theme, if you like it:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_stata() +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
As you can see, `theme_stata()` has the legend on the bottom by default, because this is how the
legend position is defined within the theme. However the legend title is still there. Let’s remove
it:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_stata() +
theme(legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
`ggthemes` even features an Excel 2003 theme (don’t use it though):
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_excel() +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
You can create your own theme by using a simple theme, such as `theme_minimal()` as a base
and then add your options. We are going to create one theme after we learn how to create our
own functions, in Chapter 7\. Then, we are going to create a package to share this theme with
the world, and we are going to learn how to make packages in Chapter 9\.
### 5\.3\.2 Colour schemes
You can also change colour schemes, by specifying either `scale_colour_*()` or `scale_fill_*()`
functions. `scale_colour_*()` functions modify the `colour` aesthetic (used for points and lines, for
example), while `scale_fill_*()` functions modify the `fill` aesthetic (used for barplots, for example).
A colour scheme I like is the [Highcharts](https://www.highcharts.com/) colour scheme.
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_minimal() +
scale_colour_hc() +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
An example with a barplot:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value)) +
theme_minimal() +
scale_fill_hc()
```
It is also possible to define and use your own palette.
To use your own colours you can use `scale_colour_manual()` and `scale_fill_manual()` and specify
the hex codes of the colours you want to use.
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_minimal() +
scale_colour_manual(values = c("#FF336C", "#334BFF", "#2CAE00")) +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
To get the hex codes of colours you can use [this online
tool](http://htmlcolorcodes.com/color-picker/).
There is also a very nice package, called `colourpicker`, that allows you to
pick colours from within RStudio. Also, you do not even need to load it to use
it, since it comes with an Addin:
For a barplot you would do the same:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value)) +
theme_minimal() +
theme(legend.position = "bottom", legend.title = element_blank()) +
scale_fill_manual(values = c("#FF336C", "#334BFF", "#2CAE00", "#B3C9C6", "#765234"))
```
For continuous variables, things are a bit different. Let’s first create a plot where we map a continuous
variable to the colour argument of `aes()`:
```
ggplot(diamonds) +
geom_point(aes(carat, price, colour = depth))
```
To change the colour, we need to use `scale_color_gradient()` and specify a value for low values of the variable,
and a value for high values of the variable. For example, using the colours of the theme I made for my blog:
```
ggplot(diamonds) +
geom_point(aes(carat, price, colour = depth)) +
scale_color_gradient(low = "#bec3b8", high = "#ad2c6c")
```
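`{ggplot2}` also ships with its own continuous palettes; for example, the colour\-blind friendly viridis scale can be applied with `scale_colour_viridis_c()`:
```
ggplot(diamonds) +
  geom_point(aes(carat, price, colour = depth)) +
  scale_colour_viridis_c()
```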
5\.4 Saving plots to disk
-------------------------
There are two ways to save plots to disk: one through the *Plots* pane in RStudio and another using the
`ggsave()` function. Using RStudio, navigate to the *Plots* pane and click on *Export*. You can
then choose where to save the plot and other various options:
This is fine if you only generate one or two plots but if you generate a large number of them, it
is less tedious to use the `ggsave()` function:
```
my_plot1 <- ggplot(my_data) +
geom_bar(aes(variable))
ggsave("path/you/want/to/save/the/plot/to/my_plot1.pdf", my_plot1)
```
There are other options that you can specify such as the width and height, resolution, units,
etc…
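For example, here is a sketch that saves the same plot as a PNG with explicit dimensions and resolution (the path is illustrative, as above):
```
ggsave("path/you/want/to/save/the/plot/to/my_plot1.png", my_plot1,
       width = 20, height = 10, units = "cm", dpi = 300)
```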
5\.5 Exercises
--------------
### Exercise 1
Load the `Bwages` dataset from the `Ecdat` package. Your first task is to create a new variable,
`educ_level`, which is a factor variable that equals:
* “Primary school” if `educ == 1`
* “High school” if `educ == 2`
* “Some university” if `educ == 3`
* “Master’s degree” if `educ == 4`
* “Doctoral degree” if `educ == 5`
Use `case_when()` for this.
Then, plot a scatter plot of wages on experience, by education level. Add a theme that you like,
and remove the title of the legend.
The scatter plot is not very useful, because you cannot make anything out. Instead, use another
geom that shows you a non\-parametric fit with confidence bands.
5\.1 Resources
--------------
Before showing some examples and the general functionality of `{ggplot2}`, I list here some online
resources that I keep coming back to:
* [Data Visualization for Social Science](http://socviz.co/)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/)
* [R graph gallery](http://www.r-graph-gallery.com/portfolio/ggplot2-package/)
* [Tufte in R](http://motioninsocial.com/tufte/)
* [ggplot2 extensions](https://exts.ggplot2.tidyverse.org/)
* [ggthemes function reference](https://jrnold.github.io/ggthemes/reference/index.html)
* [ggplot2 cheatsheet](https://raw.githubusercontent.com/rstudio/cheatsheets/main/data-visualization.pdf)
When I first started using `{ggplot2}`, I had a cookbook approach to it; I tried findinge examples
online that looked like what I needed, copy and paste the code and then adapted it to my case. The above resources
are the ones I consulted and keep consulting in these situations (I also go back to past code I’ve written, of
course). Don’t hesitate to skim these resources for inspiration and to learn more about some
extensions to `{ggplot2}`. In the next subsections I am going to show you how to draw the most common
plots, as well as show you how to customize your plots with `{ggthemes}`, a package that contains pre\-defined
themes for `{ggplot2}`.
5\.2 Examples
-------------
I think that the best way to learn how to use `{ggplot2}` is to jump right into it. Let’s first start with
barplots.
### 5\.2\.1 Barplots
To follow the examples below, load the following libraries:
```
library(ggplot2)
library(ggthemes)
```
`{ggplot2}` is an implementation of the *Grammar of Graphics* by Wilkinson ([2006](#ref-wilkinson2006)), but you don’t need
to read the books to start using it. If we go back to the Star Wars data (contained in `dplyr`),
and wish to draw a barplot of the gender, the following lines are enough:
```
ggplot(starwars, aes(gender)) +
geom_bar()
```
The first argument of the function is the data (called `starwars` in this example), and then the
function `aes()`. This function is where you list the variables that you want to map to the aesthetics
of the *geoms* functions. On the second line, you see that we use the `geom_bar()` function. This
function creates a barplot of `gender` variable.
You can get different kind of plots by using different `geom_` functions. You can also provide the
`aes()` argument to the `geom_*()` function:
```
ggplot(starwars) +
geom_bar(aes(gender))
```
The difference between these two approaches is that when you specify the aesthetics in the `ggplot()` function,
all the `geom_*()` functions that follow will inherited these aesthetics. This is useful if you want to avoid
writing the same code over and over again, but can be problematic if you need to specify different aesthetics
to different `geom_*()` functions. This will become clear in a later example.
You can add options to your plots, for instance, you can change the coordinate system in your barplot:
```
ggplot(starwars, aes(gender)) +
geom_bar() +
coord_flip()
```
This is the basic recipe to create plots using `{ggplot2}`: start with a call to `ggplot()` where you specify
the data you want to plot, and optionally the aesthetics. Then, use the `geom_*()` function you need; if you
did not specify the aesthetics in the call to the `ggplot()` function, do it here. Then, you can add different
options, such as changing the coordinate system, changing the theme, the colour palette used, changing the
position of the legend and much, much more. This chapter will only give you an overview of the capabilities
of `{ggplot2}`.
### 5\.2\.2 Scatter plots
Scatter plots are very useful, especially if you are trying to figure out the relationship between two variables.
For instance, let’s make a scatter plot of height vs weight of Star Wars characters:
```
ggplot(starwars) +
geom_point(aes(height, mass))
```
As you can see there is an outlier; a very heavy character! Star Wars fans already guessed it, it’s Jabba the Hut.
To make the plot easier to read, let’s remove this outlier:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass))
```
There is a positive correlation between height and mass, by adding `geom_smooth()` with the option `method = "lm"`:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot(aes(height, mass)) +
geom_point(aes(height, mass)) +
geom_smooth(method = "lm")
```
```
## `geom_smooth()` using formula 'y ~ x'
```
I’ve moved the `aes(height, mass)` up to the `ggplot()` function because both `geom_point()` and `geom_smooth()`
need them, and as explained in the begging of this section, the aesthetics listed in `ggplot()` get passed down
to the other geoms.
If you omit `method = "lm`, you get a non\-parametric curve:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot(aes(height, mass)) +
geom_point(aes(height, mass)) +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### 5\.2\.3 Density
Use `geom_density()` to get density plots:
```
ggplot(starwars, aes(height)) +
geom_density()
```
```
## Warning: Removed 6 rows containing non-finite values (stat_density).
```
Let’s go into more detail now; what if you would like to plot the densities for feminines and masculines
only (removing the droids from the data first)? This can be done by first filtering the data using
`dplyr` and then separating the dataset by gender:
```
starwars %>%
filter(gender %in% c("feminine", "masculine"))
```
The above lines do the filtering; only keep gender if gender is in the vector `"feminine", "masculine"`.
This is much easier than having to write `gender == "feminine" | gender == "masculine"`. Then, we pipe
this dataset to `ggplot`:
```
starwars %>%
filter(gender %in% c("feminine", "masculine")) %>%
ggplot(aes(height, fill = gender)) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
Let’s take a closer look to the `aes()` function: I’ve added `fill = gender`. This means that
there will be one density plot for each gender in the data, and each will be coloured accordingly.
This is where `{ggplot2}` might be confusing; there is no need to write explicitly (even if it is
possible) that you want the *feminine* density to be red and the *masculine* density to be blue. You just
map the variable `gender` to this particular aesthetic. You conclude the plot by adding
`geom_density()` which is this case is the plot you want. We will see later how to change the
colours of your plot.
An alternative way to write this code is first to save the filtered data in a variable, and define
the aesthetics inside the `geom_density()` function:
```
filtered_data <- starwars %>%
filter(gender %in% c("feminine", "masculine"))
ggplot(filtered_data) +
geom_density(aes(height, fill = gender))
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
### 5\.2\.4 Line plots
For the line plots, we are going to use official unemployment data (the same as in the previous
chapter, but with all the available years). Get it from
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets/unemployment/all)
(downloaded from the website of the [Luxembourguish national statistical institute](https://lustat.statec.lu/vis?pg=0&df%5Bds%5D=release&df%5Bid%5D=DF_X026&df%5Bag%5D=LU1&df%5Bvs%5D=1.0&pd=2021%2C&dq=..A&ly%5Brw%5D=SPECIFICATION&ly%5Bcl%5D=VARIABLE&lc=en).
Let’s plot the unemployment for the canton of Luxembourg only:
```
unemp_lux_data <- import("datasets/unemployment/all/unemployment_lux_all.csv")
unemp_lux_data %>%
filter(division == "Luxembourg") %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = 1)) +
geom_line()
```
Because line plots are 2D, you need to specify the y and x axes. There is also another option you
need to add, `group = 1`. This is to tell `aes()` that the dots have to be connected with a single
line. What if you want to plot more than one commune?
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette")) %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) +
geom_line()
```
This time, I’ve specified `group = division` which means that there has to be one line per as many
communes as in the variable `division`. I do the same for colours. I think the next example
illustrates how `{ggplot2}` is actually brilliant; if you need to add a third commune, there is no
need to specify anything else; no need to add anything to the legend, no need to specify a third
colour etc:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) +
geom_line()
```
The three communes get mapped to the colour aesthetic so whatever the number of communes, as long
as there are enough colours, the communes will each get mapped to one of these colours.
### 5\.2\.5 Facets
In some case you have a factor variable that separates the data you wish to plot into different
categories. If you want to have a plot per category you can use the `facet_grid()` function.
Careful though, this function does not take a variable as an argument, but a formula, hence the `~`
symbol in the code below:
```
starwars %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(. ~ human) + #<--- this is a formula
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
I first created a factor variable that specifies if a Star Wars character is human or not, and then
use it for facetting. By changing the formula, you change how the facetting is done:
```
starwars %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(human ~ .) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
Recall the categorical variable `more_1` that we computed in the previous chapter? Let’s use it as
a faceting variable:
```
starwars %>%
rowwise() %>%
mutate(n_films = length(films)) %>%
mutate(more_1 = case_when(n_films == 1 ~ "Exactly one movie",
n_films != 1 ~ "More than 1 movie")) %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(human ~ more_1) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
### 5\.2\.6 Pie Charts
I am not a huge fan of pie charts, but sometimes this is what you have to do. So let’s see how you
can create pie charts.
First, let’s create a mock dataset with the function `tibble::tribble()` which allows you to create a
dataset line by line:
```
test_data <- tribble(
~id, ~var1, ~var2, ~var3, ~var4, ~var5,
"a", 26.5, 38, 30, 32, 34,
"b", 30, 30, 28, 32, 30,
"c", 34, 32, 30, 28, 26.5
)
```
This data is not in the right format though, which is wide. We need to have it in the long format
for it to work with `{ggplot2}`. For this, let’s use `tidyr::gather()` as seen in the previous chapter:
```
test_data_long = test_data %>%
gather(variable, value, starts_with("var"))
```
Now, let’s plot this data, first by creating 3 bar plots:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity")
```
In the code above, I introduce a new option, called `stat = "identity"`. By default, `geom_bar()` counts
the number of observations of each category that is plotted, which is a statistical transformation.
By adding `stat = "identity"`, I force the statistical transformation to be the identity function, and
thus plot the data as is.
To create the pie chart, first we need to compute the share of each `id` to `var1`, `var2`, etc…
To do this, we first group by `id`, then compute the total. Then we use a new function `ungroup()`.
After using `ungroup()` all the computations are done on the whole dataset instead of by group, which
is what we need to compute the share:
```
test_data_long <- test_data_long %>%
group_by(id) %>%
mutate(total = sum(value)) %>%
ungroup() %>%
mutate(share = value/total)
```
Let’s take a look to see if this is what we wanted:
```
print(test_data_long)
```
```
## # A tibble: 15 × 5
## id variable value total share
## <chr> <chr> <dbl> <dbl> <dbl>
## 1 a var1 26.5 160. 0.165
## 2 b var1 30 150 0.2
## 3 c var1 34 150. 0.226
## 4 a var2 38 160. 0.237
## 5 b var2 30 150 0.2
## 6 c var2 32 150. 0.213
## 7 a var3 30 160. 0.187
## 8 b var3 28 150 0.187
## 9 c var3 30 150. 0.199
## 10 a var4 32 160. 0.199
## 11 b var4 32 150 0.213
## 12 c var4 28 150. 0.186
## 13 a var5 34 160. 0.212
## 14 b var5 30 150 0.2
## 15 c var5 26.5 150. 0.176
```
If you didn’t understand what `ungroup()` did, rerun the last few lines with it and inspect the
output.
To plot the pie chart, we create a barplot again, but specify polar coordinates:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(y = share, x = "", fill = variable), stat = "identity") +
theme() +
coord_polar("y", start = 0)
```
As you can see, this typical pie chart is not very easy to read; compared to the barplots above it
is not easy to distinguish if `a` has a higher share than `b` or `c`. You can change the look of the
pie chart, for example by specifying `variable` as the `x`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(y = share, x = variable, fill = variable), stat = "identity") +
theme() +
coord_polar("x", start = 0)
```
But as a general rule, avoid pie charts if possible. I find that pie charts are only interesting if
you need to show proportions that are hugely unequal, to really emphasize the difference between
said proportions.
### 5\.2\.7 Adding text to plots
Sometimes you might want to add some text to your plots. This is possible with `geom_text()`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value))
```
You can put anything after `label =` but in general what you want are the values, so that’s what
I put there. But you can also refine it, imagine the values are actually in euros:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = paste(value, "€")))
```
You can also achieve something similar with `geom_label()`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_label(aes(variable, value + 1.5, label = paste(value, "€")))
```
### 5\.2\.1 Barplots
To follow the examples below, load the following libraries:
```
library(ggplot2)
library(ggthemes)
```
`{ggplot2}` is an implementation of the *Grammar of Graphics* by Wilkinson ([2006](#ref-wilkinson2006)), but you don’t need
to read the books to start using it. If we go back to the Star Wars data (contained in `dplyr`),
and wish to draw a barplot of the gender, the following lines are enough:
```
ggplot(starwars, aes(gender)) +
geom_bar()
```
The first argument of the function is the data (called `starwars` in this example), and then the
function `aes()`. This function is where you list the variables that you want to map to the aesthetics
of the *geoms* functions. On the second line, you see that we use the `geom_bar()` function. This
function creates a barplot of `gender` variable.
You can get different kind of plots by using different `geom_` functions. You can also provide the
`aes()` argument to the `geom_*()` function:
```
ggplot(starwars) +
geom_bar(aes(gender))
```
The difference between these two approaches is that when you specify the aesthetics in the `ggplot()` function,
all the `geom_*()` functions that follow will inherited these aesthetics. This is useful if you want to avoid
writing the same code over and over again, but can be problematic if you need to specify different aesthetics
to different `geom_*()` functions. This will become clear in a later example.
You can add options to your plots, for instance, you can change the coordinate system in your barplot:
```
ggplot(starwars, aes(gender)) +
geom_bar() +
coord_flip()
```
This is the basic recipe to create plots using `{ggplot2}`: start with a call to `ggplot()` where you specify
the data you want to plot, and optionally the aesthetics. Then, use the `geom_*()` function you need; if you
did not specify the aesthetics in the call to the `ggplot()` function, do it here. Then, you can add different
options, such as changing the coordinate system, changing the theme, the colour palette used, changing the
position of the legend and much, much more. This chapter will only give you an overview of the capabilities
of `{ggplot2}`.
### 5\.2\.2 Scatter plots
Scatter plots are very useful, especially if you are trying to figure out the relationship between two variables.
For instance, let’s make a scatter plot of height vs weight of Star Wars characters:
```
ggplot(starwars) +
geom_point(aes(height, mass))
```
As you can see there is an outlier; a very heavy character! Star Wars fans already guessed it, it’s Jabba the Hut.
To make the plot easier to read, let’s remove this outlier:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass))
```
There is a positive correlation between height and mass, by adding `geom_smooth()` with the option `method = "lm"`:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot(aes(height, mass)) +
geom_point(aes(height, mass)) +
geom_smooth(method = "lm")
```
```
## `geom_smooth()` using formula 'y ~ x'
```
I’ve moved the `aes(height, mass)` up to the `ggplot()` function because both `geom_point()` and `geom_smooth()`
need them, and as explained in the begging of this section, the aesthetics listed in `ggplot()` get passed down
to the other geoms.
If you omit `method = "lm`, you get a non\-parametric curve:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot(aes(height, mass)) +
geom_point(aes(height, mass)) +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### 5\.2\.3 Density
Use `geom_density()` to get density plots:
```
ggplot(starwars, aes(height)) +
geom_density()
```
```
## Warning: Removed 6 rows containing non-finite values (stat_density).
```
Let’s go into more detail now; what if you would like to plot the densities for feminines and masculines
only (removing the droids from the data first)? This can be done by first filtering the data using
`dplyr` and then separating the dataset by gender:
```
starwars %>%
filter(gender %in% c("feminine", "masculine"))
```
The above lines do the filtering: only keep rows where `gender` is one of the values in the vector `c("feminine", "masculine")`.
This is much easier than having to write `gender == "feminine" | gender == "masculine"`. Then, we pipe
this dataset to `ggplot`:
```
starwars %>%
filter(gender %in% c("feminine", "masculine")) %>%
ggplot(aes(height, fill = gender)) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
Let’s take a closer look at the `aes()` function: I’ve added `fill = gender`. This means that
there will be one density plot for each gender in the data, and each will be coloured accordingly.
This is where `{ggplot2}` might be confusing; there is no need to write explicitly (even if it is
possible) that you want the *feminine* density to be red and the *masculine* density to be blue. You just
map the variable `gender` to this particular aesthetic. You conclude the plot by adding
`geom_density()`, which in this case is the plot you want. We will see later how to change the
colours of your plot.
An alternative way to write this code is first to save the filtered data in a variable, and define
the aesthetics inside the `geom_density()` function:
```
filtered_data <- starwars %>%
filter(gender %in% c("feminine", "masculine"))
ggplot(filtered_data) +
geom_density(aes(height, fill = gender))
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
### 5\.2\.4 Line plots
For the line plots, we are going to use official unemployment data (the same as in the previous
chapter, but with all the available years). Get it from
[here](https://github.com/b-rodrigues/modern_R/tree/master/datasets/unemployment/all)
(downloaded from the website of the [Luxembourgish national statistical institute](https://lustat.statec.lu/vis?pg=0&df%5Bds%5D=release&df%5Bid%5D=DF_X026&df%5Bag%5D=LU1&df%5Bvs%5D=1.0&pd=2021%2C&dq=..A&ly%5Brw%5D=SPECIFICATION&ly%5Bcl%5D=VARIABLE&lc=en)).
Let’s plot the unemployment for the canton of Luxembourg only:
```
# import() is provided by the {rio} package
unemp_lux_data <- import("datasets/unemployment/all/unemployment_lux_all.csv")
unemp_lux_data %>%
filter(division == "Luxembourg") %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = 1)) +
geom_line()
```
Because line plots are 2D, you need to specify the y and x axes. There is also another option you
need to add, `group = 1`. This is to tell `aes()` that the dots have to be connected with a single
line. What if you want to plot more than one commune?
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette")) %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) +
geom_line()
```
This time, I’ve specified `group = division`, which means that there will be one line per commune
in the variable `division`. I do the same for colours. I think the next example
illustrates how `{ggplot2}` is actually brilliant; if you need to add a third commune, there is no
need to specify anything else: no need to add anything to the legend, no need to specify a third
colour, etc.:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(x = year, y = unemployment_rate_in_percent, group = division, colour = division)) +
geom_line()
```
The communes get mapped to the colour aesthetic, so however many communes there are, each gets
mapped to its own colour, as long as the palette has enough of them.
### 5\.2\.5 Facets
In some cases you have a factor variable that separates the data you wish to plot into different
categories. If you want to have one plot per category, you can use the `facet_grid()` function.
Careful though, this function does not take a variable as an argument, but a formula, hence the `~`
symbol in the code below:
```
starwars %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(. ~ human) + #<--- this is a formula
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
I first created a factor variable that specifies if a Star Wars character is human or not, and then
used it for faceting. By changing the formula, you change how the faceting is done:
```
starwars %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(human ~ .) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
Recall the categorical variable `more_1` that we computed in the previous chapter? Let’s use it as
a faceting variable:
```
starwars %>%
rowwise() %>%
mutate(n_films = length(films)) %>%
mutate(more_1 = case_when(n_films == 1 ~ "Exactly one movie",
n_films != 1 ~ "More than 1 movie")) %>%
mutate(human = case_when(species == "Human" ~ "Human",
species != "Human" ~ "Not Human")) %>%
filter(gender %in% c("feminine", "masculine"), !is.na(human)) %>%
ggplot(aes(height, fill = gender)) +
facet_grid(human ~ more_1) +
geom_density()
```
```
## Warning: Removed 5 rows containing non-finite values (stat_density).
```
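`facet_grid()` has a close cousin, `facet_wrap()`, which takes a one\-sided formula and lays the panels out in a grid that wraps; we will use it in the next section. A minimal sketch, reusing the filtered data from above:
```
starwars %>%
  filter(gender %in% c("feminine", "masculine")) %>%
  ggplot(aes(height)) +
  geom_density() +
  facet_wrap(~gender) # one panel per value of gender
```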
### 5\.2\.6 Pie Charts
I am not a huge fan of pie charts, but sometimes this is what you have to do. So let’s see how you
can create pie charts.
First, let’s create a mock dataset with the function `tibble::tribble()` which allows you to create a
dataset line by line:
```
test_data <- tribble(
~id, ~var1, ~var2, ~var3, ~var4, ~var5,
"a", 26.5, 38, 30, 32, 34,
"b", 30, 30, 28, 32, 30,
"c", 34, 32, 30, 28, 26.5
)
```
This data is in the wide format though; we need to have it in the long format
for it to work with `{ggplot2}`. For this, let’s use `tidyr::gather()` as seen in the previous chapter:
```
test_data_long <- test_data %>%
  tidyr::gather(variable, value, starts_with("var"))
```
Now, let’s plot this data, first by creating 3 bar plots:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity")
```
In the code above, I introduce a new option, called `stat = "identity"`. By default, `geom_bar()` counts
the number of observations of each category that is plotted, which is a statistical transformation.
By adding `stat = "identity"`, I force the statistical transformation to be the identity function, and
thus plot the data as is.
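As an aside, `geom_col()` is a shortcut for `geom_bar(stat = "identity")`, so the plot above could equivalently be written as:
```
ggplot(test_data_long) +
  facet_wrap(~id) +
  geom_col(aes(variable, value, fill = variable)) # same as geom_bar(stat = "identity")
```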
To create the pie chart, we first need to compute the share of each variable (`var1`, `var2`, etc.) within each `id`.
To do this, we first group by `id`, then compute the total. Then we use a new function, `ungroup()`.
After using `ungroup()`, all the computations are done on the whole dataset instead of by group, which
is what we need to compute the share:
```
test_data_long <- test_data_long %>%
group_by(id) %>%
mutate(total = sum(value)) %>%
ungroup() %>%
mutate(share = value/total)
```
Let’s take a look to see if this is what we wanted:
```
print(test_data_long)
```
```
## # A tibble: 15 × 5
## id variable value total share
## <chr> <chr> <dbl> <dbl> <dbl>
## 1 a var1 26.5 160. 0.165
## 2 b var1 30 150 0.2
## 3 c var1 34 150. 0.226
## 4 a var2 38 160. 0.237
## 5 b var2 30 150 0.2
## 6 c var2 32 150. 0.213
## 7 a var3 30 160. 0.187
## 8 b var3 28 150 0.187
## 9 c var3 30 150. 0.199
## 10 a var4 32 160. 0.199
## 11 b var4 32 150 0.213
## 12 c var4 28 150. 0.186
## 13 a var5 34 160. 0.212
## 14 b var5 30 150 0.2
## 15 c var5 26.5 150. 0.176
```
If you didn’t understand what `ungroup()` did, rerun the last few lines without it and inspect the
output.
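As a minimal sketch of what grouping changes, compare a grouped and an ungrouped summary of the same data:
```
# with the grouping active: one total per id
test_data_long %>%
  group_by(id) %>%
  summarise(total = sum(value))

# after ungroup(): one total for the whole dataset
test_data_long %>%
  group_by(id) %>%
  ungroup() %>%
  summarise(total = sum(value))
```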
To plot the pie chart, we create a barplot again, but specify polar coordinates:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(y = share, x = "", fill = variable), stat = "identity") +
theme() +
coord_polar("y", start = 0)
```
As you can see, this typical pie chart is not very easy to read; compared to the barplots above it
is not easy to distinguish if `a` has a higher share than `b` or `c`. You can change the look of the
pie chart, for example by specifying `variable` as the `x`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(y = share, x = variable, fill = variable), stat = "identity") +
theme() +
coord_polar("x", start = 0)
```
But as a general rule, avoid pie charts if possible. I find that pie charts are only interesting if
you need to show proportions that are hugely unequal, to really emphasize the difference between
said proportions.
### 5\.2\.7 Adding text to plots
Sometimes you might want to add some text to your plots. This is possible with `geom_text()`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value))
```
You can put anything after `label =` but in general what you want are the values, so that’s what
I put there. But you can also refine it, imagine the values are actually in euros:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = paste(value, "€")))
```
You can also achieve something similar with `geom_label()`:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_label(aes(variable, value + 1.5, label = paste(value, "€")))
```
5\.3 Customization
------------------
Every plot you’ve seen until now was made with the default look of `{ggplot2}`. If you want to change
the look, you can apply a theme, and a colour scheme. Let’s take a look at themes first by using the
ones found in the package `ggthemes`. But first, let’s learn how to change the names of the axes
and how to title a plot.
### 5\.3\.1 Changing titles, axes labels, options, mixing geoms and changing themes
The name of this subsection is quite long, but this is because everything is kind of linked. Let’s
start by learning what the `labs()` function does. To change the title of the plot, and of the axes,
you need to pass the names to the `labs()` function:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
What if you want to make the lines thicker?
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line(size = 2)
```
Each `geom_*()` function has its own options. Notice that the `size = 2` argument is not inside
an `aes()` function. This is because I do not want to map a variable of the data to the size
of the line; in other words, I do not want to make the size of the line proportional to a certain
variable in the data. Recall the scatter plot we did earlier, where we showed that height and mass of
Star Wars characters increase together? Let’s take this plot again, but make the size of the dots proportional
to the birth year of the character:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year))
```
Making the size proportional to the birth year (the age would have been more informative) allows
us to see a third dimension. It is also possible to “see” a fourth dimension, the gender for instance,
by changing the colour of the dots:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender))
```
As I promised above, we are now going to learn how to add a regression line to this scatter plot:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender)) +
geom_smooth(aes(height, mass), method = "lm")
```
```
## `geom_smooth()` using formula 'y ~ x'
```
`geom_smooth()` adds a regression line, but only if you specify `method = "lm"` (“lm” stands for
“linear model”). What happens if you remove this option?
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender)) +
geom_smooth(aes(height, mass))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
By default, `geom_smooth()` does a non\-parametric regression called LOESS (locally estimated scatterplot smoothing),
which is more flexible. It is also possible to have one regression line by gender:
```
starwars %>%
filter(!str_detect(name, "Jabba")) %>%
ggplot() +
geom_point(aes(height, mass, size = birth_year, colour = gender)) +
geom_smooth(aes(height, mass, colour = gender))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Because there are only a few observations for feminine characters, and because of the `NA`s, the regression lines are not very informative,
but this was only an example to show you some options of `geom_smooth()`.
Let’s go back to the unemployment line plots. For now, let’s keep the base `{ggplot2}` theme, but
modify it a bit. For example, the legend placement is actually a feature of the theme. This means
that if you want to change where the legend is placed you need to modify this feature from the
theme. This is done with the function `theme()`:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom") +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
What I also like to do is remove the title of the legend, because it is often superfluous:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
The legend title has to be an `element_text` object. `element_text` objects are used with `theme` to
specify how text should be displayed. `element_blank()` draws nothing and assigns no space (not
even blank space). If you want to keep the legend title but change it, you need to use `element_text()`:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom", legend.title = element_text(colour = "red")) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
If you want to change the word “division” to something else, you can do so by providing the `colour` argument
to the `labs()` function:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme(legend.position = "bottom") +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate", colour = "Administrative division") +
geom_line()
```
You could modify every feature of the theme like that, but there are built\-in themes that you can use:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_minimal() +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
For example in the code above, I have used `theme_minimal()` which I like quite a lot. You can also
use themes from the `ggthemes` package, which even contains a Stata theme, if you like it:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_stata() +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
As you can see, `theme_stata()` has the legend on the bottom by default, because this is how the
legend position is defined within the theme. However the legend title is still there. Let’s remove
it:
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_stata() +
theme(legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
`ggthemes` even features an Excel 2003 theme (don’t use it though):
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_excel() +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
You can create your own theme by using a simple theme, such as `theme_minimal()` as a base
and then add your options. We are going to create one theme after we learn how to create our
own functions, in Chapter 7\. Then, we are going to create a package to share this theme with
the world, and we are going to learn how to make packages in Chapter 9\.
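To give you a small taste of what is coming, here is a minimal sketch of such a theme (the name `theme_mine` is made up):
```
# a custom theme built on top of theme_minimal();
# adding theme_mine() to a plot works like any built-in theme
theme_mine <- function() {
  theme_minimal() +
    theme(legend.position = "bottom",
          legend.title = element_blank())
}
```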
### 5\.3\.2 Colour schemes
You can also change colour schemes, by specifying either `scale_colour_*()` or `scale_fill_*()`
functions. `scale_colour_*()` functions are used when you map a variable to the `colour` aesthetic
(points and lines, for example), while `scale_fill_*()` functions are used when you map a variable to
the `fill` aesthetic (the inside of bars, for example). A colour scheme I like is the
[Highcharts](https://www.highcharts.com/) colour scheme.
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_minimal() +
scale_colour_hc() +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
An example with a barplot:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value)) +
theme_minimal() +
scale_fill_hc()
```
It is also possible to define and use your own palette.
To use your own colours you can use `scale_colour_manual()` and `scale_fill_manual()` and specify
the hexadecimal codes of the colours you want to use.
```
unemp_lux_data %>%
filter(division %in% c("Luxembourg", "Esch-sur-Alzette", "Wiltz")) %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division, colour = division)) +
theme_minimal() +
scale_colour_manual(values = c("#FF336C", "#334BFF", "#2CAE00")) +
theme(legend.position = "bottom", legend.title = element_blank()) +
labs(title = "Unemployment in Luxembourg, Esch/Alzette and Wiltz", x = "Year", y = "Rate") +
geom_line()
```
To get the hexadecimal codes of colours you can use [this online
tool](http://htmlcolorcodes.com/color-picker/).
There is also a very nice package, called `colourpicker`, that allows you to
pick colours from within RStudio. You do not even need to load it to use
it, since it comes with an RStudio addin.
For a barplot you would do the same:
```
ggplot(test_data_long) +
facet_wrap(~id) +
geom_bar(aes(variable, value, fill = variable), stat = "identity") +
geom_text(aes(variable, value + 1.5, label = value)) +
theme_minimal() +
theme(legend.position = "bottom", legend.title = element_blank()) +
scale_fill_manual(values = c("#FF336C", "#334BFF", "#2CAE00", "#B3C9C6", "#765234"))
```
For continuous variables, things are a bit different. Let’s first create a plot where we map a continuous
variable to the colour argument of `aes()`:
```
ggplot(diamonds) +
geom_point(aes(carat, price, colour = depth))
```
To change the colour, we need to use `scale_color_gradient()` and specify a value for low values of the variable,
and a value for high values of the variable. For example, using the colours of the theme I made for my blog:
```
ggplot(diamonds) +
geom_point(aes(carat, price, colour = depth)) +
scale_color_gradient(low = "#bec3b8", high = "#ad2c6c")
```
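If you also need to control the colour of a midpoint, there is `scale_color_gradient2()`, which takes `low`, `mid` and `high` arguments; a minimal sketch (the choice of `midpoint = 61` is arbitrary, roughly the centre of `depth`):
```
ggplot(diamonds) +
  geom_point(aes(carat, price, colour = depth)) +
  scale_color_gradient2(low = "#bec3b8", mid = "white",
                        high = "#ad2c6c", midpoint = 61)
```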
5\.4 Saving plots to disk
-------------------------
There are two ways to save plots on disk; one through the *Plots* pane in RStudio and another using the
`ggsave()` function. Using RStudio, navigate to the *Plots* pane and click on *Export*. You can
then choose where to save the plot and various other options.
This is fine if you only generate one or two plots but if you generate a large number of them, it
is less tedious to use the `ggsave()` function:
```
my_plot1 <- ggplot(my_data) +
geom_bar(aes(variable))
ggsave("path/you/want/to/save/the/plot/to/my_plot1.pdf", my_plot1)
```
There are other options that you can specify such as the width and height, resolution, units,
etc…
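For instance, a minimal sketch that sets the size and resolution explicitly (the path is, again, whatever you want):
```
ggsave("path/you/want/to/save/the/plot/to/my_plot1.png", my_plot1,
       width = 20, height = 12, units = "cm", dpi = 300)
```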
5\.5 Exercises
--------------
### Exercise 1
Load the `Bwages` dataset from the `Ecdat` package. Your first task is to create a new variable,
`educ_level`, which is a factor variable that equals:
* “Primary school” if `educ == 1`
* “High school” if `educ == 2`
* “Some university” if `educ == 3`
* “Master’s degree” if `educ == 4`
* “Doctoral degree” if `educ == 5`
Use `case_when()` for this.
Then, plot a scatter plot of wages on experience, by education level. Add a theme that you like,
and remove the title of the legend.
The scatter plot is not very useful, because you cannot make anything out. Instead, use another
geom that shows you a non\-parametric fit with confidence bands.
Chapter 6 Statistical models
============================
In this chapter, we will not learn about all the models out there that you may or may not need.
Instead, I will show you how you can use what you have learned until now and how you can apply these
concepts to modeling. Also, as you read in the beginning of the book, R has many, many packages. So
the model you need is most probably already implemented in some package, and you will very likely
not need to write your own from scratch.
In the first section, I will discuss the terminology used in this book. Then I will discuss
linear regression; showing how linear regression works illustrates very well how other models
work too, without loss of generality. Then I will introduce the concept of hyper\-parameters
with ridge regression. This chapter will then finish with an introduction to cross\-validation as
a way to tune the hyper\-parameters of models that feature them.
6\.1 Terminology
----------------
Before we continue discussing statistical models and model fitting, it is worthwhile to discuss
terminology a little bit. Depending on your background, you might call an explanatory variable a
feature, or the dependent variable the target. These are the same objects. The matrix of features
is usually called a design matrix, and what statisticians call the intercept is what
machine learning engineers call the bias. Referring to the intercept by bias is unfortunate, as bias
also has a very different meaning; bias is also what we call the error in a model that may cause
*biased* estimates. To finish up, the estimated parameters of the model may be called coefficients
or weights. Here again, I don’t like using *weight*, as weight has a very different meaning in
statistics.
So, in the remainder of this chapter, and book, I will use the terminology from the statistical
literature, using dependent and explanatory variables (`y` and `x`), and calling the
estimated parameters coefficients and the intercept… well, the intercept (the \\(\\beta\\)s of the model).
However, I will talk of *training* a model, instead of *estimating* a model.
6\.2 Fitting a model to data
----------------------------
Suppose you have a variable `y` that you wish to explain using a set of other variables `x1`, `x2`,
`x3`, etc. Let’s take a look at the `Housing` dataset from the `Ecdat` package:
```
library(Ecdat)
data(Housing)
```
You can read a description of the dataset by running:
```
?Housing
```
```
Housing package:Ecdat R Documentation
Sales Prices of Houses in the City of Windsor
Description:
a cross-section from 1987
_number of observations_ : 546
_observation_ : goods
_country_ : Canada
Usage:
data(Housing)
Format:
A dataframe containing :
price: sale price of a house
lotsize: the lot size of a property in square feet
bedrooms: number of bedrooms
bathrms: number of full bathrooms
stories: number of stories excluding basement
driveway: does the house has a driveway ?
recroom: does the house has a recreational room ?
fullbase: does the house has a full finished basement ?
gashw: does the house uses gas for hot water heating ?
airco: does the house has central air conditioning ?
garagepl: number of garage places
prefarea: is the house located in the preferred neighbourhood of the city ?
Source:
Anglin, P.M. and R. Gencay (1996) “Semiparametric estimation of
a hedonic price function”, _Journal of Applied Econometrics_,
*11(6)*, 633-648.
References:
Verbeek, Marno (2004) _A Guide to Modern Econometrics_, John Wiley
and Sons, chapter 3.
Journal of Applied Econometrics data archive : <URL:
http://qed.econ.queensu.ca/jae/>.
See Also:
‘Index.Source’, ‘Index.Economics’, ‘Index.Econometrics’,
‘Index.Observations’
```
or by looking for `Housing` in the help pane of RStudio. Usually, you would take a look at the data
before doing any modeling:
```
glimpse(Housing)
```
```
## Rows: 546
## Columns: 12
## $ price <dbl> 42000, 38500, 49500, 60500, 61000, 66000, 66000, 69000, 83800…
## $ lotsize <dbl> 5850, 4000, 3060, 6650, 6360, 4160, 3880, 4160, 4800, 5500, 7…
## $ bedrooms <dbl> 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2, 2, 3, 4, 1, 2, 3…
## $ bathrms <dbl> 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1…
## $ stories <dbl> 2, 1, 1, 2, 1, 1, 2, 3, 1, 4, 1, 1, 2, 1, 1, 1, 2, 3, 1, 1, 2…
## $ driveway <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, no, ye…
## $ recroom <fct> no, no, no, yes, no, yes, no, no, yes, yes, no, no, no, no, n…
## $ fullbase <fct> yes, no, no, no, no, yes, yes, no, yes, no, yes, no, no, no, …
## $ gashw <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ airco <fct> no, no, no, no, no, yes, no, no, no, yes, yes, no, no, no, no…
## $ garagepl <dbl> 1, 0, 0, 0, 0, 0, 2, 0, 0, 1, 3, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1…
## $ prefarea <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
```
Housing prices depend on a set of variables such as the number of bedrooms, the area it is located
and so on. If you believe that housing prices depend linearly on a set of explanatory variables,
you will want to estimate a linear model. To estimate a *linear model*, you will need to use the
built\-in `lm()` function:
```
model1 <- lm(price ~ lotsize + bedrooms, data = Housing)
```
`lm()` takes a formula as an argument, which defines the model you want to estimate. In this case,
I ran the following regression:
\\\[
\\text{price} \= \\beta\_0 \+ \\beta\_1 \* \\text{lotsize} \+ \\beta\_2 \* \\text{bedrooms} \+ \\varepsilon
\\]
where \\(\\beta\_0, \\beta\_1\\) and \\(\\beta\_2\\) are three parameters to estimate. To take a look at the
results, you can use the `summary()` method (not to be confused with `dplyr::summarise()`):
```
summary(model1)
```
```
##
## Call:
## lm(formula = price ~ lotsize + bedrooms, data = Housing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -65665 -12498 -2075 8970 97205
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.613e+03 4.103e+03 1.368 0.172
## lotsize 6.053e+00 4.243e-01 14.265 < 2e-16 ***
## bedrooms 1.057e+04 1.248e+03 8.470 2.31e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 21230 on 543 degrees of freedom
## Multiple R-squared: 0.3703, Adjusted R-squared: 0.3679
## F-statistic: 159.6 on 2 and 543 DF, p-value: < 2.2e-16
```
If you wish to remove the intercept (\\(\\beta\_0\\) in the above equation) from your model, you can
do so with `-1`:
```
model2 <- lm(price ~ -1 + lotsize + bedrooms, data = Housing)
summary(model2)
```
```
##
## Call:
## lm(formula = price ~ -1 + lotsize + bedrooms, data = Housing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -67229 -12342 -1333 9627 95509
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## lotsize 6.283 0.390 16.11 <2e-16 ***
## bedrooms 11968.362 713.194 16.78 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 21250 on 544 degrees of freedom
## Multiple R-squared: 0.916, Adjusted R-squared: 0.9157
## F-statistic: 2965 on 2 and 544 DF, p-value: < 2.2e-16
```
or, if you want to use all the columns inside `Housing`, replace the column names with `.`:
```
model3 <- lm(price ~ ., data = Housing)
summary(model3)
```
```
##
## Call:
## lm(formula = price ~ ., data = Housing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -41389 -9307 -591 7353 74875
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -4038.3504 3409.4713 -1.184 0.236762
## lotsize 3.5463 0.3503 10.124 < 2e-16 ***
## bedrooms 1832.0035 1047.0002 1.750 0.080733 .
## bathrms 14335.5585 1489.9209 9.622 < 2e-16 ***
## stories 6556.9457 925.2899 7.086 4.37e-12 ***
## drivewayyes 6687.7789 2045.2458 3.270 0.001145 **
## recroomyes 4511.2838 1899.9577 2.374 0.017929 *
## fullbaseyes 5452.3855 1588.0239 3.433 0.000642 ***
## gashwyes 12831.4063 3217.5971 3.988 7.60e-05 ***
## aircoyes 12632.8904 1555.0211 8.124 3.15e-15 ***
## garagepl 4244.8290 840.5442 5.050 6.07e-07 ***
## prefareayes 9369.5132 1669.0907 5.614 3.19e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15420 on 534 degrees of freedom
## Multiple R-squared: 0.6731, Adjusted R-squared: 0.6664
## F-statistic: 99.97 on 11 and 534 DF, p-value: < 2.2e-16
```
You can access different elements of `model3` with `$`, because the result of `lm()` is a list
(you can check this claim with `typeof(model3)`):
```
print(model3$coefficients)
```
```
## (Intercept) lotsize bedrooms bathrms stories drivewayyes
## -4038.350425 3.546303 1832.003466 14335.558468 6556.945711 6687.778890
## recroomyes fullbaseyes gashwyes aircoyes garagepl prefareayes
## 4511.283826 5452.385539 12831.406266 12632.890405 4244.829004 9369.513239
```
but I prefer to use the `{broom}` package, and more specifically the `tidy()` function, which
converts `model3` into a neat `data.frame`:
```
results3 <- broom::tidy(model3)
glimpse(results3)
```
```
## Rows: 12
## Columns: 5
## $ term <chr> "(Intercept)", "lotsize", "bedrooms", "bathrms", "stories", …
## $ estimate <dbl> -4038.350425, 3.546303, 1832.003466, 14335.558468, 6556.9457…
## $ std.error <dbl> 3409.4713, 0.3503, 1047.0002, 1489.9209, 925.2899, 2045.2458…
## $ statistic <dbl> -1.184451, 10.123618, 1.749764, 9.621691, 7.086369, 3.269914…
## $ p.value <dbl> 2.367616e-01, 3.732442e-22, 8.073341e-02, 2.570369e-20, 4.37…
```
I explicitly write `broom::tidy()` because `tidy()` is a popular function name. For instance,
it is also a function from the `{yardstick}` package, which does not do the same thing at all. Since
I will also be using `{yardstick}`, I prefer to explicitly write `broom::tidy()` to avoid conflicts.
Using `broom::tidy()` is useful, because you can then work on the results easily, for example if
you wish to only keep results that are significant at the 5% level:
```
results3 %>%
filter(p.value < 0.05)
```
```
## # A tibble: 10 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 lotsize 3.55 0.350 10.1 3.73e-22
## 2 bathrms 14336. 1490. 9.62 2.57e-20
## 3 stories 6557. 925. 7.09 4.37e-12
## 4 drivewayyes 6688. 2045. 3.27 1.15e- 3
## 5 recroomyes 4511. 1900. 2.37 1.79e- 2
## 6 fullbaseyes 5452. 1588. 3.43 6.42e- 4
## 7 gashwyes 12831. 3218. 3.99 7.60e- 5
## 8 aircoyes 12633. 1555. 8.12 3.15e-15
## 9 garagepl 4245. 841. 5.05 6.07e- 7
## 10 prefareayes 9370. 1669. 5.61 3.19e- 8
```
You can even add new columns, such as the confidence intervals:
```
results3 <- broom::tidy(model3, conf.int = TRUE, conf.level = 0.95)
print(results3)
```
```
## # A tibble: 12 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -4038. 3409. -1.18 2.37e- 1 -10736. 2659.
## 2 lotsize 3.55 0.350 10.1 3.73e-22 2.86 4.23
## 3 bedrooms 1832. 1047. 1.75 8.07e- 2 -225. 3889.
## 4 bathrms 14336. 1490. 9.62 2.57e-20 11409. 17262.
## 5 stories 6557. 925. 7.09 4.37e-12 4739. 8375.
## 6 drivewayyes 6688. 2045. 3.27 1.15e- 3 2670. 10705.
## 7 recroomyes 4511. 1900. 2.37 1.79e- 2 779. 8244.
## 8 fullbaseyes 5452. 1588. 3.43 6.42e- 4 2333. 8572.
## 9 gashwyes 12831. 3218. 3.99 7.60e- 5 6511. 19152.
## 10 aircoyes 12633. 1555. 8.12 3.15e-15 9578. 15688.
## 11 garagepl 4245. 841. 5.05 6.07e- 7 2594. 5896.
## 12 prefareayes 9370. 1669. 5.61 3.19e- 8 6091. 12648.
```
Going back to model estimation, you can of course use `lm()` in a pipe workflow:
```
Housing %>%
select(-driveway, -stories) %>%
lm(price ~ ., data = .) %>%
broom::tidy()
```
```
## # A tibble: 10 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3025. 3263. 0.927 3.54e- 1
## 2 lotsize 3.67 0.363 10.1 4.52e-22
## 3 bedrooms 4140. 1036. 3.99 7.38e- 5
## 4 bathrms 16443. 1546. 10.6 4.29e-24
## 5 recroomyes 5660. 2010. 2.82 5.05e- 3
## 6 fullbaseyes 2241. 1618. 1.38 1.67e- 1
## 7 gashwyes 13568. 3411. 3.98 7.93e- 5
## 8 aircoyes 15578. 1597. 9.75 8.53e-21
## 9 garagepl 4232. 883. 4.79 2.12e- 6
## 10 prefareayes 10729. 1753. 6.12 1.81e- 9
```
The first `.` in the `lm()` function is used to indicate that we wish to use all the data from `Housing`
(minus `driveway` and `stories`, which I removed using `select()` and the `-` sign), and the second `.` is
used to indicate where the result of the two `dplyr` instructions that precede it should be placed.
You have to specify this, because by default, when using `%>%` the left hand side argument gets
passed as the first argument of the function on the right hand side.
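A minimal sketch of this default behaviour:
```
# the two calls below are equivalent: with %>%, the left-hand side
# becomes the first argument of the function on the right-hand side
mean(c(1, 2, 3))
c(1, 2, 3) %>% mean()
```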
Since version 4\.2, R now also natively includes a placeholder, `_`:
```
Housing |>
select(-driveway, -stories) |>
lm(price ~ ., data = _) |>
broom::tidy()
```
```
## # A tibble: 10 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3025. 3263. 0.927 3.54e- 1
## 2 lotsize 3.67 0.363 10.1 4.52e-22
## 3 bedrooms 4140. 1036. 3.99 7.38e- 5
## 4 bathrms 16443. 1546. 10.6 4.29e-24
## 5 recroomyes 5660. 2010. 2.82 5.05e- 3
## 6 fullbaseyes 2241. 1618. 1.38 1.67e- 1
## 7 gashwyes 13568. 3411. 3.98 7.93e- 5
## 8 aircoyes 15578. 1597. 9.75 8.53e-21
## 9 garagepl 4232. 883. 4.79 2.12e- 6
## 10 prefareayes 10729. 1753. 6.12 1.81e- 9
```
For the example above, I’ve also switched from `%>%` to `|>`, or else I can’t use the `_` placeholder.
The advantage of the `_` placeholder is that it disambiguates `.`. So here, the `.` is a placeholder for
all the variables in the dataset, and `_` is a placeholder for the dataset.
6\.3 Diagnostics
----------------
Diagnostics are useful metrics to assess model fit. You can read some of these diagnostics, such as
the \\(R^2\\) at the bottom of the summary (when running `summary(my_model)`), but if you want to do
more than simply reading these diagnostics from RStudio, you can put those in a `data.frame` too,
using `broom::glance()`:
```
glance(model3)
```
```
## # A tibble: 1 × 12
## r.squared adj.r.…¹ sigma stati…² p.value df logLik AIC BIC devia…³
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.673 0.666 15423. 100. 6.18e-122 11 -6034. 12094. 12150. 1.27e11
## # … with 2 more variables: df.residual <int>, nobs <int>, and abbreviated
## # variable names ¹adj.r.squared, ²statistic, ³deviance
```
You can also plot the usual diagnostics plots using `ggfortify::autoplot()` which uses the
`{ggplot2}` package under the hood:
```
library(ggfortify)
autoplot(model3, which = 1:6) + theme_minimal()
```
`which = 1:6` is an additional option that shows you all six diagnostic plots. If you omit this
option, you will only get 4 of them.
You can also get the residuals of the regression in two ways; either you grab them directly from
the model fit:
```
resi3 <- residuals(model3)
```
or you can augment the original data with a residuals column, using `broom::augment()`:
```
housing_aug <- augment(model3)
```
Let’s take a look at `housing_aug`:
```
glimpse(housing_aug)
```
```
## Rows: 546
## Columns: 18
## $ price <dbl> 42000, 38500, 49500, 60500, 61000, 66000, 66000, 69000, 838…
## $ lotsize <dbl> 5850, 4000, 3060, 6650, 6360, 4160, 3880, 4160, 4800, 5500,…
## $ bedrooms <dbl> 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2, 2, 3, 4, 1, 2,…
## $ bathrms <dbl> 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2,…
## $ stories <dbl> 2, 1, 1, 2, 1, 1, 2, 3, 1, 4, 1, 1, 2, 1, 1, 1, 2, 3, 1, 1,…
## $ driveway <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, no, …
## $ recroom <fct> no, no, no, yes, no, yes, no, no, yes, yes, no, no, no, no,…
## $ fullbase <fct> yes, no, no, no, no, yes, yes, no, yes, no, yes, no, no, no…
## $ gashw <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no,…
## $ airco <fct> no, no, no, no, no, yes, no, no, no, yes, yes, no, no, no, …
## $ garagepl <dbl> 1, 0, 0, 0, 0, 0, 2, 0, 0, 1, 3, 0, 0, 0, 0, 0, 1, 0, 0, 1,…
## $ prefarea <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no,…
## $ .fitted <dbl> 66037.98, 41391.15, 39889.63, 63689.09, 49760.43, 66387.12,…
## $ .resid <dbl> -24037.9757, -2891.1515, 9610.3699, -3189.0873, 11239.5735,…
## $ .hat <dbl> 0.013477335, 0.008316321, 0.009893730, 0.021510891, 0.01033…
## $ .sigma <dbl> 15402.01, 15437.14, 15431.98, 15437.02, 15429.89, 15437.64,…
## $ .cooksd <dbl> 2.803214e-03, 2.476265e-05, 3.265481e-04, 8.004787e-05, 4.6…
## $ .std.resid <dbl> -1.56917096, -0.18823924, 0.62621736, -0.20903274, 0.732539…
```
A few columns have been added to the original data, among them `.resid` which contains the
residuals. Let’s plot them:
```
ggplot(housing_aug) +
geom_density(aes(.resid))
```
Fitted values are also added to the original data, under the variable `.fitted`. It would also have
been possible to get the fitted values with:
```
fit3 <- fitted(model3)
```
but I prefer using `augment()`, because the columns get merged to the original data, which then
makes it easier to find specific individuals. For example, you might want to know for which housing
units the model underestimates the price:
```
total_pos <- housing_aug %>%
filter(.resid > 0) %>%
summarise(total = n()) %>%
pull(total)
```
we find 261 individuals where the residuals are positive. It is also easier to
extract outliers:
```
housing_aug %>%
mutate(prank = cume_dist(.cooksd)) %>%
filter(prank > 0.99) %>%
glimpse()
```
```
## Rows: 6
## Columns: 19
## $ price <dbl> 163000, 125000, 132000, 175000, 190000, 174500
## $ lotsize <dbl> 7420, 4320, 3500, 9960, 7420, 7500
## $ bedrooms <dbl> 4, 3, 4, 3, 4, 4
## $ bathrms <dbl> 1, 1, 2, 2, 2, 2
## $ stories <dbl> 2, 2, 2, 2, 3, 2
## $ driveway <fct> yes, yes, yes, yes, yes, yes
## $ recroom <fct> yes, no, no, no, no, no
## $ fullbase <fct> yes, yes, no, yes, no, yes
## $ gashw <fct> no, yes, yes, no, no, no
## $ airco <fct> yes, no, no, no, yes, yes
## $ garagepl <dbl> 2, 2, 2, 2, 2, 3
## $ prefarea <fct> no, no, no, yes, yes, yes
## $ .fitted <dbl> 94826.68, 77688.37, 85495.58, 108563.18, 115125.03, 118549.…
## $ .resid <dbl> 68173.32, 47311.63, 46504.42, 66436.82, 74874.97, 55951.00
## $ .hat <dbl> 0.02671105, 0.05303793, 0.05282929, 0.02819317, 0.02008141,…
## $ .sigma <dbl> 15144.70, 15293.34, 15298.27, 15159.14, 15085.99, 15240.66
## $ .cooksd <dbl> 0.04590995, 0.04637969, 0.04461464, 0.04616068, 0.04107317,…
## $ .std.resid <dbl> 4.480428, 3.152300, 3.098176, 4.369631, 4.904193, 3.679815
## $ prank <dbl> 0.9963370, 1.0000000, 0.9945055, 0.9981685, 0.9926740, 0.99…
```
`prank` is a variable I created with `cume_dist()` which is a `dplyr` function that returns the
proportion of all values less than or equal to the current rank. For example:
```
example <- c(5, 4.6, 2, 1, 0.8, 0, -1)
cume_dist(example)
```
```
## [1] 1.0000000 0.8571429 0.7142857 0.5714286 0.4285714 0.2857143 0.1428571
```
by filtering `prank > 0.99` we get the top 1% of outliers according to Cook’s distance.
6\.4 Interpreting models
------------------------
Model interpretation is essential in the social sciences, but it is also getting very important
in machine learning. As usual, the terminology is different; in machine learning, we speak about
explainability. There is a very important distinction that one has to understand when it comes to
interpretability/explainability: the one between *classical, parametric* models and *black\-box* models. This
is very well explained in Breiman ([2001](#ref-breiman2001)), an absolute must read (link to paper, in PDF format:
[click here](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)). The gist of the paper
is that there are two cultures of statistical modeling; one culture relies on modeling the data
generating process, for instance, by considering that a variable y (the dependent variable, or target)
is a linear combination of input variables x (the explanatory variables, or features) plus some noise. The
other culture uses complex algorithms (random forests, neural networks)
to model the relationship between y and x. The author argues that most statisticians have relied
for too long on modeling data generating processes and do not use all the potential offered by
these complex algorithms. I think that a lot of things have changed since then, and that nowadays
any practitioner that uses data is open to use any type of model or algorithm, as long as it does
the job. However, the paper is very interesting, and the discussion on trade\-off between
simplicity of the model and interpretability/explainability is still relevant today.
In this section, I will explain how one can go about interpreting or explaining models from these
two cultures.
Also, it is important to note here that the discussion that will follow will be heavily influenced
by my econometrics background. I will focus on marginal effects as a way to interpret parametric
models (models from the first culture described above), but depending on the field, practitioners
might use something else (for instance by computing odds ratios in a logistic regression).
I will start by interpretability of *classical* statistical models.
### 6\.4\.1 Marginal effects
If one wants to know the effect of variable `x` on the dependent variable `y`,
so\-called marginal effects have to be computed. This is easily done in R with the `{marginaleffects}` package.
Formally, marginal effects are the partial derivative of the regression equation with respect to the variable
we want to look at.
```
library(marginaleffects)
effects_model3 <- marginaleffects(model3)
summary(effects_model3)
```
```
## Term Contrast Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
## 1 lotsize dY/dX 3.546 0.3503 10.124 < 2.22e-16 2.86 4.233
## 2 bedrooms dY/dX 1832.003 1047.0056 1.750 0.08016056 -220.09 3884.097
## 3 bathrms dY/dX 14335.558 1489.9557 9.621 < 2.22e-16 11415.30 17255.818
## 4 stories dY/dX 6556.946 925.2943 7.086 1.3771e-12 4743.40 8370.489
## 5 driveway yes - no 6687.779 2045.2459 3.270 0.00107580 2679.17 10696.387
## 6 recroom yes - no 4511.284 1899.9577 2.374 0.01757689 787.44 8235.132
## 7 fullbase yes - no 5452.386 1588.0239 3.433 0.00059597 2339.92 8564.855
## 8 gashw yes - no 12831.406 3217.5970 3.988 6.6665e-05 6525.03 19137.781
## 9 airco yes - no 12632.890 1555.0211 8.124 4.5131e-16 9585.11 15680.676
## 10 garagepl dY/dX 4244.829 840.5965 5.050 4.4231e-07 2597.29 5892.368
## 11 prefarea yes - no 9369.513 1669.0906 5.614 1.9822e-08 6098.16 12640.871
##
## Model type: lm
## Prediction type: response
```
Let’s go through this: `summary(effects_model3)` shows the average marginal effects for each of the explanatory
variables that were used in `model3`. The way to interpret them is as follows:
*everything else held constant (often you’ll read the Latin ceteris paribus for this), a unit increase in
`lotsize` increases the `price` by 3\.546 units, on average.*
The *everything held constant* part is crucial; with marginal effects, you’re looking at just the effect of
one variable at a time. For discrete variables, like `driveway`, this is simpler: imagine two houses that are
exactly the same, except that one has a driveway and the other doesn’t. The one with the driveway
is 6687 units more expensive, *on average*.
Now it turns out that in the case of a linear model, the average marginal effects are exactly equal to the
coefficients. Just compare `summary(model3)` to `effects_model3` to see
(and remember, I told you that marginal effects were the partial derivative of the regression equation with
respect to the variable of interest. So the derivative of \\(\\alpha\*X\_1 \+ ....\\) with respect to \\(X\_1\\) will
be \\(\\alpha\\)). But in the case of a more complex, non\-linear model, this is not so obvious. This is
where `{marginaleffects}` will make your life much easier.
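As a quick sanity check (a sketch, using the `model3` and `effects_model3` objects created above), you can put the two side by side:

```
# Coefficients of the linear model, dropping the intercept (which has
# no marginal effect counterpart)
round(coef(model3)[-1], 3)

# The "Effect" column below should match these coefficients
# up to numerical precision
summary(effects_model3)
```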
It is also possible to plot the results:
```
plot(effects_model3)
```
`effects_model3` is a data frame containing the effects for each house in the data set. For example,
let’s take a look at the first house:
```
effects_model3 %>%
filter(rowid == 1)
```
```
## rowid type term contrast dydx std.error statistic
## 1 1 response lotsize dY/dX 3.546303 0.3502195 10.125944
## 2 1 response bedrooms dY/dX 1832.003466 1046.1608842 1.751168
## 3 1 response bathrms dY/dX 14335.558468 1490.4827945 9.618064
## 4 1 response stories dY/dX 6556.945711 925.4764870 7.084940
## 5 1 response driveway yes - no 6687.778890 2045.2460319 3.269914
## 6 1 response recroom yes - no 4511.283826 1899.9577182 2.374413
## 7 1 response fullbase yes - no 5452.385539 1588.0237538 3.433441
## 8 1 response gashw yes - no 12831.406266 3217.5971931 3.987885
## 9 1 response airco yes - no 12632.890405 1555.0207045 8.123937
## 10 1 response garagepl dY/dX 4244.829004 840.8930857 5.048001
## 11 1 response prefarea yes - no 9369.513239 1669.0904968 5.613544
## p.value conf.low conf.high predicted predicted_hi predicted_lo
## 1 4.238689e-24 2.859885 4.232721 66037.98 66043.14 66037.98
## 2 7.991698e-02 -218.434189 3882.441121 66037.98 66038.89 66037.98
## 3 6.708200e-22 11414.265872 17256.851065 66037.98 66042.28 66037.98
## 4 1.391042e-12 4743.045128 8370.846295 66037.98 66039.94 66037.98
## 5 1.075801e-03 2679.170328 10696.387452 66037.98 66037.98 59350.20
## 6 1.757689e-02 787.435126 8235.132526 66037.98 70549.26 66037.98
## 7 5.959723e-04 2339.916175 8564.854903 66037.98 66037.98 60585.59
## 8 6.666508e-05 6525.031651 19137.780882 66037.98 78869.38 66037.98
## 9 4.512997e-16 9585.105829 15680.674981 66037.98 78670.87 66037.98
## 10 4.464572e-07 2596.708842 5892.949167 66037.98 66039.25 66037.98
## 11 1.982240e-08 6098.155978 12640.870499 66037.98 75407.49 66037.98
## price lotsize bedrooms bathrms stories driveway recroom fullbase gashw airco
## 1 42000 5850 3 1 2 yes no yes no no
## 2 42000 5850 3 1 2 yes no yes no no
## 3 42000 5850 3 1 2 yes no yes no no
## 4 42000 5850 3 1 2 yes no yes no no
## 5 42000 5850 3 1 2 yes no yes no no
## 6 42000 5850 3 1 2 yes no yes no no
## 7 42000 5850 3 1 2 yes no yes no no
## 8 42000 5850 3 1 2 yes no yes no no
## 9 42000 5850 3 1 2 yes no yes no no
## 10 42000 5850 3 1 2 yes no yes no no
## 11 42000 5850 3 1 2 yes no yes no no
## garagepl prefarea eps
## 1 1 no 1.4550
## 2 1 no 0.0005
## 3 1 no 0.0003
## 4 1 no 0.0003
## 5 1 no NA
## 6 1 no NA
## 7 1 no NA
## 8 1 no NA
## 9 1 no NA
## 10 1 no 0.0003
## 11 1 no NA
```
`rowid` is a column that identifies the houses in the original data set, so `rowid == 1` keeps only
the first house. This shows you the marginal effects (column `dydx`) computed for this house; but
remember, since we’re dealing with a linear model, the values of the marginal effects are constant across houses.
If you don’t see the point of this discussion, don’t fret, the next example should make things
clearer.
Let’s estimate a logit model and compute the marginal effects. You might know logit models as
*logistic regression*. Logit models can be estimated using the `glm()` function, which stands for
generalized linear models.
As an example, we are going to use the `Participation` data, also from the `{Ecdat}` package:
```
data(Participation)
```
```
?Participation
```
```
Participation package:Ecdat R Documentation
Labor Force Participation
Description:
a cross-section
_number of observations_ : 872
_observation_ : individuals
_country_ : Switzerland
Usage:
data(Participation)
Format:
A dataframe containing :
lfp labour force participation ?
lnnlinc the log of nonlabour income
age age in years divided by 10
educ years of formal education
nyc the number of young children (younger than 7)
noc number of older children
foreign foreigner ?
Source:
Gerfin, Michael (1996) “Parametric and semiparametric estimation
of the binary response”, _Journal of Applied Econometrics_,
*11(3)*, 321-340.
References:
Davidson, R. and James G. MacKinnon (2004) _Econometric Theory
and Methods_, New York, Oxford University Press, <URL:
http://www.econ.queensu.ca/ETM/>, chapter 11.
Journal of Applied Econometrics data archive : <URL:
http://qed.econ.queensu.ca/jae/>.
See Also:
‘Index.Source’, ‘Index.Economics’, ‘Index.Econometrics’,
‘Index.Observations’
```
The variable of interest is `lfp`: whether the individual participates in the labour force or not.
To know which variables are relevant in the decision to participate in the labour force, one could
train a logit model, using `glm()`:
```
logit_participation <- glm(lfp ~ ., data = Participation, family = "binomial")
broom::tidy(logit_participation)
```
```
## # A tibble: 7 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 10.4 2.17 4.79 1.69e- 6
## 2 lnnlinc -0.815 0.206 -3.97 7.31e- 5
## 3 age -0.510 0.0905 -5.64 1.72e- 8
## 4 educ 0.0317 0.0290 1.09 2.75e- 1
## 5 nyc -1.33 0.180 -7.39 1.51e-13
## 6 noc -0.0220 0.0738 -0.298 7.66e- 1
## 7 foreignyes 1.31 0.200 6.56 5.38e-11
```
From the results above, one can only interpret the sign of the coefficients. To know how much a
variable influences the labour force participation, one has to use `marginaleffects()`:
```
effects_logit_participation <- marginaleffects(logit_participation)
summary(effects_logit_participation)
```
```
## Term Contrast Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
## 1 lnnlinc dY/dX -0.169940 0.04151 -4.0939 4.2416e-05 -0.251300 -0.08858
## 2 age dY/dX -0.106407 0.01759 -6.0492 1.4560e-09 -0.140884 -0.07193
## 3 educ dY/dX 0.006616 0.00604 1.0954 0.27335 -0.005222 0.01845
## 4 nyc dY/dX -0.277463 0.03325 -8.3436 < 2.22e-16 -0.342642 -0.21229
## 5 noc dY/dX -0.004584 0.01538 -0.2981 0.76563 -0.034725 0.02556
## 6 foreign yes - no 0.283377 0.03984 7.1129 1.1361e-12 0.205292 0.36146
##
## Model type: glm
## Prediction type: response
```
As you can see, the average marginal effects here are not equal to the estimated coefficients of the
model. Let’s take a look at the first row of the data:
```
Participation[1, ]
```
```
## lfp lnnlinc age educ nyc noc foreign
## 1 no 10.7875 3 8 1 1 no
```
and let’s now look at `rowid == 1` in the marginal effects data frame:
```
effects_logit_participation %>%
filter(rowid == 1)
```
```
## rowid type term contrast dydx std.error statistic
## 1 1 response lnnlinc dY/dX -0.156661756 0.038522800 -4.0667282
## 2 1 response age dY/dX -0.098097148 0.020123709 -4.8747052
## 3 1 response educ dY/dX 0.006099266 0.005367036 1.1364310
## 4 1 response nyc dY/dX -0.255784406 0.029367783 -8.7096942
## 5 1 response noc dY/dX -0.004226368 0.014167283 -0.2983189
## 6 1 response foreign yes - no 0.305630005 0.045174828 6.7654935
## p.value conf.low conf.high predicted predicted_hi predicted_lo lfp
## 1 4.767780e-05 -0.232165056 -0.08115846 0.2596523 0.2595710 0.2596523 no
## 2 1.089711e-06 -0.137538892 -0.05865540 0.2596523 0.2596111 0.2596523 no
## 3 2.557762e-01 -0.004419931 0.01661846 0.2596523 0.2596645 0.2596523 no
## 4 3.046958e-18 -0.313344203 -0.19822461 0.2596523 0.2595755 0.2596523 no
## 5 7.654598e-01 -0.031993732 0.02354100 0.2596523 0.2596497 0.2596523 no
## 6 1.328556e-11 0.217088969 0.39417104 0.2596523 0.5652823 0.2596523 no
## lnnlinc age educ nyc noc foreign eps
## 1 10.7875 3 8 1 1 no 0.0005188749
## 2 10.7875 3 8 1 1 no 0.0004200000
## 3 10.7875 3 8 1 1 no 0.0020000000
## 4 10.7875 3 8 1 1 no 0.0003000000
## 5 10.7875 3 8 1 1 no 0.0006000000
## 6 10.7875 3 8 1 1 no NA
```
Let’s focus on the first row, where `term` is `lnnlinc`. What we see here is the effect of an infinitesimal
increase in the variable `lnnlinc` on the participation, for an individual who has the following
characteristics: `lnnlinc = 10.7875`, `age = 3`, `educ = 8`, `nyc = 1`, `noc = 1` and `foreign = no`, which
are the characteristics of this first individual in our data.
So let’s look at the value of `dydx` for every individual:
```
dydx_lnnlinc <- effects_logit_participation %>%
filter(term == "lnnlinc")
head(dydx_lnnlinc)
```
```
## rowid type term contrast dydx std.error statistic p.value
## 1 1 response lnnlinc dY/dX -0.15666176 0.03852280 -4.066728 4.767780e-05
## 2 2 response lnnlinc dY/dX -0.20013939 0.05124543 -3.905507 9.402813e-05
## 3 3 response lnnlinc dY/dX -0.18493932 0.04319729 -4.281271 1.858287e-05
## 4 4 response lnnlinc dY/dX -0.05376281 0.01586468 -3.388837 7.018964e-04
## 5 5 response lnnlinc dY/dX -0.18709356 0.04502973 -4.154890 3.254439e-05
## 6 6 response lnnlinc dY/dX -0.19586185 0.04782143 -4.095692 4.209096e-05
## conf.low conf.high predicted predicted_hi predicted_lo lfp lnnlinc age
## 1 -0.23216506 -0.08115846 0.25965227 0.25957098 0.25965227 no 10.78750 3.0
## 2 -0.30057859 -0.09970018 0.43340025 0.43329640 0.43340025 yes 10.52425 4.5
## 3 -0.26960445 -0.10027418 0.34808777 0.34799181 0.34808777 no 10.96858 4.6
## 4 -0.08485701 -0.02266862 0.07101902 0.07099113 0.07101902 no 11.10500 3.1
## 5 -0.27535020 -0.09883692 0.35704926 0.35695218 0.35704926 no 11.10847 4.4
## 6 -0.28959014 -0.10213356 0.40160949 0.40150786 0.40160949 yes 11.02825 4.2
## educ nyc noc foreign eps
## 1 8 1 1 no 0.0005188749
## 2 8 0 1 no 0.0005188749
## 3 9 0 0 no 0.0005188749
## 4 11 2 0 no 0.0005188749
## 5 12 0 2 no 0.0005188749
## 6 12 0 1 no 0.0005188749
```
`dydx_lnnlinc` is a data frame with all the individual marginal effects for the variable `lnnlinc`.
What if we compute the mean of this column?
```
dydx_lnnlinc %>%
summarise(mean(dydx))
```
```
## mean(dydx)
## 1 -0.1699405
```
Let’s compare this to the average marginal effects:
```
summary(effects_logit_participation)
```
```
## Term Contrast Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
## 1 lnnlinc dY/dX -0.169940 0.04151 -4.0939 4.2416e-05 -0.251300 -0.08858
## 2 age dY/dX -0.106407 0.01759 -6.0492 1.4560e-09 -0.140884 -0.07193
## 3 educ dY/dX 0.006616 0.00604 1.0954 0.27335 -0.005222 0.01845
## 4 nyc dY/dX -0.277463 0.03325 -8.3436 < 2.22e-16 -0.342642 -0.21229
## 5 noc dY/dX -0.004584 0.01538 -0.2981 0.76563 -0.034725 0.02556
## 6 foreign yes - no 0.283377 0.03984 7.1129 1.1361e-12 0.205292 0.36146
##
## Model type: glm
## Prediction type: response
```
Yep, it’s the same! This is why we speak of *average marginal effects*. Now that we know why
these are called average marginal effects, let’s go back to interpreting them. This time,
let’s plot them, because why not:
```
plot(effects_logit_participation)
```
So an infinitesimal increase, in, say, non\-labour income (`lnnlinc`) of 0\.001 is associated with a
decrease of the probability of labour force participation of about 0\.017 percentage points
(0\.001 times the average marginal effect of about \-17 percentage points per unit of `lnnlinc`).
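To see where such numbers come from, here is a short sketch using the `logit_participation` object from above: for a logit model, the marginal effect of a continuous variable \\(x\_j\\) for one observation is \\(\\beta\_j \\times p \\times (1 \- p)\\), where \\(p\\) is that observation’s predicted probability, and averaging over observations gives back the average marginal effect:

```
# Predicted participation probabilities for each individual
p_hat <- predict(logit_participation, type = "response")

# Individual marginal effects of lnnlinc, then averaged; this should be
# close to the -0.1699 reported by summary(effects_logit_participation)
mean(coef(logit_participation)["lnnlinc"] * p_hat * (1 - p_hat))
```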
This is just scratching the surface of interpreting these kinds of models. There are many more
types of effects that you can compute and look at. I highly recommend you read the documentation
of `{marginaleffects}` which you can find
[here](https://vincentarelbundock.github.io/marginaleffects/index.html). The author
of the package, Vincent Arel\-Bundock writes a lot of very helpful documentation for his packages,
so if model interpretation is important for your job, definitely take a look.
### 6\.4\.2 Explainability of *black\-box* models
Just read Christoph Molnar’s
[Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/).
Seriously, I cannot add anything meaningful to it. His book is brilliant.
6\.5 Comparing models
---------------------
Consider this section more as an illustration of what is possible with the knowledge you acquired
at this point. Imagine that the task at hand is to compare two models. We would like to select
the one which has the best fit to the data.
Let’s first estimate another model on the same data; prices are only positive, so a linear regression
might not be the best model, because the model could predict negative prices. Let’s look at the
distribution of prices:
```
ggplot(Housing) +
geom_density(aes(price))
```
It looks like modeling the log of `price` might provide a better fit:
```
model_log <- lm(log(price) ~ ., data = Housing)
result_log <- broom::tidy(model_log)
print(result_log)
```
```
## # A tibble: 12 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 10.0 0.0472 212. 0
## 2 lotsize 0.0000506 0.00000485 10.4 2.91e-23
## 3 bedrooms 0.0340 0.0145 2.34 1.94e- 2
## 4 bathrms 0.168 0.0206 8.13 3.10e-15
## 5 stories 0.0923 0.0128 7.20 2.10e-12
## 6 drivewayyes 0.131 0.0283 4.61 5.04e- 6
## 7 recroomyes 0.0735 0.0263 2.79 5.42e- 3
## 8 fullbaseyes 0.0994 0.0220 4.52 7.72e- 6
## 9 gashwyes 0.178 0.0446 4.00 7.22e- 5
## 10 aircoyes 0.178 0.0215 8.26 1.14e-15
## 11 garagepl 0.0508 0.0116 4.36 1.58e- 5
## 12 prefareayes 0.127 0.0231 5.50 6.02e- 8
```
Let’s take a look at the diagnostics:
```
glance(model_log)
```
```
## # A tibble: 1 × 12
## r.squared adj.r.squ…¹ sigma stati…² p.value df logLik AIC BIC devia…³
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.677 0.670 0.214 102. 3.67e-123 11 73.9 -122. -65.8 24.4
## # … with 2 more variables: df.residual <int>, nobs <int>, and abbreviated
## # variable names ¹adj.r.squared, ²statistic, ³deviance
```
Let’s compare these to the ones from the previous model:
```
diag_lm <- glance(model3)
diag_lm <- diag_lm %>%
mutate(model = "lin-lin model")
diag_log <- glance(model_log)
diag_log <- diag_log %>%
mutate(model = "log-lin model")
diagnostics_models <- full_join(diag_lm, diag_log) %>%
select(model, everything()) # put the `model` column first
```
```
## Joining, by = c("r.squared", "adj.r.squared", "sigma", "statistic", "p.value", "df", "logLik", "AIC", "BIC",
## "deviance", "df.residual", "nobs", "model")
```
```
print(diagnostics_models)
```
```
## # A tibble: 2 × 13
## model r.squ…¹ adj.r…² sigma stati…³ p.value df logLik AIC BIC
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 lin-li… 0.673 0.666 1.54e+4 100. 6.18e-122 11 -6034. 12094. 12150.
## 2 log-li… 0.677 0.670 2.14e-1 102. 3.67e-123 11 73.9 -122. -65.8
## # … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>, and
## # abbreviated variable names ¹r.squared, ²adj.r.squared, ³statistic
```
I saved the diagnostics in two different `data.frame` objects using the `glance()` function and added a
`model` column to indicate which model the diagnostics come from. Then I merged both datasets using
`full_join()`, a `{dplyr}` function. Using this approach, we can easily build a data frame with the
diagnostics of several models and compare them. The model using the logarithm of prices has lower
AIC and BIC (and thus a higher likelihood), so if you’re worried about selecting the model with the better
fit to the data, you’d go for this model.
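As a side note, the same idea scales to any number of models. Here is a sketch using `{purrr}`, with a hypothetical named list `models`:

```
models <- list("lin-lin model" = model3, "log-lin model" = model_log)

# glance() each model, tag each row with the model's name,
# and bind everything into one data frame
purrr::imap_dfr(models, ~mutate(broom::glance(.x), model = .y)) %>%
  select(model, everything())
```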
6\.6 Using a model for prediction
---------------------------------
Once you estimated a model, you might want to use it for prediction. This is easily done using the
`predict()` function that works with most models. Prediction is also useful as a way to test the
accuracy of your model: split your data into a training set (used for training) and a testing
set (used for the pseudo\-prediction) and see if your model overfits the data. We are going to see
how to do that in a later section; for now, let’s just get acquainted with `predict()` and other
functions. I insist, keep in mind that this section is only to get acquainted with these functions.
We are going to explore prediction, overfitting and tuning of models in a later section.
Let’s go back to the models we trained in the previous section, `model3` and `model_log`. Let’s also
take a subsample of data, which we will be using for prediction:
```
set.seed(1234)
pred_set <- Housing %>%
sample_n(20)
```
In order to always get the same `pred_set`, I set the random seed first. Let’s take a look at the
data:
```
print(pred_set)
```
```
## price lotsize bedrooms bathrms stories driveway recroom fullbase gashw
## 284 45000 6750 2 1 1 yes no no no
## 101 57000 4500 3 2 2 no no yes no
## 400 85000 7231 3 1 2 yes yes yes no
## 98 59900 8250 3 1 1 yes no yes no
## 103 125000 4320 3 1 2 yes no yes yes
## 326 99000 8880 3 2 2 yes no yes no
## 79 55000 3180 2 2 1 yes no yes no
## 270 59000 4632 4 1 2 yes no no no
## 382 112500 6550 3 1 2 yes no yes no
## 184 63900 3510 3 1 2 yes no no no
## 4 60500 6650 3 1 2 yes yes no no
## 212 42000 2700 2 1 1 no no no no
## 195 33000 3180 2 1 1 yes no no no
## 511 70000 4646 3 1 2 yes yes yes no
## 479 88000 5450 4 2 1 yes no yes no
## 510 64000 4040 3 1 2 yes no no no
## 424 62900 2880 3 1 2 yes no no no
## 379 84000 7160 3 1 1 yes no yes no
## 108 58500 3680 3 2 2 yes no no no
## 131 35000 4840 2 1 2 yes no no no
## airco garagepl prefarea
## 284 no 0 no
## 101 yes 0 no
## 400 yes 0 yes
## 98 no 3 no
## 103 no 2 no
## 326 yes 1 no
## 79 no 2 no
## 270 yes 0 no
## 382 yes 0 yes
## 184 no 0 no
## 4 no 0 no
## 212 no 0 no
## 195 no 0 no
## 511 no 2 no
## 479 yes 0 yes
## 510 no 1 no
## 424 no 0 yes
## 379 no 2 yes
## 108 no 0 no
## 131 no 0 no
```
If we wish to use it for prediction, this is easily done with `predict()`:
```
predict(model3, pred_set)
```
```
## 284 101 400 98 103 326 79 270
## 51143.48 77286.31 93204.28 76481.82 77688.37 103751.72 66760.79 66486.26
## 382 184 4 212 195 511 479 510
## 86277.96 48042.41 63689.09 30093.18 38483.18 70524.34 91987.65 54166.78
## 424 379 108 131
## 55177.75 77741.03 62980.84 50926.99
```
This returns a vector of predicted prices. These predictions can then be used to compute, for instance,
the Root Mean Squared Error: the square root of the mean of the squared prediction errors. Let’s do it
within a `tidyverse` pipeline:
```
rmse <- pred_set %>%
  mutate(predictions = predict(model3, .)) %>%
  summarise(sqrt(sum((predictions - price)**2)/n()))
```
This gives the root mean squared error of `model3` on `pred_set`.
I also used the `n()` function which returns the number of observations in a group (or all the
observations, if the data is not grouped). Let’s compare `model3`’s RMSE with the one from
`model_log`:
```
rmse2 <- pred_set %>%
mutate(predictions = exp(predict(model_log, .))) %>%
  summarise(sqrt(sum((predictions - price)**2)/n()))
```
Don’t forget to exponentiate the predictions; remember, you’re dealing with a log\-linear model! You can
now compare `model_log`’s RMSE to `model3`’s to see which one predicts this subsample better. However,
keep in mind that both models were trained on the whole data, and that the prediction quality was then
assessed on a subsample of the very data the models were trained on… so we can’t really say whether
either model’s predictions are very useful.
In a later section we are going to learn how to do cross validation to avoid this issue.
Just as a side note, notice that I had to copy and paste basically the same lines twice to compute
the predictions for both models. That’s not much, but if I wanted to compare 10 models, copy and
paste mistakes could sneak in. Instead, it would have been nice to have a function that
computes the RMSE and then use it on my models, as sketched below. We are going to learn how to write our own
functions and use them just as if they were other built\-in R functions.
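To make the idea concrete, here is a sketch of such a helper; the `log_model` argument is hypothetical, used to exponentiate the predictions of log\-linear models (we will learn how to write functions properly in the next chapter):

```
compute_rmse <- function(model, data, log_model = FALSE) {
  preds <- predict(model, data)
  if (log_model) preds <- exp(preds) # undo the log for log-linear models
  sqrt(mean((preds - data$price)^2))
}

# compute_rmse(model3, pred_set)
# compute_rmse(model_log, pred_set, log_model = TRUE)
```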
6\.7 Beyond linear regression
-----------------------------
R has a lot of other built\-in functions for regression, such as `glm()` (for Generalized Linear
Models) and `nls()` (for Nonlinear Least Squares). There are also functions and additional
packages for time series, panel data, machine learning, Bayesian and nonparametric methods.
Presenting everything here would take too much space, and would be pretty useless as you can find
whatever you need using an internet search engine. What you have learned until now is quite general
and should work on many types of models. To help you out, here is a list of methods and the
recommended packages that you can use:
| Model | Package | Quick example |
| --- | --- | --- |
| Robust Linear Regression | `MASS` | `rlm(y ~ x, data = mydata)` |
| Nonlinear Least Squares | `stats`[2](#fn2) | `nls(y ~ x1 / (1 + x2), data = mydata)`[3](#fn3) |
| Logit | `stats` | `glm(y ~ x, data = mydata, family = "binomial")` |
| Probit | `stats` | `glm(y ~ x, data = mydata, family = binomial(link = "probit"))` |
| K\-Means | `stats` | `kmeans(data, n)`[4](#fn4) |
| PCA | `stats` | `prcomp(data, scale = TRUE, center = TRUE)`[5](#fn5) |
| Multinomial Logit | `mlogit` | Requires several steps of data pre\-processing and formula definition, refer to the [Vignette](https://cran.r-project.org/web/packages/mlogit/vignettes/mlogit.pdf) for more details. |
| Cox PH | `survival` | `coxph(Surv(y_time, y_status) ~ x, data = mydata)`[6](#fn6) |
| Time series | Several, depending on your needs. | Time series in R is a vast subject that would require a very thick book to cover. You can get started with the following series of blog articles, [Tidy time\-series, part 1](http://www.business-science.io/timeseries-analysis/2017/07/02/tidy-timeseries-analysis.html), [Tidy time\-series, part 2](http://www.business-science.io/timeseries-analysis/2017/07/23/tidy-timeseries-analysis-pt-2.html), [Tidy time\-series, part 3](http://www.business-science.io/timeseries-analysis/2017/07/30/tidy-timeseries-analysis-pt-3.html) and [Tidy time\-series, part 4](http://www.business-science.io/timeseries-analysis/2017/08/30/tidy-timeseries-analysis-pt-4.html) |
| Panel data | `plm` | `plm(y ~ x, data = mydata, model = "within|random")` |
| Machine learning | Several, depending on your needs. | R is a very popular programming language for machine learning. [This book](https://www.tmwr.org/) is a must read if you need to do machine learning with R. |
| Nonparametric regression | `np` | Several functions and options available, refer to the [Vignette](https://cran.r-project.org/web/packages/np/vignettes/np.pdf) for more details. |
This table is far from being complete. Should you be a Bayesian, you’d want to look at packages
such as `{rstan}`, which uses `STAN`, an external piece of software that must be installed on your
system. It is also possible to train models using Bayesian inference without the need of external
tools, with the `{bayesm}` package which estimates the usual micro\-econometric models. There really
are a lot of packages available for Bayesian inference, and you can find them all in the [related
CRAN Task View](https://cran.r-project.org/web/views/Bayesian.html).
6\.8 Hyper\-parameters
----------------------
Hyper\-parameters are parameters of the model that cannot be directly learned from the data.
A linear regression does not have any hyper\-parameters, but a random forest for instance has several.
You might have heard of ridge regression, lasso and elasticnet. These are
extensions of linear models that avoid over\-fitting by penalizing *large* models. These
extensions of the linear regression have hyper\-parameters that the practitioner has to tune. There
are several ways one can tune these parameters, for example, by doing a grid\-search, or a random
search over the grid or using more elaborate methods. To introduce hyper\-parameters, let’s get
to know ridge regression, also called Tikhonov regularization.
### 6\.8\.1 Ridge regression
Ridge regression is used when the data you are working with has a lot of explanatory variables,
or when there is a risk that a simple linear regression might overfit to the training data, because,
for example, your explanatory variables are collinear.
If you are training a linear model and then you notice that it generalizes very badly to new,
unseen data, it is very likely that the linear model you trained overfit the data.
In this case, ridge regression might prove useful. The way ridge regression works might seem
counter\-intuitive; it boils down to fitting a *worse* model to the training data, but in return,
this worse model will generalize better to new data.
The closed form solution of the ordinary least squares estimator is defined as:
\\\[
\\widehat{\\beta} \= (X'X)^{\-1}X'Y
\\]
where \\(X\\) is the design matrix (the matrix made up of the explanatory variables) and \\(Y\\) is the
dependent variable. For ridge regression, this closed form solution changes a little bit:
\\\[
\\widehat{\\beta} \= (X'X \+ \\lambda I\_p)^{\-1}X'Y
\\]
where \\(\\lambda \\in \\mathbb{R}\\) is a hyper\-parameter and \\(I\_p\\) is the identity matrix of dimension \\(p\\)
(\\(p\\) is the number of explanatory variables).
This formula above is the closed form solution to the following optimisation program:
\\\[
\\min\_{\\beta} \\sum\_{i\=1}^n \\left(y\_i \- \\sum\_{j\=1}^px\_{ij}\\beta\_j\\right)^2
\\]
such that:
\\\[
\\sum\_{j\=1}^p(\\beta\_j)^2 \< c
\\]
for some strictly positive \\(c\\); each value of \\(\\lambda\\) corresponds to one such \\(c\\).
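Equivalently, \\(\\lambda\\) is the penalty in the unconstrained version of this program, \\(\\min\_{\\beta} \\sum\_{i\=1}^n\\left(y\_i \- \\sum\_{j\=1}^px\_{ij}\\beta\_j\\right)^2 \+ \\lambda\\sum\_{j\=1}^p\\beta\_j^2\\). As a minimal sketch, the closed form solution can be computed directly with base R; note that, in practice, `glmnet()` standardizes the variables and does not penalize the intercept, so its results will differ from this naive computation:

```
# Ridge closed form solution: (X'X + lambda * I_p)^{-1} X'y
# X is a design matrix (including the intercept column), y a vector
ridge_coefs <- function(X, y, lambda) {
  solve(t(X) %*% X + lambda * diag(ncol(X)), t(X) %*% y)
}

# With lambda = 0, this reduces to the OLS estimator (X'X)^{-1} X'y
```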
The `glmnet()` function from the `{glmnet}` package can be used for ridge regression, by setting
the `alpha` argument to 0 (setting it to 1 would do LASSO, and setting it to a number between
0 and 1 would do elasticnet). But in order to compare linear regression and ridge regression,
let me first divide the data into a training set and a testing set:
```
index <- 1:nrow(Housing)
set.seed(12345)
train_index <- sample(index, round(0.90*nrow(Housing)), replace = FALSE)
test_index <- setdiff(index, train_index)
train_x <- Housing[train_index, ] %>%
select(-price)
train_y <- Housing[train_index, ] %>%
pull(price)
test_x <- Housing[test_index, ] %>%
select(-price)
test_y <- Housing[test_index, ] %>%
pull(price)
```
I do the train/test split this way, because `glmnet()` requires a design matrix as input, and not
a formula. Design matrices can be created using the `model.matrix()` function:
```
library("glmnet")
train_matrix <- model.matrix(train_y ~ ., data = train_x)
test_matrix <- model.matrix(test_y ~ ., data = test_x)
```
Let’s now run a linear regression, by setting the penalty to 0:
```
model_lm_ridge <- glmnet(y = train_y, x = train_matrix, alpha = 0, lambda = 0)
```
The model above provides the same result as a linear regression, because I set `lambda` to 0\. Let’s
compare the coefficients between the two:
```
coef(model_lm_ridge)
```
```
## 13 x 1 sparse Matrix of class "dgCMatrix"
## s0
## (Intercept) -2667.542863
## (Intercept) .
## lotsize 3.397596
## bedrooms 2081.087654
## bathrms 13294.192823
## stories 6400.454580
## drivewayyes 6530.644895
## recroomyes 5389.856794
## fullbaseyes 4899.099463
## gashwyes 12575.611265
## aircoyes 13078.144146
## garagepl 4155.249461
## prefareayes 10260.781753
```
and now the coefficients of the linear regression (because I provide a design matrix, I have to use
`lm.fit()` instead of `lm()`, which requires a formula, not a matrix):
```
coef(lm.fit(x = train_matrix, y = train_y))
```
```
## (Intercept) lotsize bedrooms bathrms stories drivewayyes
## -2667.052098 3.397629 2081.344118 13293.707725 6400.416730 6529.972544
## recroomyes fullbaseyes gashwyes aircoyes garagepl prefareayes
## 5388.871137 4899.024787 12575.970220 13077.988867 4155.269629 10261.056772
```
as you can see, the coefficients are practically the same. Let’s compute the RMSE for the unpenalized linear
regression:
```
preds_lm <- predict(model_lm_ridge, test_matrix)
rmse_lm <- sqrt(mean((preds_lm - test_y)^2))
```
This is the RMSE for the unpenalized linear regression, now stored in `rmse_lm`.
Let’s now run a ridge regression, with `lambda` equal to 100, and see if the RMSE is smaller:
```
model_ridge <- glmnet(y = train_y, x = train_matrix, alpha = 0, lambda = 100)
```
and let’s compute the RMSE again:
```
preds <- predict(model_ridge, test_matrix)
rmse <- sqrt(mean((preds - test_y)^2))
```
If this new RMSE is smaller than `rmse_lm`, the penalty improved the out\-of\-sample fit.
But which value of `lambda` gives the smallest RMSE? To find out, one must run the model over a grid of
`lambda` values and pick the model with the lowest RMSE. This procedure is available in the `cv.glmnet()`
function, which picks the best value for `lambda`:
```
best_model <- cv.glmnet(train_matrix, train_y)
# lambda that minimises the MSE
best_model$lambda.min
```
```
## [1] 61.42681
```
According to `cv.glmnet()` the best value for `lambda` is 61\.4268056, which can be plugged back into
`glmnet()`, as sketched below. In the next section, we will implement cross validation ourselves,
in order to find the hyper\-parameters of a random forest.
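Here is that sketch; it simply re\-uses the objects created above (keep in mind that `cv.glmnet()` was called with its default settings here):

```
# Refit the ridge regression with the selected lambda...
model_best <- glmnet(y = train_y, x = train_matrix, alpha = 0,
                     lambda = best_model$lambda.min)

# ...and recompute the RMSE on the test set
preds_best <- predict(model_best, test_matrix)
sqrt(mean((preds_best - test_y)^2))
```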
6\.9 Training, validating, and testing models
---------------------------------------------
Cross\-validation is an important procedure which is used to compare models but also to tune the
hyper\-parameters of a model. In this section, we are going to use several packages from the
[`{tidymodels}`](https://github.com/tidymodels) collection of packages, namely
[`{recipes}`](https://tidymodels.github.io/recipes/),
[`{rsample}`](https://tidymodels.github.io/rsample/) and
[`{parsnip}`](https://tidymodels.github.io/parsnip/) to train a random forest the tidy way. I will
also use [`{mlrMBO}`](http://mlrmbo.mlr-org.com/) to tune the hyper\-parameters of the random forest.
### 6\.9\.1 Set up
Let’s load the needed packages:
```
library("tidyverse")
library("recipes")
library("rsample")
library("parsnip")
library("yardstick")
library("brotools")
library("mlbench")
```
Load the data, which is included in the `{mlbench}` package:
```
data("BostonHousing2")
```
I will train a random forest to predict the housing prices, which are stored in the `cmedv` column:
```
head(BostonHousing2)
```
```
## town tract lon lat medv cmedv crim zn indus chas nox
## 1 Nahant 2011 -70.9550 42.2550 24.0 24.0 0.00632 18 2.31 0 0.538
## 2 Swampscott 2021 -70.9500 42.2875 21.6 21.6 0.02731 0 7.07 0 0.469
## 3 Swampscott 2022 -70.9360 42.2830 34.7 34.7 0.02729 0 7.07 0 0.469
## 4 Marblehead 2031 -70.9280 42.2930 33.4 33.4 0.03237 0 2.18 0 0.458
## 5 Marblehead 2032 -70.9220 42.2980 36.2 36.2 0.06905 0 2.18 0 0.458
## 6 Marblehead 2033 -70.9165 42.3040 28.7 28.7 0.02985 0 2.18 0 0.458
## rm age dis rad tax ptratio b lstat
## 1 6.575 65.2 4.0900 1 296 15.3 396.90 4.98
## 2 6.421 78.9 4.9671 2 242 17.8 396.90 9.14
## 3 7.185 61.1 4.9671 2 242 17.8 392.83 4.03
## 4 6.998 45.8 6.0622 3 222 18.7 394.63 2.94
## 5 7.147 54.2 6.0622 3 222 18.7 396.90 5.33
## 6 6.430 58.7 6.0622 3 222 18.7 394.12 5.21
```
Only keep relevant columns:
```
boston <- BostonHousing2 %>%
select(-medv, -tract, -lon, -lat) %>%
rename(price = cmedv)
```
I remove `medv`, since `cmedv` is its corrected version, and also `tract`, `lat` and `lon`, because the information contained in the column `town` is enough.
To train and evaluate the model’s performance, I split the data in two.
One data set, called the training set, will be further split into two down below. I won’t
touch the second data set, the test set, until the very end, to finally assess the model’s
performance.
```
train_test_split <- initial_split(boston, prop = 0.9)
housing_train <- training(train_test_split)
housing_test <- testing(train_test_split)
```
`initial_split()`, `training()` and `testing()` are functions from the `{rsample}` package.
I will train a random forest on the training data, but the question is, *which* random forest?
Random forests have several hyper\-parameters, and as explained in the intro, these
hyper\-parameters cannot be directly learned from the data, so which ones should we choose? We could
train 6 random forests, for instance, and compare their performance, but why only 6? Why not 16?
In order to find the right hyper\-parameters, the practitioner can
use values from the literature that seem to have worked well (as is done in macro\-econometrics),
or further split the training set into two, create a grid of hyper\-parameters (see the sketch after
this paragraph), train the model on one part of the data for each point of the grid, and compare the
predictions of the models on the second part of the data. You then stick with the model that performed
best, for example, the model with the lowest RMSE. The thing is, you can’t estimate the true value of the
RMSE with only one value. It’s as if you wanted to estimate the height of the population by drawing one single
observation from the population. You need a bit more observations. To approach the true value of the
RMSE for a given set of hyper\-parameters, instead of doing one split, let’s do 30\. Then we
compute the average RMSE, which implies training 30 models for each combination of the values of the
hyper\-parameters.
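Here is the grid sketch mentioned above, using `tidyr::crossing()` (base R’s `expand.grid()` would work too); the bounds are illustrative:

```
# One row per (mtry, trees) combination to evaluate
hyper_grid <- tidyr::crossing(mtry = 3:8, trees = c(100, 300, 500))

head(hyper_grid)
```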
First, let’s split the training data again, using the `mc_cv()` function from `{rsample}` package.
This function implements Monte Carlo cross\-validation:
```
validation_data <- mc_cv(housing_train, prop = 0.9, times = 30)
```
What does `validation_data` look like?
```
validation_data
```
```
## # Monte Carlo cross-validation (0.9/0.1) with 30 resamples
## # A tibble: 30 × 2
## splits id
## <list> <chr>
## 1 <split [409/46]> Resample01
## 2 <split [409/46]> Resample02
## 3 <split [409/46]> Resample03
## 4 <split [409/46]> Resample04
## 5 <split [409/46]> Resample05
## 6 <split [409/46]> Resample06
## 7 <split [409/46]> Resample07
## 8 <split [409/46]> Resample08
## 9 <split [409/46]> Resample09
## 10 <split [409/46]> Resample10
## # … with 20 more rows
```
Let’s look further down:
```
validation_data$splits[[1]]
```
```
## <Analysis/Assess/Total>
## <409/46/455>
```
The first value is the number of rows of the first set, the second value that of the second set, and the third
is the number of rows in the original training data, before splitting again.
What should we call these two new data sets? The author of `{rsample}`, Max Kuhn, talks about
the *analysis* and the *assessment* sets, and I’m going to use this terminology as well.
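As a small sketch, the `analysis()` and `assessment()` functions from `{rsample}` extract these two sets from a split:

```
first_split <- validation_data$splits[[1]]

nrow(analysis(first_split))   # the 409 rows used to train the model
nrow(assessment(first_split)) # the 46 rows used to evaluate it
```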
Now, in order to continue I need to pre\-process the data. I will do this in three steps.
The first and the second steps are used to center and scale the numeric variables and the third step
converts character and factor variables to dummy variables. This is needed because I will train a
random forest, which cannot handle factor variables directly. Let’s define a recipe to do that,
and start by pre\-processing the testing set. I write a wrapper function around the recipe,
because I will need to apply this recipe to various data sets:
```
simple_recipe <- function(dataset){
recipe(price ~ ., data = dataset) %>%
step_center(all_numeric()) %>%
step_scale(all_numeric()) %>%
step_dummy(all_nominal())
}
```
We have not yet learned about writing functions, and will do so in the next chapter. However, for
now, you only need to know that you can write your own functions, and that these functions can
take any arguments you need. In the case of the above function, which we called `simple_recipe()`,
we only need one argument, a dataset, which we called `dataset`.
Once the recipe is defined, I can use the `prep()` function, which estimates the parameters from
the data which are needed to process the data. For example, for centering, `prep()` estimates
the mean which will then be subtracted from the variables. With `bake()` the estimates are then
applied on the data:
```
testing_rec <- prep(simple_recipe(housing_test), training = housing_test)
test_data <- bake(testing_rec, new_data = housing_test)
```
It is important to split the data before using `prep()` and `bake()`, because if not, you will
use observations from the test set in the `prep()` step, and thus introduce knowledge from the test
set into the training data. This is called data leakage, and must be avoided. This is why it is
necessary to first split the training data into an analysis and an assessment set, and then also
pre\-process these sets separately. However, the `validation_data` object cannot now be used with
`recipe()`, because it is not a dataframe. No worries, I simply need to write a function that extracts
the analysis and assessment sets from the `validation_data` object, applies the pre\-processing, trains
the model, and returns the RMSE. This will be a big function, at the center of the analysis.
But before that, let’s run a simple linear regression, as a benchmark. For the linear regression, I will
not use any CV, so let’s pre\-process the training set:
```
trainlm_rec <- prep(simple_recipe(housing_train), training = housing_train)
trainlm_data <- bake(trainlm_rec, new_data = housing_train)
linreg_model <- lm(price ~ ., data = trainlm_data)
broom::augment(linreg_model, newdata = test_data) %>%
yardstick::rmse(price, .fitted)
```
```
## Warning in predict.lm(x, newdata = newdata, na.action = na.pass, ...):
## prediction from a rank-deficient fit may be misleading
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 0.439
```
`broom::augment()` adds the predictions to the `test_data` in a new column, `.fitted`. I won’t
use this trick with the random forest, because there is no `augment()` method for random forests
from the `{ranger}` package which I’ll use. I’ll add the predictions to the data myself.
Ok, now let’s go back to the random forest and write the big function:
```
my_rf <- function(mtry, trees, split, id){
analysis_set <- analysis(split)
analysis_prep <- prep(simple_recipe(analysis_set), training = analysis_set)
analysis_processed <- bake(analysis_prep, new_data = analysis_set)
model <- rand_forest(mode = "regression", mtry = mtry, trees = trees) %>%
set_engine("ranger", importance = 'impurity') %>%
fit(price ~ ., data = analysis_processed)
assessment_set <- assessment(split)
  assessment_prep <- prep(simple_recipe(assessment_set), training = assessment_set)
assessment_processed <- bake(assessment_prep, new_data = assessment_set)
tibble::tibble("id" = id,
"truth" = assessment_processed$price,
"prediction" = unlist(predict(model, new_data = assessment_processed)))
}
```
The `rand_forest()` function is available in the `{parsnip}` package. This package provides a
unified interface to a lot of other machine learning packages. This means that instead of having to
learn the syntax of `ranger()` and `randomForest()` and so on, you can simply use the `rand_forest()`
function and change the `engine` argument to the one you want (`ranger`, `randomForest`, etc.).
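For instance, here is a sketch of the same model specification with another engine; only the `set_engine()` call changes, the rest of the code stays identical:

```
rf_spec <- rand_forest(mode = "regression", mtry = 3, trees = 200) %>%
  set_engine("randomForest")
```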
Let’s try this function:
```
results_example <- map2_df(.x = validation_data$splits,
.y = validation_data$id,
~my_rf(mtry = 3, trees = 200, split = .x, id = .y))
```
```
head(results_example)
```
```
## # A tibble: 6 × 3
## id truth prediction
## <chr> <dbl> <dbl>
## 1 Resample01 -0.328 -0.0274
## 2 Resample01 1.06 0.686
## 3 Resample01 1.04 0.726
## 4 Resample01 -0.418 -0.0190
## 5 Resample01 0.909 0.642
## 6 Resample01 0.0926 -0.134
```
I can now compute the RMSE when `mtry` \= 3 and `trees` \= 200:
```
results_example %>%
group_by(id) %>%
yardstick::rmse(truth, prediction) %>%
summarise(mean_rmse = mean(.estimate)) %>%
pull
```
```
## [1] 0.6305034
```
Note that this RMSE is computed on the assessment sets, so it is not directly comparable to the linear
regression’s test\-set RMSE from above. The goal now is to lower this
RMSE by tuning the `mtry` and `trees` hyperparameters. For this, I will use the Bayesian optimization
methods implemented in the `{mlrMBO}` package.
### 6\.9\.2 Bayesian hyperparameter optimization
I will re\-use the code from above, and define a function that does everything from pre\-processing
to returning the metric I want to minimize by tuning the hyperparameters, the RMSE:
```
tuning <- function(param, validation_data){
mtry <- param[1]
trees <- param[2]
results <- purrr::map2_df(.x = validation_data$splits,
.y = validation_data$id,
~my_rf(mtry = mtry, trees = trees, split = .x, id = .y))
results %>%
group_by(id) %>%
yardstick::rmse(truth, prediction) %>%
summarise(mean_rmse = mean(.estimate)) %>%
pull
}
```
This is exactly the code from before, but now wrapped in a function whose first argument is the vector
of hyper\-parameters, and which returns the RMSE. Let’s try the function with the values from before:
```
tuning(c(3, 200), validation_data)
```
```
## [1] 0.6319843
```
I now follow the code that can be found in the [arxiv](https://arxiv.org/abs/1703.03373) paper to
run the optimization. A simpler model, called the surrogate model, is used to look for promising
points and to evaluate the value of the function at these points. This seems somewhat similar
(in spirit) to the *Indirect Inference* method as described in
[Gourieroux, Monfort, Renault](https://www.jstor.org/stable/2285076).
If you don’t really get what follows, no worries, it is not really important as such. The idea
is simply to look for hyper\-parameters in an efficient way, and Bayesian optimisation provides
this efficient way. However, you could use another method, for example a grid search. This would not
change anything to the general approach. So I will not spend too much time explaining what is
going on below, as you can read the details in the paper cited above as well as the package’s
documentation. The focus here is not on this particular method, but rather showing you how you can
use various packages to solve a data science problem.
Let’s first load the package and create the function to optimize:
```
library("mlrMBO")
```
```
fn <- makeSingleObjectiveFunction(name = "tuning",
fn = tuning,
par.set = makeParamSet(makeIntegerParam("x1", lower = 3, upper = 8),
makeIntegerParam("x2", lower = 100, upper = 500)))
```
This function is based on the function I defined before. The parameters to optimize are also
defined, as are their bounds. I will look for `mtry` between the values of 3 and 8, and `trees`
between 100 and 500\.
We still need to define some other objects before continuing:
```
# Create initial random Latin Hypercube Design of 10 points
library(lhs)# for randomLHS
des <- generateDesign(n = 5L * 2L, getParamSet(fn), fun = randomLHS)
```
Then we choose the surrogate model, a random forest too:
```
# Surrogate model: a random forest with standard error estimation
surrogate <- makeLearner("regr.ranger", predict.type = "se", keep.inbag = TRUE)
```
Here I define some options:
```
# Set general controls
ctrl <- makeMBOControl()
ctrl <- setMBOControlTermination(ctrl, iters = 10L)
ctrl <- setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI())
```
And this is the optimization part:
```
# Start optimization
result <- mbo(fn, des, surrogate, ctrl, more.args = list("validation_data" = validation_data))
```
```
result
```
```
## Recommended parameters:
## x1=8; x2=314
## Objective: y = 0.484
##
## Optimization path
## 10 + 10 entries in total, displaying last 10 (or less):
## x1 x2 y dob eol error.message exec.time ei error.model
## 11 8 283 0.4855415 1 NA <NA> 7.353 -3.276847e-04 <NA>
## 12 8 284 0.4852047 2 NA <NA> 7.321 -3.283713e-04 <NA>
## 13 8 314 0.4839817 3 NA <NA> 7.703 -3.828517e-04 <NA>
## 14 8 312 0.4841398 4 NA <NA> 7.633 -2.829713e-04 <NA>
## 15 8 318 0.4841066 5 NA <NA> 7.692 -2.668354e-04 <NA>
## 16 8 314 0.4845221 6 NA <NA> 7.574 -1.382333e-04 <NA>
## 17 8 321 0.4843018 7 NA <NA> 7.693 -3.828924e-05 <NA>
## 18 8 318 0.4868457 8 NA <NA> 7.696 -8.692828e-07 <NA>
## 19 8 310 0.4862687 9 NA <NA> 7.594 -1.061185e-07 <NA>
## 20 8 313 0.4878694 10 NA <NA> 7.628 -5.153015e-07 <NA>
## train.time prop.type propose.time se mean
## 11 0.011 infill_ei 0.450 0.0143886864 0.5075765
## 12 0.011 infill_ei 0.427 0.0090265872 0.4971003
## 13 0.012 infill_ei 0.443 0.0062693960 0.4916927
## 14 0.012 infill_ei 0.435 0.0037308971 0.4878950
## 15 0.012 infill_ei 0.737 0.0024446891 0.4860699
## 16 0.013 infill_ei 0.442 0.0012713838 0.4850705
## 17 0.012 infill_ei 0.444 0.0006371109 0.4847248
## 18 0.013 infill_ei 0.467 0.0002106381 0.4844576
## 19 0.014 infill_ei 0.435 0.0002182254 0.4846214
## 20 0.013 infill_ei 0.748 0.0002971160 0.4847383
```
So the recommended parameters are 8 for `mtry` and 314 for `trees`. The
user can access these recommended parameters with `result$x$x1` and `result$x$x2`.
The value of the RMSE is lower than before, and equals 0\.4839817\. It can be accessed with
`result$y`.
Let’s now train the random forest on the training data with these values. First, I pre\-process the
training data:
```
training_rec <- prep(simple_recipe(housing_train), training = housing_train)
train_data <- bake(training_rec, new_data = housing_train)
```
Let’s now train our final model and predict the prices:
```
final_model <- rand_forest(mode = "regression", mtry = result$x$x1, trees = result$x$x2) %>%
set_engine("ranger", importance = 'impurity') %>%
fit(price ~ ., data = train_data)
price_predict <- predict(final_model, new_data = select(test_data, -price))
```
Let’s transform the data back and compare the predicted prices to the true ones visually:
```
cbind(price_predict * sd(housing_train$price) + mean(housing_train$price),
housing_test$price)
```
```
## .pred housing_test$price
## 1 16.76938 13.5
## 2 27.59510 30.8
## 3 23.14952 24.7
## 4 21.92390 21.2
## 5 21.35030 20.0
## 6 23.15809 22.9
## 7 23.00947 23.9
## 8 25.74268 26.6
## 9 24.13122 22.6
## 10 34.97671 43.8
## 11 19.30543 18.8
## 12 18.09146 15.7
## 13 18.82922 19.2
## 14 18.63397 13.3
## 15 19.14438 14.0
## 16 17.05549 15.6
## 17 23.79491 27.0
## 18 20.30125 17.4
## 19 22.99200 23.6
## 20 32.77092 33.3
## 21 31.66258 34.6
## 22 28.79583 34.9
## 23 39.02755 50.0
## 24 23.53336 21.7
## 25 24.66551 24.3
## 26 24.91737 24.0
## 27 25.11847 25.1
## 28 24.42518 23.7
## 29 24.59139 23.7
## 30 24.91760 26.2
## 31 38.73875 43.5
## 32 29.71848 35.1
## 33 36.89490 46.0
## 34 24.04041 26.4
## 35 20.91349 20.3
## 36 21.18602 23.1
## 37 22.57069 22.2
## 38 25.21751 23.9
## 39 28.55841 50.0
## 40 14.38216 7.2
## 41 12.76573 8.5
## 42 11.78237 9.5
## 43 13.29279 13.4
## 44 14.95076 16.4
## 45 15.79182 19.1
## 46 18.26510 19.6
## 47 14.84985 13.3
## 48 16.01508 16.7
## 49 24.09930 25.0
## 50 20.75357 21.8
## 51 19.49487 19.7
```
Let’s now compute the RMSE:
```
tibble::tibble("truth" = test_data$price,
"prediction" = unlist(price_predict)) %>%
yardstick::rmse(truth, prediction)
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 0.425
```
As I mentioned above, the whole part about looking for hyper\-parameters could be swapped for another
method. The general approach, though, remains what I have described, and can be applied to any model
that has hyper\-parameters.
6\.1 Terminology
----------------
Before continuing the discussion about statistical models and model fitting, it is worthwhile to discuss
terminology a little bit. Depending on your background, you might call an explanatory variable a
feature, or the dependent variable the target. These are the same objects. The matrix of features
is usually called a design matrix, and what statisticians call the intercept is what
machine learning engineers call the bias. Referring to the intercept as the bias is unfortunate, as bias
has a very different meaning in statistics: the systematic error of an estimator, which leads to
*biased* estimates. To finish up, the estimated parameters of the model may be called coefficients
or weights. Here again, I don’t like using *weight*, as weight has a very different meaning in
statistics.
So, in the remainder of this chapter, and book, I will use the terminology from the statistical
literature, using dependent and explanatory variables (`y` and `x`), and calling the
estimated parameters coefficients and the intercept… well, the intercept (the \\(\\beta\\)s of the model).
However, I will talk of *training* a model, instead of *estimating* a model.
6\.2 Fitting a model to data
----------------------------
Suppose you have a variable `y` that you wish to explain using a set of other variables `x1`, `x2`,
`x3`, etc. Let’s take a look at the `Housing` dataset from the `Ecdat` package:
```
library(Ecdat)
data(Housing)
```
You can read a description of the dataset by running:
```
?Housing
```
```
Housing package:Ecdat R Documentation
Sales Prices of Houses in the City of Windsor
Description:
a cross-section from 1987
_number of observations_ : 546
_observation_ : goods
_country_ : Canada
Usage:
data(Housing)
Format:
A dataframe containing :
price: sale price of a house
lotsize: the lot size of a property in square feet
bedrooms: number of bedrooms
bathrms: number of full bathrooms
stories: number of stories excluding basement
driveway: does the house has a driveway ?
recroom: does the house has a recreational room ?
fullbase: does the house has a full finished basement ?
gashw: does the house uses gas for hot water heating ?
airco: does the house has central air conditioning ?
garagepl: number of garage places
prefarea: is the house located in the preferred neighbourhood of the city ?
Source:
Anglin, P.M. and R. Gencay (1996) “Semiparametric estimation of
a hedonic price function”, _Journal of Applied Econometrics_,
*11(6)*, 633-648.
References:
Verbeek, Marno (2004) _A Guide to Modern Econometrics_, John Wiley
and Sons, chapter 3.
Journal of Applied Econometrics data archive : <URL:
http://qed.econ.queensu.ca/jae/>.
See Also:
‘Index.Source’, ‘Index.Economics’, ‘Index.Econometrics’,
‘Index.Observations’
```
or by looking for `Housing` in the help pane of RStudio. Usually, you would take a look at the data
before doing any modeling:
```
glimpse(Housing)
```
```
## Rows: 546
## Columns: 12
## $ price <dbl> 42000, 38500, 49500, 60500, 61000, 66000, 66000, 69000, 83800…
## $ lotsize <dbl> 5850, 4000, 3060, 6650, 6360, 4160, 3880, 4160, 4800, 5500, 7…
## $ bedrooms <dbl> 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2, 2, 3, 4, 1, 2, 3…
## $ bathrms <dbl> 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1…
## $ stories <dbl> 2, 1, 1, 2, 1, 1, 2, 3, 1, 4, 1, 1, 2, 1, 1, 1, 2, 3, 1, 1, 2…
## $ driveway <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, no, ye…
## $ recroom <fct> no, no, no, yes, no, yes, no, no, yes, yes, no, no, no, no, n…
## $ fullbase <fct> yes, no, no, no, no, yes, yes, no, yes, no, yes, no, no, no, …
## $ gashw <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
## $ airco <fct> no, no, no, no, no, yes, no, no, no, yes, yes, no, no, no, no…
## $ garagepl <dbl> 1, 0, 0, 0, 0, 0, 2, 0, 0, 1, 3, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1…
## $ prefarea <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, n…
```
Housing prices depend on a set of variables such as the number of bedrooms, the area in which the house
is located, and so on. If you believe that housing prices depend linearly on a set of explanatory variables,
you will want to estimate a linear model. To estimate a *linear model*, you will need to use the
built\-in `lm()` function:
```
model1 <- lm(price ~ lotsize + bedrooms, data = Housing)
```
`lm()` takes a formula as an argument, which defines the model you want to estimate. In this case,
I ran the following regression:
\\\[
\\text{price} \= \\beta\_0 \+ \\beta\_1 \* \\text{lotsize} \+ \\beta\_2 \* \\text{bedrooms} \+ \\varepsilon
\\]
where \\(\\beta\_0, \\beta\_1\\) and \\(\\beta\_2\\) are three parameters to estimate. To take a look at the
results, you can use the `summary()` method (not to be confused with `dplyr::summarise()`):
```
summary(model1)
```
```
##
## Call:
## lm(formula = price ~ lotsize + bedrooms, data = Housing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -65665 -12498 -2075 8970 97205
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.613e+03 4.103e+03 1.368 0.172
## lotsize 6.053e+00 4.243e-01 14.265 < 2e-16 ***
## bedrooms 1.057e+04 1.248e+03 8.470 2.31e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 21230 on 543 degrees of freedom
## Multiple R-squared: 0.3703, Adjusted R-squared: 0.3679
## F-statistic: 159.6 on 2 and 543 DF, p-value: < 2.2e-16
```
if you wish to remove the intercept (\\(\\beta\_0\\) in the above equation) from your model, you can
do so with `-1`:
```
model2 <- lm(price ~ -1 + lotsize + bedrooms, data = Housing)
summary(model2)
```
```
##
## Call:
## lm(formula = price ~ -1 + lotsize + bedrooms, data = Housing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -67229 -12342 -1333 9627 95509
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## lotsize 6.283 0.390 16.11 <2e-16 ***
## bedrooms 11968.362 713.194 16.78 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 21250 on 544 degrees of freedom
## Multiple R-squared: 0.916, Adjusted R-squared: 0.9157
## F-statistic: 2965 on 2 and 544 DF, p-value: < 2.2e-16
```
or if you want to use all the columns inside `Housing`, replacing the column names by `.`:
```
model3 <- lm(price ~ ., data = Housing)
summary(model3)
```
```
##
## Call:
## lm(formula = price ~ ., data = Housing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -41389 -9307 -591 7353 74875
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -4038.3504 3409.4713 -1.184 0.236762
## lotsize 3.5463 0.3503 10.124 < 2e-16 ***
## bedrooms 1832.0035 1047.0002 1.750 0.080733 .
## bathrms 14335.5585 1489.9209 9.622 < 2e-16 ***
## stories 6556.9457 925.2899 7.086 4.37e-12 ***
## drivewayyes 6687.7789 2045.2458 3.270 0.001145 **
## recroomyes 4511.2838 1899.9577 2.374 0.017929 *
## fullbaseyes 5452.3855 1588.0239 3.433 0.000642 ***
## gashwyes 12831.4063 3217.5971 3.988 7.60e-05 ***
## aircoyes 12632.8904 1555.0211 8.124 3.15e-15 ***
## garagepl 4244.8290 840.5442 5.050 6.07e-07 ***
## prefareayes 9369.5132 1669.0907 5.614 3.19e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15420 on 534 degrees of freedom
## Multiple R-squared: 0.6731, Adjusted R-squared: 0.6664
## F-statistic: 99.97 on 11 and 534 DF, p-value: < 2.2e-16
```
You can access different elements of `model3` with `$`, because the result of `lm()` is a list
(you can check this claim with `typeof(model3)`):
```
print(model3$coefficients)
```
```
## (Intercept) lotsize bedrooms bathrms stories drivewayyes
## -4038.350425 3.546303 1832.003466 14335.558468 6556.945711 6687.778890
## recroomyes fullbaseyes gashwyes aircoyes garagepl prefareayes
## 4511.283826 5452.385539 12831.406266 12632.890405 4244.829004 9369.513239
```
but I prefer to use the `{broom}` package, and more specifically the `tidy()` function, which
converts `model3` into a neat `data.frame`:
```
results3 <- broom::tidy(model3)
glimpse(results3)
```
```
## Rows: 12
## Columns: 5
## $ term <chr> "(Intercept)", "lotsize", "bedrooms", "bathrms", "stories", …
## $ estimate <dbl> -4038.350425, 3.546303, 1832.003466, 14335.558468, 6556.9457…
## $ std.error <dbl> 3409.4713, 0.3503, 1047.0002, 1489.9209, 925.2899, 2045.2458…
## $ statistic <dbl> -1.184451, 10.123618, 1.749764, 9.621691, 7.086369, 3.269914…
## $ p.value <dbl> 2.367616e-01, 3.732442e-22, 8.073341e-02, 2.570369e-20, 4.37…
```
I explicitly write `broom::tidy()` because `tidy()` is a popular function name. For instance,
it is also a function from the `{yardstick}` package, which does not do the same thing at all. Since
I will also be using `{yardstick}`, I prefer to explicitly write `broom::tidy()` to avoid conflicts.
Using `broom::tidy()` is useful, because you can then work on the results easily, for example if
you wish to only keep results that are significant at the 5% level:
```
results3 %>%
filter(p.value < 0.05)
```
```
## # A tibble: 10 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 lotsize 3.55 0.350 10.1 3.73e-22
## 2 bathrms 14336. 1490. 9.62 2.57e-20
## 3 stories 6557. 925. 7.09 4.37e-12
## 4 drivewayyes 6688. 2045. 3.27 1.15e- 3
## 5 recroomyes 4511. 1900. 2.37 1.79e- 2
## 6 fullbaseyes 5452. 1588. 3.43 6.42e- 4
## 7 gashwyes 12831. 3218. 3.99 7.60e- 5
## 8 aircoyes 12633. 1555. 8.12 3.15e-15
## 9 garagepl 4245. 841. 5.05 6.07e- 7
## 10 prefareayes 9370. 1669. 5.61 3.19e- 8
```
You can even add new columns, such as the confidence intervals:
```
results3 <- broom::tidy(model3, conf.int = TRUE, conf.level = 0.95)
print(results3)
```
```
## # A tibble: 12 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -4038. 3409. -1.18 2.37e- 1 -10736. 2659.
## 2 lotsize 3.55 0.350 10.1 3.73e-22 2.86 4.23
## 3 bedrooms 1832. 1047. 1.75 8.07e- 2 -225. 3889.
## 4 bathrms 14336. 1490. 9.62 2.57e-20 11409. 17262.
## 5 stories 6557. 925. 7.09 4.37e-12 4739. 8375.
## 6 drivewayyes 6688. 2045. 3.27 1.15e- 3 2670. 10705.
## 7 recroomyes 4511. 1900. 2.37 1.79e- 2 779. 8244.
## 8 fullbaseyes 5452. 1588. 3.43 6.42e- 4 2333. 8572.
## 9 gashwyes 12831. 3218. 3.99 7.60e- 5 6511. 19152.
## 10 aircoyes 12633. 1555. 8.12 3.15e-15 9578. 15688.
## 11 garagepl 4245. 841. 5.05 6.07e- 7 2594. 5896.
## 12 prefareayes 9370. 1669. 5.61 3.19e- 8 6091. 12648.
```
Going back to model estimation, you can of course use `lm()` in a pipe workflow:
```
Housing %>%
select(-driveway, -stories) %>%
lm(price ~ ., data = .) %>%
broom::tidy()
```
```
## # A tibble: 10 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3025. 3263. 0.927 3.54e- 1
## 2 lotsize 3.67 0.363 10.1 4.52e-22
## 3 bedrooms 4140. 1036. 3.99 7.38e- 5
## 4 bathrms 16443. 1546. 10.6 4.29e-24
## 5 recroomyes 5660. 2010. 2.82 5.05e- 3
## 6 fullbaseyes 2241. 1618. 1.38 1.67e- 1
## 7 gashwyes 13568. 3411. 3.98 7.93e- 5
## 8 aircoyes 15578. 1597. 9.75 8.53e-21
## 9 garagepl 4232. 883. 4.79 2.12e- 6
## 10 prefareayes 10729. 1753. 6.12 1.81e- 9
```
The first `.` in the `lm()` call, inside the formula, indicates that we wish to use all the variables
from `Housing` (minus `driveway` and `stories`, which I removed using `select()` and the `-` sign).
The second `.`, in `data = .`, indicates where the result of the two `dplyr` instructions that precede
`lm()` should be placed.
You have to specify this because, by default, when using `%>%` the left hand side argument gets
passed as the *first* argument of the function on the right hand side, and `lm()`'s first argument
is the formula, not the data.
Since version 4\.2, R now also natively includes a placeholder, `_`:
```
Housing |>
select(-driveway, -stories) |>
lm(price ~ ., data = _) |>
broom::tidy()
```
```
## # A tibble: 10 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3025. 3263. 0.927 3.54e- 1
## 2 lotsize 3.67 0.363 10.1 4.52e-22
## 3 bedrooms 4140. 1036. 3.99 7.38e- 5
## 4 bathrms 16443. 1546. 10.6 4.29e-24
## 5 recroomyes 5660. 2010. 2.82 5.05e- 3
## 6 fullbaseyes 2241. 1618. 1.38 1.67e- 1
## 7 gashwyes 13568. 3411. 3.98 7.93e- 5
## 8 aircoyes 15578. 1597. 9.75 8.53e-21
## 9 garagepl 4232. 883. 4.79 2.12e- 6
## 10 prefareayes 10729. 1753. 6.12 1.81e- 9
```
For the example above, I’ve also switched from `%>%` to `|>`, because the `_` placeholder only works
with the native pipe. Its advantage is that it disambiguates the two roles `.` plays above: `.` remains
a placeholder for all the variables in the dataset (inside the formula), while `_` is the placeholder
for the dataset itself.
6\.3 Diagnostics
----------------
Diagnostics are useful metrics to assess model fit. You can read some of these diagnostics, such as
the \\(R^2\\) at the bottom of the summary (when running `summary(my_model)`), but if you want to do
more than simply reading these diagnostics from RStudio, you can put those in a `data.frame` too,
using `broom::glance()`:
```
glance(model3)
```
```
## # A tibble: 1 × 12
## r.squared adj.r.…¹ sigma stati…² p.value df logLik AIC BIC devia…³
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.673 0.666 15423. 100. 6.18e-122 11 -6034. 12094. 12150. 1.27e11
## # … with 2 more variables: df.residual <int>, nobs <int>, and abbreviated
## # variable names ¹adj.r.squared, ²statistic, ³deviance
```
You can also produce the usual diagnostic plots using `ggfortify::autoplot()`, which uses the
`{ggplot2}` package under the hood:
```
library(ggfortify)
autoplot(model3, which = 1:6) + theme_minimal()
```
`which = 1:6` is an additional option that shows all six diagnostic plots. If you omit this
option, you will only get four of them.
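If you prefer base R graphics, the same six diagnostic plots are available from `plot()` directly;
a quick sketch (the `par()` calls simply arrange the plots on a grid and then reset the device):
```
par(mfrow = c(2, 3))       # 2 x 3 grid so that all six plots fit on one device
plot(model3, which = 1:6)  # same diagnostics as autoplot(), in base R
par(mfrow = c(1, 1))       # reset the plotting grid
```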
You can also get the residuals of the regression in two ways; either you grab them directly from
the model fit:
```
resi3 <- residuals(model3)
```
or you can augment the original data with a residuals column, using `broom::augment()`:
```
housing_aug <- augment(model3)
```
Let’s take a look at `housing_aug`:
```
glimpse(housing_aug)
```
```
## Rows: 546
## Columns: 18
## $ price <dbl> 42000, 38500, 49500, 60500, 61000, 66000, 66000, 69000, 838…
## $ lotsize <dbl> 5850, 4000, 3060, 6650, 6360, 4160, 3880, 4160, 4800, 5500,…
## $ bedrooms <dbl> 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2, 2, 3, 4, 1, 2,…
## $ bathrms <dbl> 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2,…
## $ stories <dbl> 2, 1, 1, 2, 1, 1, 2, 3, 1, 4, 1, 1, 2, 1, 1, 1, 2, 3, 1, 1,…
## $ driveway <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, no, …
## $ recroom <fct> no, no, no, yes, no, yes, no, no, yes, yes, no, no, no, no,…
## $ fullbase <fct> yes, no, no, no, no, yes, yes, no, yes, no, yes, no, no, no…
## $ gashw <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no,…
## $ airco <fct> no, no, no, no, no, yes, no, no, no, yes, yes, no, no, no, …
## $ garagepl <dbl> 1, 0, 0, 0, 0, 0, 2, 0, 0, 1, 3, 0, 0, 0, 0, 0, 1, 0, 0, 1,…
## $ prefarea <fct> no, no, no, no, no, no, no, no, no, no, no, no, no, no, no,…
## $ .fitted <dbl> 66037.98, 41391.15, 39889.63, 63689.09, 49760.43, 66387.12,…
## $ .resid <dbl> -24037.9757, -2891.1515, 9610.3699, -3189.0873, 11239.5735,…
## $ .hat <dbl> 0.013477335, 0.008316321, 0.009893730, 0.021510891, 0.01033…
## $ .sigma <dbl> 15402.01, 15437.14, 15431.98, 15437.02, 15429.89, 15437.64,…
## $ .cooksd <dbl> 2.803214e-03, 2.476265e-05, 3.265481e-04, 8.004787e-05, 4.6…
## $ .std.resid <dbl> -1.56917096, -0.18823924, 0.62621736, -0.20903274, 0.732539…
```
A few columns have been added to the original data, among them `.resid` which contains the
residuals. Let’s plot them:
```
ggplot(housing_aug) +
geom_density(aes(.resid))
```
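Another common check on the residuals is a QQ plot. Here is a minimal `{ggplot2}` sketch that uses
the standardized residuals `augment()` already added:
```
ggplot(housing_aug, aes(sample = .std.resid)) +
  geom_qq() +      # empirical quantiles against theoretical normal quantiles
  geom_qq_line()   # reference line: where the points would fall under normality
```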
Fitted values are also added to the original data, under the variable `.fitted`. It would also have
been possible to get the fitted values with:
```
fit3 <- fitted(model3)
```
but I prefer using `augment()`, because the columns get merged to the original data, which then
makes it easier to find specific observations. For example, you might want to know for how many
housing units the model underestimates the price:
```
total_pos <- housing_aug %>%
filter(.resid > 0) %>%
summarise(total = n()) %>%
pull(total)
```
we find 261 housing units for which the residuals are positive. It is also easier to
extract outliers:
```
housing_aug %>%
mutate(prank = cume_dist(.cooksd)) %>%
filter(prank > 0.99) %>%
glimpse()
```
```
## Rows: 6
## Columns: 19
## $ price <dbl> 163000, 125000, 132000, 175000, 190000, 174500
## $ lotsize <dbl> 7420, 4320, 3500, 9960, 7420, 7500
## $ bedrooms <dbl> 4, 3, 4, 3, 4, 4
## $ bathrms <dbl> 1, 1, 2, 2, 2, 2
## $ stories <dbl> 2, 2, 2, 2, 3, 2
## $ driveway <fct> yes, yes, yes, yes, yes, yes
## $ recroom <fct> yes, no, no, no, no, no
## $ fullbase <fct> yes, yes, no, yes, no, yes
## $ gashw <fct> no, yes, yes, no, no, no
## $ airco <fct> yes, no, no, no, yes, yes
## $ garagepl <dbl> 2, 2, 2, 2, 2, 3
## $ prefarea <fct> no, no, no, yes, yes, yes
## $ .fitted <dbl> 94826.68, 77688.37, 85495.58, 108563.18, 115125.03, 118549.…
## $ .resid <dbl> 68173.32, 47311.63, 46504.42, 66436.82, 74874.97, 55951.00
## $ .hat <dbl> 0.02671105, 0.05303793, 0.05282929, 0.02819317, 0.02008141,…
## $ .sigma <dbl> 15144.70, 15293.34, 15298.27, 15159.14, 15085.99, 15240.66
## $ .cooksd <dbl> 0.04590995, 0.04637969, 0.04461464, 0.04616068, 0.04107317,…
## $ .std.resid <dbl> 4.480428, 3.152300, 3.098176, 4.369631, 4.904193, 3.679815
## $ prank <dbl> 0.9963370, 1.0000000, 0.9945055, 0.9981685, 0.9926740, 0.99…
```
`prank` is a variable I created with `cume_dist()`, a `{dplyr}` function that returns the
proportion of all values less than or equal to the current rank. For example:
```
example <- c(5, 4.6, 2, 1, 0.8, 0, -1)
cume_dist(example)
```
```
## [1] 1.0000000 0.8571429 0.7142857 0.5714286 0.4285714 0.2857143 0.1428571
```
by filtering `prank > 0.99` we get the top 1% of outliers according to Cook’s distance.
6\.4 Interpreting models
------------------------
Model interpretation is essential in the social sciences, but it is also getting very important
in machine learning. As usual, the terminology is different; in machine learning, we speak about
explainability. There is a very important aspect that one has to understand when it comes to
interpretability/explainability: *classical, parametric* models, and *black\-box* models. This
is very well explained in Breiman ([2001](#ref-breiman2001)), an absolute must read (link to paper, in PDF format:
[click here](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)). The gist of the paper
is that there are two cultures of statistical modeling; one culture relies on modeling the data
generating process, for instance, by considering that a variable y (the dependent variable, or target)
is a linear combination of input variables x (the independent variables, or features) plus some noise. The
other culture uses complex algorithms (random forests, neural networks)
to model the relationship between y and x. The author argues that most statisticians have relied
for too long on modeling data generating processes and do not use all the potential offered by
these complex algorithms. I think that a lot of things have changed since then, and that nowadays
any practitioner that uses data is open to use any type of model or algorithm, as long as it does
the job. However, the paper is very interesting, and the discussion on trade\-off between
simplicity of the model and interpretability/explainability is still relevant today.
In this section, I will explain how one can go about interpreting or explaining models from these
two cultures.
Also, it is important to note here that the discussion that will follow will be heavily influenced
by my econometrics background. I will focus on marginal effects as a way to interpret parametric
models (models from the first culture described above), but depending on the field, practitioners
might use something else (for instance by computing odds ratios in a logistic regression).
I will start by interpretability of *classical* statistical models.
### 6\.4\.1 Marginal effects
If one wants to know the effect of variable `x` on the dependent variable `y`,
so\-called marginal effects have to be computed. This is easily done in R with the `{marginaleffects}` package.
Formally, marginal effects are the partial derivative of the regression equation with respect to the variable
we want to look at.
```
library(marginaleffects)
effects_model3 <- marginaleffects(model3)
summary(effects_model3)
```
```
## Term Contrast Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
## 1 lotsize dY/dX 3.546 0.3503 10.124 < 2.22e-16 2.86 4.233
## 2 bedrooms dY/dX 1832.003 1047.0056 1.750 0.08016056 -220.09 3884.097
## 3 bathrms dY/dX 14335.558 1489.9557 9.621 < 2.22e-16 11415.30 17255.818
## 4 stories dY/dX 6556.946 925.2943 7.086 1.3771e-12 4743.40 8370.489
## 5 driveway yes - no 6687.779 2045.2459 3.270 0.00107580 2679.17 10696.387
## 6 recroom yes - no 4511.284 1899.9577 2.374 0.01757689 787.44 8235.132
## 7 fullbase yes - no 5452.386 1588.0239 3.433 0.00059597 2339.92 8564.855
## 8 gashw yes - no 12831.406 3217.5970 3.988 6.6665e-05 6525.03 19137.781
## 9 airco yes - no 12632.890 1555.0211 8.124 4.5131e-16 9585.11 15680.676
## 10 garagepl dY/dX 4244.829 840.5965 5.050 4.4231e-07 2597.29 5892.368
## 11 prefarea yes - no 9369.513 1669.0906 5.614 1.9822e-08 6098.16 12640.871
##
## Model type: lm
## Prediction type: response
```
Let’s go through this: `summary(effects_model3)` shows the average marginal effects for each of the explanatory
variables that were used in `model3`. The way to interpret them is as follows:
*everything else held constant (often you’ll read the Latin ceteris paribus for this), a unit increase in
`lotsize` increases the `price` by 3\.546 units, on average.*
The *everything held constant* part is crucial; with marginal effects, you’re looking at just the effect of
one variable at a time. For discrete variables, like `driveway`, the interpretation is simpler: imagine
two houses that are exactly the same, except that one has a driveway and the other doesn’t. The one
with the driveway is 6687 units more expensive, *on average*.
Now it turns out that in the case of a linear model, the average marginal effects are exactly equal to the
coefficients. Just compare `summary(model3)` to `effects_model3` to see
(and remember, I told you that marginal effects were the partial derivative of the regression equation with
respect to the variable of interest. So the derivative of \\(\\alpha X\_1 \+ \\dots\\) with respect to \\(X\_1\\) will
be \\(\\alpha\\)). But in the case of a more complex, non\-linear model, this is not so obvious. This is
where `{marginaleffects}` will make your life much easier.
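To see why this matters for non\-linear models, consider the logit case (a standard textbook result,
not something specific to `{marginaleffects}`): the response probability is
\\(P(y \= 1 \\mid x) \= \\Lambda(x'\\beta)\\), where \\(\\Lambda(z) \= 1/(1 \+ e^{\-z})\\), so the marginal
effect of a continuous variable \\(x\_j\\) is:
\\\[
\\dfrac{\\partial P(y \= 1 \\mid x)}{\\partial x\_j} \= \\Lambda(x'\\beta)\\left(1 \- \\Lambda(x'\\beta)\\right)\\beta\_j
\\]
which depends on \\(x\\): each individual gets their own marginal effect, and averaging these, as we
will see below, gives so\-called *average* marginal effects.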
It is also possible to plot the results:
```
plot(effects_model3)
```
`effects_model3` is a data frame containing the effects for each house in the data set. For example,
let’s take a look at the first house:
```
effects_model3 %>%
filter(rowid == 1)
```
```
## rowid type term contrast dydx std.error statistic
## 1 1 response lotsize dY/dX 3.546303 0.3502195 10.125944
## 2 1 response bedrooms dY/dX 1832.003466 1046.1608842 1.751168
## 3 1 response bathrms dY/dX 14335.558468 1490.4827945 9.618064
## 4 1 response stories dY/dX 6556.945711 925.4764870 7.084940
## 5 1 response driveway yes - no 6687.778890 2045.2460319 3.269914
## 6 1 response recroom yes - no 4511.283826 1899.9577182 2.374413
## 7 1 response fullbase yes - no 5452.385539 1588.0237538 3.433441
## 8 1 response gashw yes - no 12831.406266 3217.5971931 3.987885
## 9 1 response airco yes - no 12632.890405 1555.0207045 8.123937
## 10 1 response garagepl dY/dX 4244.829004 840.8930857 5.048001
## 11 1 response prefarea yes - no 9369.513239 1669.0904968 5.613544
## p.value conf.low conf.high predicted predicted_hi predicted_lo
## 1 4.238689e-24 2.859885 4.232721 66037.98 66043.14 66037.98
## 2 7.991698e-02 -218.434189 3882.441121 66037.98 66038.89 66037.98
## 3 6.708200e-22 11414.265872 17256.851065 66037.98 66042.28 66037.98
## 4 1.391042e-12 4743.045128 8370.846295 66037.98 66039.94 66037.98
## 5 1.075801e-03 2679.170328 10696.387452 66037.98 66037.98 59350.20
## 6 1.757689e-02 787.435126 8235.132526 66037.98 70549.26 66037.98
## 7 5.959723e-04 2339.916175 8564.854903 66037.98 66037.98 60585.59
## 8 6.666508e-05 6525.031651 19137.780882 66037.98 78869.38 66037.98
## 9 4.512997e-16 9585.105829 15680.674981 66037.98 78670.87 66037.98
## 10 4.464572e-07 2596.708842 5892.949167 66037.98 66039.25 66037.98
## 11 1.982240e-08 6098.155978 12640.870499 66037.98 75407.49 66037.98
## price lotsize bedrooms bathrms stories driveway recroom fullbase gashw airco
## 1 42000 5850 3 1 2 yes no yes no no
## 2 42000 5850 3 1 2 yes no yes no no
## 3 42000 5850 3 1 2 yes no yes no no
## 4 42000 5850 3 1 2 yes no yes no no
## 5 42000 5850 3 1 2 yes no yes no no
## 6 42000 5850 3 1 2 yes no yes no no
## 7 42000 5850 3 1 2 yes no yes no no
## 8 42000 5850 3 1 2 yes no yes no no
## 9 42000 5850 3 1 2 yes no yes no no
## 10 42000 5850 3 1 2 yes no yes no no
## 11 42000 5850 3 1 2 yes no yes no no
## garagepl prefarea eps
## 1 1 no 1.4550
## 2 1 no 0.0005
## 3 1 no 0.0003
## 4 1 no 0.0003
## 5 1 no NA
## 6 1 no NA
## 7 1 no NA
## 8 1 no NA
## 9 1 no NA
## 10 1 no 0.0003
## 11 1 no NA
```
`rowid` is a column that identifies the houses in the original data set, so `rowid == 1` keeps only
the first house. This shows you the marginal effects (column `dydx`) computed for this house; but
remember, since we’re dealing with a linear model, the values of the marginal effects are constant.
If you don’t see the point of this discussion, don’t fret, the next example should make things
clearer.
Let’s estimate a logit model and compute the marginal effects. You might know logit models as
*logistic regression*. Logit models can be estimated using the `glm()` function, which stands for
generalized linear models.
As an example, we are going to use the `Participation` data, also from the `{Ecdat}` package:
```
data(Participation)
```
```
?Participation
```
```
Participation package:Ecdat R Documentation
Labor Force Participation
Description:
a cross-section
_number of observations_ : 872
_observation_ : individuals
_country_ : Switzerland
Usage:
data(Participation)
Format:
A dataframe containing :
lfp labour force participation ?
lnnlinc the log of nonlabour income
age age in years divided by 10
educ years of formal education
nyc the number of young children (younger than 7)
noc number of older children
foreign foreigner ?
Source:
Gerfin, Michael (1996) “Parametric and semiparametric estimation
of the binary response”, _Journal of Applied Econometrics_,
*11(3)*, 321-340.
References:
Davidson, R. and James G. MacKinnon (2004) _Econometric Theory
and Methods_, New York, Oxford University Press, <URL:
http://www.econ.queensu.ca/ETM/>, chapter 11.
Journal of Applied Econometrics data archive : <URL:
http://qed.econ.queensu.ca/jae/>.
See Also:
‘Index.Source’, ‘Index.Economics’, ‘Index.Econometrics’,
‘Index.Observations’
```
The variable of interest is `lfp`: whether the individual participates in the labour force or not.
To know which variables are relevant in the decision to participate in the labour force, one could
train a logit model, using `glm()`:
```
logit_participation <- glm(lfp ~ ., data = Participation, family = "binomial")
broom::tidy(logit_participation)
```
```
## # A tibble: 7 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 10.4 2.17 4.79 1.69e- 6
## 2 lnnlinc -0.815 0.206 -3.97 7.31e- 5
## 3 age -0.510 0.0905 -5.64 1.72e- 8
## 4 educ 0.0317 0.0290 1.09 2.75e- 1
## 5 nyc -1.33 0.180 -7.39 1.51e-13
## 6 noc -0.0220 0.0738 -0.298 7.66e- 1
## 7 foreignyes 1.31 0.200 6.56 5.38e-11
```
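As an aside, since odds ratios were mentioned earlier as another way practitioners interpret logistic
regressions: `broom::tidy()` can exponentiate the coefficients for you (the `exponentiate` argument is
part of `{broom}`'s interface for generalized linear models):
```
# odds ratios instead of log-odds coefficients, with exponentiated confidence intervals
broom::tidy(logit_participation, exponentiate = TRUE, conf.int = TRUE)
```
That said, as announced earlier, I will stick with marginal effects here.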
From the results above, one can only interpret the sign of the coefficients. To know how much a
variable influences the labour force participation, one has to use `marginaleffects()`:
```
effects_logit_participation <- marginaleffects(logit_participation)
summary(effects_logit_participation)
```
```
## Term Contrast Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
## 1 lnnlinc dY/dX -0.169940 0.04151 -4.0939 4.2416e-05 -0.251300 -0.08858
## 2 age dY/dX -0.106407 0.01759 -6.0492 1.4560e-09 -0.140884 -0.07193
## 3 educ dY/dX 0.006616 0.00604 1.0954 0.27335 -0.005222 0.01845
## 4 nyc dY/dX -0.277463 0.03325 -8.3436 < 2.22e-16 -0.342642 -0.21229
## 5 noc dY/dX -0.004584 0.01538 -0.2981 0.76563 -0.034725 0.02556
## 6 foreign yes - no 0.283377 0.03984 7.1129 1.1361e-12 0.205292 0.36146
##
## Model type: glm
## Prediction type: response
```
As you can see, the average marginal effects here are not equal to the estimated coefficients of the
model. Let’s take a look at the first row of the data:
```
Participation[1, ]
```
```
## lfp lnnlinc age educ nyc noc foreign
## 1 no 10.7875 3 8 1 1 no
```
and let’s now look at `rowid == 1` in the marginal effects data frame:
```
effects_logit_participation %>%
filter(rowid == 1)
```
```
## rowid type term contrast dydx std.error statistic
## 1 1 response lnnlinc dY/dX -0.156661756 0.038522800 -4.0667282
## 2 1 response age dY/dX -0.098097148 0.020123709 -4.8747052
## 3 1 response educ dY/dX 0.006099266 0.005367036 1.1364310
## 4 1 response nyc dY/dX -0.255784406 0.029367783 -8.7096942
## 5 1 response noc dY/dX -0.004226368 0.014167283 -0.2983189
## 6 1 response foreign yes - no 0.305630005 0.045174828 6.7654935
## p.value conf.low conf.high predicted predicted_hi predicted_lo lfp
## 1 4.767780e-05 -0.232165056 -0.08115846 0.2596523 0.2595710 0.2596523 no
## 2 1.089711e-06 -0.137538892 -0.05865540 0.2596523 0.2596111 0.2596523 no
## 3 2.557762e-01 -0.004419931 0.01661846 0.2596523 0.2596645 0.2596523 no
## 4 3.046958e-18 -0.313344203 -0.19822461 0.2596523 0.2595755 0.2596523 no
## 5 7.654598e-01 -0.031993732 0.02354100 0.2596523 0.2596497 0.2596523 no
## 6 1.328556e-11 0.217088969 0.39417104 0.2596523 0.5652823 0.2596523 no
## lnnlinc age educ nyc noc foreign eps
## 1 10.7875 3 8 1 1 no 0.0005188749
## 2 10.7875 3 8 1 1 no 0.0004200000
## 3 10.7875 3 8 1 1 no 0.0020000000
## 4 10.7875 3 8 1 1 no 0.0003000000
## 5 10.7875 3 8 1 1 no 0.0006000000
## 6 10.7875 3 8 1 1 no NA
```
Let’s focus on the first row, where `term` is `lnnlinc`. What we see here is the effect of an infinitesimal
increase in the variable `lnnlinc` on the participation, for an individual who has the following
characteristics: `lnnlinc = 10.7875`, `age = 3`, `educ = 8`, `nyc = 1`, `noc = 1` and `foreign = no`, which
are the characteristics of this first individual in our data.
So let’s look at the value of `dydx` for every individual:
```
dydx_lnnlinc <- effects_logit_participation %>%
filter(term == "lnnlinc")
head(dydx_lnnlinc)
```
```
## rowid type term contrast dydx std.error statistic p.value
## 1 1 response lnnlinc dY/dX -0.15666176 0.03852280 -4.066728 4.767780e-05
## 2 2 response lnnlinc dY/dX -0.20013939 0.05124543 -3.905507 9.402813e-05
## 3 3 response lnnlinc dY/dX -0.18493932 0.04319729 -4.281271 1.858287e-05
## 4 4 response lnnlinc dY/dX -0.05376281 0.01586468 -3.388837 7.018964e-04
## 5 5 response lnnlinc dY/dX -0.18709356 0.04502973 -4.154890 3.254439e-05
## 6 6 response lnnlinc dY/dX -0.19586185 0.04782143 -4.095692 4.209096e-05
## conf.low conf.high predicted predicted_hi predicted_lo lfp lnnlinc age
## 1 -0.23216506 -0.08115846 0.25965227 0.25957098 0.25965227 no 10.78750 3.0
## 2 -0.30057859 -0.09970018 0.43340025 0.43329640 0.43340025 yes 10.52425 4.5
## 3 -0.26960445 -0.10027418 0.34808777 0.34799181 0.34808777 no 10.96858 4.6
## 4 -0.08485701 -0.02266862 0.07101902 0.07099113 0.07101902 no 11.10500 3.1
## 5 -0.27535020 -0.09883692 0.35704926 0.35695218 0.35704926 no 11.10847 4.4
## 6 -0.28959014 -0.10213356 0.40160949 0.40150786 0.40160949 yes 11.02825 4.2
## educ nyc noc foreign eps
## 1 8 1 1 no 0.0005188749
## 2 8 0 1 no 0.0005188749
## 3 9 0 0 no 0.0005188749
## 4 11 2 0 no 0.0005188749
## 5 12 0 2 no 0.0005188749
## 6 12 0 1 no 0.0005188749
```
`dydx_lnnlinc` is a data frame with all the individual marginal effects for the variable `lnnlinc`.
What if we compute the mean of this column?
```
dydx_lnnlinc %>%
summarise(mean(dydx))
```
```
## mean(dydx)
## 1 -0.1699405
```
Let’s compare this to the average marginal effects:
```
summary(effects_logit_participation)
```
```
## Term Contrast Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
## 1 lnnlinc dY/dX -0.169940 0.04151 -4.0939 4.2416e-05 -0.251300 -0.08858
## 2 age dY/dX -0.106407 0.01759 -6.0492 1.4560e-09 -0.140884 -0.07193
## 3 educ dY/dX 0.006616 0.00604 1.0954 0.27335 -0.005222 0.01845
## 4 nyc dY/dX -0.277463 0.03325 -8.3436 < 2.22e-16 -0.342642 -0.21229
## 5 noc dY/dX -0.004584 0.01538 -0.2981 0.76563 -0.034725 0.02556
## 6 foreign yes - no 0.283377 0.03984 7.1129 1.1361e-12 0.205292 0.36146
##
## Model type: glm
## Prediction type: response
```
Yep, it’s the same! This is why we speak of *average marginal effects*. Now that we know why
these are called average marginal effects, let’s go back to interpreting them. This time,
let’s plot them, because why not:
```
plot(effects_logit_participation)
```
So an infinitesimal increase of, say, 0\.001 in non\-labour income (`lnnlinc`) is associated with a
decrease in the probability of labour force participation of about 0\.001 × 17 \= 0\.017 percentage points.
This is just scratching the surface of interpreting these kinds of models. There are many more
types of effects that you can compute and look at. I highly recommend you read the documentation
of `{marginaleffects}` which you can find
[here](https://vincentarelbundock.github.io/marginaleffects/index.html). The author
of the package, Vincent Arel\-Bundock writes a lot of very helpful documentation for his packages,
so if model interpretation is important for your job, definitely take a look.
### 6\.4\.2 Explainability of *black\-box* models
Just read Christoph Molnar’s
[Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/).
Seriously, I cannot add anything meaningful to it. His book is brilliant.
6\.5 Comparing models
---------------------
Consider this section more as an illustration of what is possible with the knowledge you acquired
at this point. Imagine that the task at hand is to compare two models. We would like to select
the one which has the best fit to the data.
Let’s first estimate another model on the same data; prices are only positive, so a linear regression
might not be the best model, because the model could predict negative prices. Let’s look at the
distribution of prices:
```
ggplot(Housing) +
geom_density(aes(price))
```
it looks like modeling the log of `price` might provide a better fit:
```
model_log <- lm(log(price) ~ ., data = Housing)
result_log <- broom::tidy(model_log)
print(result_log)
```
```
## # A tibble: 12 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 10.0 0.0472 212. 0
## 2 lotsize 0.0000506 0.00000485 10.4 2.91e-23
## 3 bedrooms 0.0340 0.0145 2.34 1.94e- 2
## 4 bathrms 0.168 0.0206 8.13 3.10e-15
## 5 stories 0.0923 0.0128 7.20 2.10e-12
## 6 drivewayyes 0.131 0.0283 4.61 5.04e- 6
## 7 recroomyes 0.0735 0.0263 2.79 5.42e- 3
## 8 fullbaseyes 0.0994 0.0220 4.52 7.72e- 6
## 9 gashwyes 0.178 0.0446 4.00 7.22e- 5
## 10 aircoyes 0.178 0.0215 8.26 1.14e-15
## 11 garagepl 0.0508 0.0116 4.36 1.58e- 5
## 12 prefareayes 0.127 0.0231 5.50 6.02e- 8
```
Let’s take a look at the diagnostics:
```
glance(model_log)
```
```
## # A tibble: 1 × 12
## r.squared adj.r.squ…¹ sigma stati…² p.value df logLik AIC BIC devia…³
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.677 0.670 0.214 102. 3.67e-123 11 73.9 -122. -65.8 24.4
## # … with 2 more variables: df.residual <int>, nobs <int>, and abbreviated
## # variable names ¹adj.r.squared, ²statistic, ³deviance
```
Let’s compare these to the ones from the previous model:
```
diag_lm <- glance(model3)
diag_lm <- diag_lm %>%
mutate(model = "lin-lin model")
diag_log <- glance(model_log)
diag_log <- diag_log %>%
mutate(model = "log-lin model")
diagnostics_models <- full_join(diag_lm, diag_log) %>%
select(model, everything()) # put the `model` column first
```
```
## Joining, by = c("r.squared", "adj.r.squared", "sigma", "statistic", "p.value", "df", "logLik", "AIC", "BIC",
## "deviance", "df.residual", "nobs", "model")
```
```
print(diagnostics_models)
```
```
## # A tibble: 2 × 13
## model r.squ…¹ adj.r…² sigma stati…³ p.value df logLik AIC BIC
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 lin-li… 0.673 0.666 1.54e+4 100. 6.18e-122 11 -6034. 12094. 12150.
## 2 log-li… 0.677 0.670 2.14e-1 102. 3.67e-123 11 73.9 -122. -65.8
## # … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>, and
## # abbreviated variable names ¹r.squared, ²adj.r.squared, ³statistic
```
I saved the diagnostics in two different `data.frame` objects using the `glance()` function and added a
`model` column to indicate which model the diagnostics come from. Then I merged both datasets using
`full_join()`, a `{dplyr}` function. Using this approach, we can easily build a data frame with the
diagnostics of several models and compare them. The model using the logarithm of prices has lower
AIC and BIC (and thus a higher likelihood), so if you’re worried about selecting the model with the better
fit to the data, you’d go for this model.
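As a sketch of how this approach scales (the list\-based pattern below is a suggestion, not something
required by `{broom}`): collect any number of fitted models in a named list and map `glance()` over it
with `{purrr}`:
```
models <- list("lin-lin model" = model3, "log-lin model" = model_log)
# map_dfr() row-binds the one-row glance() outputs; .id stores the list
# names in a `model` column, like the mutate() calls above did
purrr::map_dfr(models, broom::glance, .id = "model")
```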
6\.6 Using a model for prediction
---------------------------------
Once you estimated a model, you might want to use it for prediction. This is easily done using the
`predict()` function that works with most models. Prediction is also useful as a way to test the
accuracy of your model: split your data into a training set (used for training) and a testing
set (used for the pseudo\-prediction) and see if your model overfits the data. We are going to see
how to do that in a later section; for now, let’s just get acquainted with `predict()` and other
functions. I insist, keep in mind that this section is only to get acquainted with these functions.
We are going to explore prediction, overfitting and tuning of models in a later section.
Let’s go back to the models we trained in the previous section, `model3` and `model_log`. Let’s also
take a subsample of data, which we will be using for prediction:
```
set.seed(1234)
pred_set <- Housing %>%
sample_n(20)
```
In order to always get the same `pred_set`, I set the random seed first. Let’s take a look at the
data:
```
print(pred_set)
```
```
## price lotsize bedrooms bathrms stories driveway recroom fullbase gashw
## 284 45000 6750 2 1 1 yes no no no
## 101 57000 4500 3 2 2 no no yes no
## 400 85000 7231 3 1 2 yes yes yes no
## 98 59900 8250 3 1 1 yes no yes no
## 103 125000 4320 3 1 2 yes no yes yes
## 326 99000 8880 3 2 2 yes no yes no
## 79 55000 3180 2 2 1 yes no yes no
## 270 59000 4632 4 1 2 yes no no no
## 382 112500 6550 3 1 2 yes no yes no
## 184 63900 3510 3 1 2 yes no no no
## 4 60500 6650 3 1 2 yes yes no no
## 212 42000 2700 2 1 1 no no no no
## 195 33000 3180 2 1 1 yes no no no
## 511 70000 4646 3 1 2 yes yes yes no
## 479 88000 5450 4 2 1 yes no yes no
## 510 64000 4040 3 1 2 yes no no no
## 424 62900 2880 3 1 2 yes no no no
## 379 84000 7160 3 1 1 yes no yes no
## 108 58500 3680 3 2 2 yes no no no
## 131 35000 4840 2 1 2 yes no no no
## airco garagepl prefarea
## 284 no 0 no
## 101 yes 0 no
## 400 yes 0 yes
## 98 no 3 no
## 103 no 2 no
## 326 yes 1 no
## 79 no 2 no
## 270 yes 0 no
## 382 yes 0 yes
## 184 no 0 no
## 4 no 0 no
## 212 no 0 no
## 195 no 0 no
## 511 no 2 no
## 479 yes 0 yes
## 510 no 1 no
## 424 no 0 yes
## 379 no 2 yes
## 108 no 0 no
## 131 no 0 no
```
If we wish to use it for prediction, this is easily done with `predict()`:
```
predict(model3, pred_set)
```
```
## 284 101 400 98 103 326 79 270
## 51143.48 77286.31 93204.28 76481.82 77688.37 103751.72 66760.79 66486.26
## 382 184 4 212 195 511 479 510
## 86277.96 48042.41 63689.09 30093.18 38483.18 70524.34 91987.65 54166.78
## 424 379 108 131
## 55177.75 77741.03 62980.84 50926.99
```
This returns a vector of predicted prices. This can then be used to compute the Root Mean Squared Error
for instance. Let’s do it within a `tidyverse` pipeline:
```
rmse <- pred_set %>%
mutate(predictions = predict(model3, .)) %>%
  summarise(rmse = sqrt(sum((predictions - price)**2) / n()))
```
This stores the root mean squared error of `model3` on `pred_set` in `rmse` (mind the parentheses:
each residual must be squared *before* summing, otherwise you would not be computing an RMSE at all).
I also used the `n()` function which returns the number of observations in a group (or all the
observations, if the data is not grouped). Let’s compare `model3`’s RMSE with the one from
`model_log`:
```
rmse2 <- pred_set %>%
mutate(predictions = exp(predict(model_log, .))) %>%
  summarise(rmse2 = sqrt(sum((predictions - price)**2) / n()))
```
Don’t forget to exponentiate the predictions, remember you’re dealing with a log\-linear model! You can
now compare `rmse2` to `rmse`; the model with the lower value fits this subsample better. However, keep
in mind that both models were trained on the whole data, and then the prediction quality was assessed
using a subsample of the data the models were trained on… so actually we can’t really say much about how
useful either model’s predictions would be on genuinely new data.
In a later section we are going to learn how to do cross validation to avoid this issue.
Just as a side note, notice that I had to copy and paste basically the same lines twice to compute
the predictions for both models. That’s not much, but if I wanted to compare 10 models, copy and
paste mistakes could sneak in. Instead, it would be nicer to have a function that computes the RMSE
and then to use it on my models. We are going to learn how to write our own functions and use them
just as if they were built\-in R functions.
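As a teaser, here is a minimal sketch of what such a function could look like (the name
`compute_rmse` and its arguments are hypothetical, purely for illustration):
```
# hypothetical helper: RMSE of a model on a data set with a `price` column;
# set log_model = TRUE for models fitted on log(price)
compute_rmse <- function(model, data, log_model = FALSE) {
  preds <- predict(model, data)
  if (log_model) preds <- exp(preds)  # undo the log transform
  sqrt(mean((preds - data$price)^2))
}

compute_rmse(model3, pred_set)
compute_rmse(model_log, pred_set, log_model = TRUE)
```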
6\.7 Beyond linear regression
-----------------------------
R has a lot of other built\-in functions for regression, such as `glm()` (for Generalized Linear
Models) and `nls()` (for Nonlinear Least Squares). There are also functions and additional
packages for time series, panel data, machine learning, bayesian and nonparametric methods.
Presenting everything here would take too much space, and would be pretty useless as you can find
whatever you need using an internet search engine. What you have learned until now is quite general
and should work on many types of models. To help you out, here is a list of methods and the
recommended packages that you can use:
| Model | Package | Quick example |
| --- | --- | --- |
| Robust Linear Regression | `MASS` | `rlm(y ~ x, data = mydata)` |
| Nonlinear Least Squares | `stats`[2](#fn2) | `nls(y ~ x1 / (1 + x2), data = mydata)`[3](#fn3) |
| Logit | `stats` | `glm(y ~ x, data = mydata, family = "binomial")` |
| Probit | `stats` | `glm(y ~ x, data = mydata, family = binomial(link = "probit"))` |
| K\-Means | `stats` | `kmeans(data, n)`[4](#fn4) |
| PCA | `stats` | `prcomp(data, scale = TRUE, center = TRUE)`[5](#fn5) |
| Multinomial Logit | `mlogit` | Requires several steps of data pre\-processing and formula definition, refer to the [Vignette](https://cran.r-project.org/web/packages/mlogit/vignettes/mlogit.pdf) for more details. |
| Cox PH | `survival` | `coxph(Surv(y_time, y_status) ~ x, data = mydata)`[6](#fn6) |
| Time series | Several, depending on your needs. | Time series in R is a vast subject that would require a very thick book to cover. You can get started with the following series of blog articles, [Tidy time\-series, part 1](http://www.business-science.io/timeseries-analysis/2017/07/02/tidy-timeseries-analysis.html), [Tidy time\-series, part 2](http://www.business-science.io/timeseries-analysis/2017/07/23/tidy-timeseries-analysis-pt-2.html), [Tidy time\-series, part 3](http://www.business-science.io/timeseries-analysis/2017/07/30/tidy-timeseries-analysis-pt-3.html) and [Tidy time\-series, part 4](http://www.business-science.io/timeseries-analysis/2017/08/30/tidy-timeseries-analysis-pt-4.html) |
| Panel data | `plm` | `plm(y ~ x, data = mydata, model = "within|random")` |
| Machine learning | Several, depending on your needs. | R is a very popular programming language for machine learning. [This book](https://www.tmwr.org/) is a must read if you need to do machine learning with R. |
| Nonparametric regression | `np` | Several functions and options available, refer to the [Vignette](https://cran.r-project.org/web/packages/np/vignettes/np.pdf) for more details. |
This table is far from being complete. Should you be a Bayesian, you’d want to look at packages
such as `{rstan}`, which uses `STAN`, an external piece of software that must be installed on your
system. It is also possible to train models using Bayesian inference without the need of external
tools, with the `{bayesm}` package which estimates the usual micro\-econometric models. There really
are a lot of packages available for Bayesian inference, and you can find them all in the [related
CRAN Task View](https://cran.r-project.org/web/views/Bayesian.html).
6\.8 Hyper\-parameters
----------------------
Hyper\-parameters are parameters of the model that cannot be directly learned from the data.
A linear regression does not have any hyper\-parameters, but a random forest for instance has several.
You might have heard of ridge regression, lasso and elasticnet. These are
extensions of linear models that avoid over\-fitting by penalizing *large* models. These
extensions of the linear regression have hyper\-parameters that the practitioner has to tune. There
are several ways one can tune these parameters, for example, by doing a grid\-search, or a random
search over the grid or using more elaborate methods. To introduce hyper\-parameters, let’s get
to know ridge regression, also called Tikhonov regularization.
### 6\.8\.1 Ridge regression
Ridge regression is used when the data you are working with has a lot of explanatory variables,
or when there is a risk that a simple linear regression might overfit to the training data, because,
for example, your explanatory variables are collinear.
If you are training a linear model and then you notice that it generalizes very badly to new,
unseen data, it is very likely that the linear model you trained overfit the data.
In this case, ridge regression might prove useful. The way ridge regression works might seem
counter\-intuitive; it boils down to fitting a *worse* model to the training data, but in return,
this worse model will generalize better to new data.
The closed form solution of the ordinary least squares estimator is defined as:
\\\[
\\widehat{\\beta} \= (X'X)^{\-1}X'Y
\\]
where \\(X\\) is the design matrix (the matrix made up of the explanatory variables) and \\(Y\\) is the
dependent variable. For ridge regression, this closed form solution changes a little bit:
\\\[
\\widehat{\\beta} \= (X'X \+ \\lambda I\_p)^{\-1}X'Y
\\]
where \\(\\lambda \\in \\mathbb{R}\\) is a hyper\-parameter and \\(I\_p\\) is the identity matrix of dimension \\(p\\)
(\\(p\\) is the number of explanatory variables).
The formula above is the closed form solution to the following optimisation program:
\\\[
\\min\_{\\beta} \\; \\sum\_{i\=1}^n \\left(y\_i \- \\sum\_{j\=1}^px\_{ij}\\beta\_j\\right)^2
\\]
such that:
\\\[
\\sum\_{j\=1}^p(\\beta\_j)^2 \< c
\\]
for any strictly positive \\(c\\).
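The link between this constrained program and the \\(\\lambda\\) appearing in the closed form solution
is the usual Lagrangian (penalized) formulation, a standard equivalence:
\\\[
\\widehat{\\beta} \= \\arg\\min\_{\\beta} \\left\\{\\sum\_{i\=1}^n \\left(y\_i \- \\sum\_{j\=1}^px\_{ij}\\beta\_j\\right)^2 \+ \\lambda \\sum\_{j\=1}^p \\beta\_j^2\\right\\}
\\]
Each value of \\(c\\) corresponds to some value of \\(\\lambda\\): the larger \\(\\lambda\\), the smaller
\\(c\\), and the more the coefficients are shrunk towards zero.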
The `glmnet()` function from the `{glmnet}` package can be used for ridge regression, by setting
the `alpha` argument to 0 (setting it to 1 would do LASSO, and setting it to a number between
0 and 1 would do elasticnet). But in order to compare linear regression and ridge regression,
let me first divide the data into a training set and a testing set:
```
index <- 1:nrow(Housing)
set.seed(12345)
train_index <- sample(index, round(0.90*nrow(Housing)), replace = FALSE)
test_index <- setdiff(index, train_index)
train_x <- Housing[train_index, ] %>%
select(-price)
train_y <- Housing[train_index, ] %>%
pull(price)
test_x <- Housing[test_index, ] %>%
select(-price)
test_y <- Housing[test_index, ] %>%
pull(price)
```
I do the train/test split this way, because `glmnet()` requires a design matrix as input, and not
a formula. Design matrices can be created using the `model.matrix()` function:
```
library("glmnet")
train_matrix <- model.matrix(train_y ~ ., data = train_x)
test_matrix <- model.matrix(test_y ~ ., data = test_x)
```
Let’s now run a linear regression, by setting the penalty to 0:
```
model_lm_ridge <- glmnet(y = train_y, x = train_matrix, alpha = 0, lambda = 0)
```
The model above provides the same result as a linear regression, because I set `lambda` to 0\. Let’s
compare the coefficients between the two:
```
coef(model_lm_ridge)
```
```
## 13 x 1 sparse Matrix of class "dgCMatrix"
## s0
## (Intercept) -2667.542863
## (Intercept) .
## lotsize 3.397596
## bedrooms 2081.087654
## bathrms 13294.192823
## stories 6400.454580
## drivewayyes 6530.644895
## recroomyes 5389.856794
## fullbaseyes 4899.099463
## gashwyes 12575.611265
## aircoyes 13078.144146
## garagepl 4155.249461
## prefareayes 10260.781753
```
and now the coefficients of the linear regression (because I provide a design matrix, I have to use
`lm.fit()` instead of `lm()` which requires a formula, not a matrix.)
```
coef(lm.fit(x = train_matrix, y = train_y))
```
```
## (Intercept) lotsize bedrooms bathrms stories drivewayyes
## -2667.052098 3.397629 2081.344118 13293.707725 6400.416730 6529.972544
## recroomyes fullbaseyes gashwyes aircoyes garagepl prefareayes
## 5388.871137 4899.024787 12575.970220 13077.988867 4155.269629 10261.056772
```
as you can see, the coefficients are practically the same (the tiny differences are due to `glmnet()`’s
numerical optimisation). Let’s compute the RMSE for the unpenalized linear
regression:
```
preds_lm <- predict(model_lm_ridge, test_matrix)
rmse_lm <- sqrt(mean((preds_lm - test_y)^2))
```
This stores the RMSE of the unpenalized linear regression on the test set in `rmse_lm` (again, mind
the parentheses: each prediction error is squared *before* averaging).
Let’s now run a ridge regression, with `lambda` equal to 100, and see if the RMSE is smaller:
```
model_ridge <- glmnet(y = train_y, x = train_matrix, alpha = 0, lambda = 100)
```
and let’s compute the RMSE again:
```
preds <- predict(model_ridge, test_matrix)
rmse <- sqrt(mean((preds - test_y)^2))
```
If the penalty helps the model generalize, `rmse` will come out smaller than `rmse_lm`; on this
particular train/test split, the penalized regression indeed achieves a slightly smaller test error.
But which value of `lambda` gives the smallest RMSE? To find out, one must run the model over a grid of
`lambda` values and pick the model with the lowest RMSE. This procedure is available in the `cv.glmnet()`
function, which picks the best value for `lambda`:
```
best_model <- cv.glmnet(train_matrix, train_y)
# lambda that minimises the MSE
best_model$lambda.min
```
```
## [1] 61.42681
```
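It can also help to look at the whole cross\-validation curve rather than just its minimum; plotting
a `cv.glmnet` object shows the mean cross\-validated error over the entire `lambda` grid:
```
plot(best_model)  # MSE against log(lambda); lambda.min and lambda.1se are marked
```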
According to `cv.glmnet()` the best value for `lambda` is 61\.4268056\. In the
next section, we will implement cross validation ourselves, in order to find the hyper\-parameters
of a random forest.
6\.9 Training, validating, and testing models
---------------------------------------------
Cross\-validation is an important procedure which is used to compare models but also to tune the
hyper\-parameters of a model. In this section, we are going to use several packages from the
[`{tidymodels}`](https://github.com/tidymodels) collection of packages, namely
[`{recipes}`](https://tidymodels.github.io/recipes/),
[`{rsample}`](https://tidymodels.github.io/rsample/) and
[`{parsnip}`](https://tidymodels.github.io/parsnip/) to train a random forest the tidy way. I will
also use [`{mlrMBO}`](http://mlrmbo.mlr-org.com/) to tune the hyper\-parameters of the random forest.
### 6\.9\.1 Set up
Let’s load the needed packages:
```
library("tidyverse")
library("recipes")
library("rsample")
library("parsnip")
library("yardstick")
library("brotools")
library("mlbench")
```
Load the data, which is included in the `{mlbench}` package:
```
data("BostonHousing2")
```
I will train a random forest to predict the housing price, which is the `cmedv` column:
```
head(BostonHousing2)
```
```
## town tract lon lat medv cmedv crim zn indus chas nox
## 1 Nahant 2011 -70.9550 42.2550 24.0 24.0 0.00632 18 2.31 0 0.538
## 2 Swampscott 2021 -70.9500 42.2875 21.6 21.6 0.02731 0 7.07 0 0.469
## 3 Swampscott 2022 -70.9360 42.2830 34.7 34.7 0.02729 0 7.07 0 0.469
## 4 Marblehead 2031 -70.9280 42.2930 33.4 33.4 0.03237 0 2.18 0 0.458
## 5 Marblehead 2032 -70.9220 42.2980 36.2 36.2 0.06905 0 2.18 0 0.458
## 6 Marblehead 2033 -70.9165 42.3040 28.7 28.7 0.02985 0 2.18 0 0.458
## rm age dis rad tax ptratio b lstat
## 1 6.575 65.2 4.0900 1 296 15.3 396.90 4.98
## 2 6.421 78.9 4.9671 2 242 17.8 396.90 9.14
## 3 7.185 61.1 4.9671 2 242 17.8 392.83 4.03
## 4 6.998 45.8 6.0622 3 222 18.7 394.63 2.94
## 5 7.147 54.2 6.0622 3 222 18.7 396.90 5.33
## 6 6.430 58.7 6.0622 3 222 18.7 394.12 5.21
```
Only keep relevant columns:
```
boston <- BostonHousing2 %>%
select(-medv, -tract, -lon, -lat) %>%
rename(price = cmedv)
```
I remove `medv` because `cmedv` is its corrected version, and I remove `tract`, `lat` and `lon` because the information contained in the column `town` is enough.
To train and evaluate the model’s performance, I split the data in two.
One data set, called the training set, will be further split into two down below. I won’t
touch the second data set, the test set, until the very end, to finally assess the model’s
performance.
```
train_test_split <- initial_split(boston, prop = 0.9)
housing_train <- training(train_test_split)
housing_test <- testing(train_test_split)
```
`initial_split()`, `training()` and `testing()` are functions from the `{rsample}` package.
I will train a random forest on the training data, but the question is: *which* random forest?
Random forests have several hyper\-parameters, and as explained in the intro these
hyper\-parameters cannot be directly learned from the data, so which ones should we choose? We could
train 6 random forests for instance and compare their performance, but why only 6? Why not 16?
In order to find the right hyper\-parameters, the practitioner can
use values from the literature that seem to have worked well (as is done in macro\-econometrics),
or further split the train set in two, create a grid of hyper\-parameters, train the model
on one part of the data for all values of the grid, and compare the predictions of the models on the
second part of the data. You then stick with the model that performed best, for example, the
model with the lowest RMSE. The thing is, you can’t estimate the true value of the RMSE with only
one split. It’s as if you wanted to estimate the average height of a population by drawing one single
observation from it. You need a few more observations. To approach the true value of the
RMSE for a given set of hyper\-parameters, instead of doing one split, let’s do 30\. We then
compute the average RMSE, which implies training 30 models for each combination of the values of the
hyper\-parameters.
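In other words, for each combination of hyper\-parameters, the quantity we estimate over the 30 splits is the average RMSE:
\\[
\\overline{RMSE} \= \\dfrac{1}{30}\\sum\_{k\=1}^{30} RMSE\_k
\\]
where \\(RMSE\_k\\) is the RMSE computed on the \\(k\\)\-th split.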
First, let’s split the training data again, using the `mc_cv()` function from the `{rsample}` package.
This function implements Monte Carlo cross\-validation:
```
validation_data <- mc_cv(housing_train, prop = 0.9, times = 30)
```
What does `validation_data` look like?
```
validation_data
```
```
## # Monte Carlo cross-validation (0.9/0.1) with 30 resamples
## # A tibble: 30 × 2
## splits id
## <list> <chr>
## 1 <split [409/46]> Resample01
## 2 <split [409/46]> Resample02
## 3 <split [409/46]> Resample03
## 4 <split [409/46]> Resample04
## 5 <split [409/46]> Resample05
## 6 <split [409/46]> Resample06
## 7 <split [409/46]> Resample07
## 8 <split [409/46]> Resample08
## 9 <split [409/46]> Resample09
## 10 <split [409/46]> Resample10
## # … with 20 more rows
```
Let’s look further down:
```
validation_data$splits[[1]]
```
```
## <Analysis/Assess/Total>
## <409/46/455>
```
The first value is the number of rows of the first set, the second that of the second set, and the third
the original number of rows in the training data, before splitting again.
How should we call these two new data sets? The author of `{rsample}`, Max Kuhn, talks about
the *analysis* and the *assessment* sets, and I’m going to use this terminology as well.
Now, in order to continue I need to pre\-process the data. I will do this in three steps.
The first and the second steps are used to center and scale the numeric variables and the third step
converts character and factor variables to dummy variables. This is needed because I will train a
random forest, which cannot handle factor variables directly. Let’s define a recipe to do that,
and start by pre\-processing the testing set. I write a wrapper function around the recipe,
because I will need to apply this recipe to various data sets:
```
simple_recipe <- function(dataset){
recipe(price ~ ., data = dataset) %>%
step_center(all_numeric()) %>%
step_scale(all_numeric()) %>%
step_dummy(all_nominal())
}
```
We have not yet learned about writing functions, and will do so in the next chapter. However, for
now, you only need to know that you can write your own functions, and that these functions can
take any arguments you need. In the case of the above function, which we called `simple_recipe()`,
we only need one argument, a dataset, which we called `dataset`.
Once the recipe is defined, I can use the `prep()` function, which estimates the parameters from
the data which are needed to process the data. For example, for centering, `prep()` estimates
the mean which will then be subtracted from the variables. With `bake()` the estimates are then
applied on the data:
```
testing_rec <- prep(simple_recipe(housing_test), training = housing_test)
test_data <- bake(testing_rec, new_data = housing_test)
```
It is important to split the data before using `prep()` and `bake()`, because if not, you will
use observations from the test set in the `prep()` step, and thus introduce knowledge from the test
set into the training data. This is called data leakage, and must be avoided. This is why it is
necessary to first split the training data into an analysis and an assessment set, and then also
pre\-process these sets separately. However, the `validation_data` object cannot now be used with
`recipe()`, because it is not a dataframe. No worries, I simply need to write a function that extracts
the analysis and assessment sets from the `validation_data` object, applies the pre\-processing, trains
the model, and returns the RMSE. This will be a big function, at the center of the analysis.
But before that, let’s run a simple linear regression, as a benchmark. For the linear regression, I will
not use any CV, so let’s pre\-process the training set:
```
trainlm_rec <- prep(simple_recipe(housing_train), training = housing_train)
trainlm_data <- bake(trainlm_rec, new_data = housing_train)
linreg_model <- lm(price ~ ., data = trainlm_data)
broom::augment(linreg_model, newdata = test_data) %>%
yardstick::rmse(price, .fitted)
```
```
## Warning in predict.lm(x, newdata = newdata, na.action = na.pass, ...):
## prediction from a rank-deficient fit may be misleading
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 0.439
```
`broom::augment()` adds the predictions to the `test_data` in a new column, `.fitted`. I won’t
use this trick with the random forest, because there is no `augment()` method for random forests
from the `{ranger}` package which I’ll use. I’ll add the predictions to the data myself.
Ok, now let’s go back to the random forest and write the big function:
```
my_rf <- function(mtry, trees, split, id){
analysis_set <- analysis(split)
analysis_prep <- prep(simple_recipe(analysis_set), training = analysis_set)
analysis_processed <- bake(analysis_prep, new_data = analysis_set)
model <- rand_forest(mode = "regression", mtry = mtry, trees = trees) %>%
set_engine("ranger", importance = 'impurity') %>%
fit(price ~ ., data = analysis_processed)
assessment_set <- assessment(split)
assessment_prep <- prep(simple_recipe(assessment_set), training = assessment_set)
assessment_processed <- bake(assessment_prep, new_data = assessment_set)
tibble::tibble("id" = id,
"truth" = assessment_processed$price,
"prediction" = unlist(predict(model, new_data = assessment_processed)))
}
```
The `rand_forest()` function is available in the `{parsnip}` package. This package provides a
unified interface to a lot of other machine learning packages. This means that instead of having to
learn the syntax of `ranger()` and `randomForest()` and so on, you can simply use the `rand_forest()`
function and change the `engine` argument to the one you want (`ranger`, `randomForest`, etc.).
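For example, the sketch below shows how the same specification could be sent to the `{randomForest}` package instead (assuming it is installed); only the `set_engine()` call changes:
```
# same model specification, different underlying package
rand_forest(mode = "regression", mtry = 3, trees = 200) %>%
  set_engine("randomForest")
```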
Let’s try this function:
```
results_example <- map2_df(.x = validation_data$splits,
.y = validation_data$id,
~my_rf(mtry = 3, trees = 200, split = .x, id = .y))
```
```
head(results_example)
```
```
## # A tibble: 6 × 3
## id truth prediction
## <chr> <dbl> <dbl>
## 1 Resample01 -0.328 -0.0274
## 2 Resample01 1.06 0.686
## 3 Resample01 1.04 0.726
## 4 Resample01 -0.418 -0.0190
## 5 Resample01 0.909 0.642
## 6 Resample01 0.0926 -0.134
```
I can now compute the RMSE when `mtry` \= 3 and `trees` \= 200:
```
results_example %>%
group_by(id) %>%
yardstick::rmse(truth, prediction) %>%
summarise(mean_rmse = mean(.estimate)) %>%
pull
```
```
## [1] 0.6305034
```
The random forest already has a lower RMSE than the linear regression. The goal now is to lower this
RMSE further by tuning the `mtry` and `trees` hyper\-parameters. For this, I will use the Bayesian optimization
methods implemented in the `{mlrMBO}` package.
### 6\.9\.2 Bayesian hyperparameter optimization
I will re\-use the code from above, and define a function that does everything from pre\-processing
to returning the metric I want to minimize by tuning the hyperparameters, the RMSE:
```
tuning <- function(param, validation_data){
mtry <- param[1]
trees <- param[2]
results <- purrr::map2_df(.x = validation_data$splits,
.y = validation_data$id,
~my_rf(mtry = mtry, trees = trees, split = .x, id = .y))
results %>%
group_by(id) %>%
yardstick::rmse(truth, prediction) %>%
summarise(mean_rmse = mean(.estimate)) %>%
pull
}
```
This is exactly the code from before, wrapped in a function that returns the mean RMSE. Let’s try the function
with the values from before (the result differs slightly from above because training a random forest is random):
```
tuning(c(3, 200), validation_data)
```
```
## [1] 0.6319843
```
I now follow the code that can be found in the [arxiv](https://arxiv.org/abs/1703.03373) paper to
run the optimization. A simpler model, called the surrogate model, is used to look for promising
points and to evaluate the value of the function at these points. This seems somewhat similar
(in spirit) to the *Indirect Inference* method as described in
[Gourieroux, Monfort, Renault](https://www.jstor.org/stable/2285076).
If you don’t really get what follows, no worries, it is not really important as such. The idea
is simply to look for hyper\-parameters in an efficient way, and Bayesian optimisation provides
such an efficient way. However, you could use another method, for example a grid search. This would not
change anything about the general approach. So I will not spend too much time explaining what is
going on below, as you can read the details in the paper cited above as well as the package’s
documentation. The focus here is not on this particular method, but rather showing you how you can
use various packages to solve a data science problem.
Let’s first load the package and create the function to optimize:
```
library("mlrMBO")
```
```
fn <- makeSingleObjectiveFunction(name = "tuning",
fn = tuning,
par.set = makeParamSet(makeIntegerParam("x1", lower = 3, upper = 8),
makeIntegerParam("x2", lower = 100, upper = 500)))
```
This function is based on the function I defined before. The parameters to optimize are also
defined, as are their bounds. I will look for `mtry` between the values of 3 and 8, and `trees`
between 100 and 500\.
We still need to define some other objects before continuing:
```
# Create initial random Latin Hypercube Design of 10 points
library(lhs) # for randomLHS
des <- generateDesign(n = 5L * 2L, getParamSet(fn), fun = randomLHS)
```
Then we choose the surrogate model, a random forest too:
```
# Specify the surrogate model: a random forest, with standard error estimation
surrogate <- makeLearner("regr.ranger", predict.type = "se", keep.inbag = TRUE)
```
Here I define some options:
```
# Set general controls
ctrl <- makeMBOControl()
ctrl <- setMBOControlTermination(ctrl, iters = 10L)
ctrl <- setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI())
```
And this is the optimization part:
```
# Start optimization
result <- mbo(fn, des, surrogate, ctrl, more.args = list("validation_data" = validation_data))
```
```
result
```
```
## Recommended parameters:
## x1=8; x2=314
## Objective: y = 0.484
##
## Optimization path
## 10 + 10 entries in total, displaying last 10 (or less):
## x1 x2 y dob eol error.message exec.time ei error.model
## 11 8 283 0.4855415 1 NA <NA> 7.353 -3.276847e-04 <NA>
## 12 8 284 0.4852047 2 NA <NA> 7.321 -3.283713e-04 <NA>
## 13 8 314 0.4839817 3 NA <NA> 7.703 -3.828517e-04 <NA>
## 14 8 312 0.4841398 4 NA <NA> 7.633 -2.829713e-04 <NA>
## 15 8 318 0.4841066 5 NA <NA> 7.692 -2.668354e-04 <NA>
## 16 8 314 0.4845221 6 NA <NA> 7.574 -1.382333e-04 <NA>
## 17 8 321 0.4843018 7 NA <NA> 7.693 -3.828924e-05 <NA>
## 18 8 318 0.4868457 8 NA <NA> 7.696 -8.692828e-07 <NA>
## 19 8 310 0.4862687 9 NA <NA> 7.594 -1.061185e-07 <NA>
## 20 8 313 0.4878694 10 NA <NA> 7.628 -5.153015e-07 <NA>
## train.time prop.type propose.time se mean
## 11 0.011 infill_ei 0.450 0.0143886864 0.5075765
## 12 0.011 infill_ei 0.427 0.0090265872 0.4971003
## 13 0.012 infill_ei 0.443 0.0062693960 0.4916927
## 14 0.012 infill_ei 0.435 0.0037308971 0.4878950
## 15 0.012 infill_ei 0.737 0.0024446891 0.4860699
## 16 0.013 infill_ei 0.442 0.0012713838 0.4850705
## 17 0.012 infill_ei 0.444 0.0006371109 0.4847248
## 18 0.013 infill_ei 0.467 0.0002106381 0.4844576
## 19 0.014 infill_ei 0.435 0.0002182254 0.4846214
## 20 0.013 infill_ei 0.748 0.0002971160 0.4847383
```
So the recommended parameters are 8 for `mtry` and 314 for `trees`. The
user can access these recommended parameters with `result$x$x1` and `result$x$x2`.
The value of the RMSE is lower than before, and equals 0\.4839817\. It can be accessed with
`result$y`.
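As a small sketch, these values can be stored in their own variables for reuse:
```
best_mtry  <- result$x$x1  # 8
best_trees <- result$x$x2  # 314
best_rmse  <- result$y     # 0.4839817
```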
Let’s now train the random forest on the training data with these values. First, I pre\-process the
training data:
```
training_rec <- prep(simple_recipe(housing_train), training = housing_train)
train_data <- bake(training_rec, new_data = housing_train)
```
Let’s now train our final model and predict the prices:
```
final_model <- rand_forest(mode = "regression", mtry = result$x$x1, trees = result$x$x2) %>%
set_engine("ranger", importance = 'impurity') %>%
fit(price ~ ., data = train_data)
price_predict <- predict(final_model, new_data = select(test_data, -price))
```
Let’s transform the data back and compare the predicted prices to the true ones visually:
```
cbind(price_predict * sd(housing_train$price) + mean(housing_train$price),
housing_test$price)
```
```
## .pred housing_test$price
## 1 16.76938 13.5
## 2 27.59510 30.8
## 3 23.14952 24.7
## 4 21.92390 21.2
## 5 21.35030 20.0
## 6 23.15809 22.9
## 7 23.00947 23.9
## 8 25.74268 26.6
## 9 24.13122 22.6
## 10 34.97671 43.8
## 11 19.30543 18.8
## 12 18.09146 15.7
## 13 18.82922 19.2
## 14 18.63397 13.3
## 15 19.14438 14.0
## 16 17.05549 15.6
## 17 23.79491 27.0
## 18 20.30125 17.4
## 19 22.99200 23.6
## 20 32.77092 33.3
## 21 31.66258 34.6
## 22 28.79583 34.9
## 23 39.02755 50.0
## 24 23.53336 21.7
## 25 24.66551 24.3
## 26 24.91737 24.0
## 27 25.11847 25.1
## 28 24.42518 23.7
## 29 24.59139 23.7
## 30 24.91760 26.2
## 31 38.73875 43.5
## 32 29.71848 35.1
## 33 36.89490 46.0
## 34 24.04041 26.4
## 35 20.91349 20.3
## 36 21.18602 23.1
## 37 22.57069 22.2
## 38 25.21751 23.9
## 39 28.55841 50.0
## 40 14.38216 7.2
## 41 12.76573 8.5
## 42 11.78237 9.5
## 43 13.29279 13.4
## 44 14.95076 16.4
## 45 15.79182 19.1
## 46 18.26510 19.6
## 47 14.84985 13.3
## 48 16.01508 16.7
## 49 24.09930 25.0
## 50 20.75357 21.8
## 51 19.49487 19.7
```
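The multiplication by `sd(housing_train$price)` followed by the addition of `mean(housing_train$price)` undoes the `step_center()` and `step_scale()` applied by the recipe. A sketch of the same inverse transformation written as a helper (`unscale()` is a name I made up):
```
# invert centering and scaling, using the training prices as the reference
unscale <- function(x, reference) x * sd(reference) + mean(reference)
unscale(unlist(price_predict), housing_train$price)
```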
Let’s now compute the RMSE:
```
tibble::tibble("truth" = test_data$price,
"prediction" = unlist(price_predict)) %>%
yardstick::rmse(truth, prediction)
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 0.425
```
As I mentioned above, all the part about looking for hyper\-parameters could be changed to something
else. The general approach though remains what I have described, and can be applied for any models
that have hyper\-parameters.
Chapter 7 Defining your own functions
=====================================
In this section we are going to learn some advanced concepts that are going to make you into a
full\-fledged R programmer. Before this chapter you only used whatever R came with, as well as the
functions contained in packages. We did define some functions ourselves in Chapter 6 already, but
without going into many details. In this chapter, we will learn about building functions ourselves,
and do so in greater detail than what we did before.
7\.1 Control flow
-----------------
Knowing about control flow is essential to build your own functions. Without control flow statements,
such as if\-else statements or loops (or, in the case of pure functional programming languages, recursion),
programming languages would be very limited.
### 7\.1\.1 If\-else
Imagine you want a variable to be equal to a certain value if a condition is met. This is a typical
problem that requires the `if ... else ...` construct. For instance:
```
a <- 4
b <- 5
```
Suppose that if `a > b` then `f` should be equal to 20, else `f` should be equal to 10\. Using `if ... else ...` you can achieve this like so:
```
if (a > b) {
f <- 20
} else {
f <- 10
}
```
Obviously, here `f = 10`. Another way to achieve this is by using the `ifelse()` function:
```
f <- ifelse(a > b, 20, 10)
```
`if...else...` and `ifelse()` might seem interchangeable, but they’re not. `ifelse()` is vectorized, while
`if...else...` is not. Let’s try the following:
```
ifelse(c(1,2,4) > c(3, 1, 0), "yes", "no")
```
```
## [1] "no" "yes" "yes"
```
The result is a vector. Now, let’s see what happens if we use `if...else...` instead of `ifelse()`:
```
if (c(1, 2, 4) > c(3, 1, 0)) print("yes") else print("no")
```
```
> Error in if (c(1, 2, 4) > c(3, 1, 0)) print("yes") else print("no") :
the condition has length > 1
```
This results in an error (in previous R versions, only the first element of the vector would be used, with a warning).
We have already discussed this in Chapter 2, remember? If you want to make sure that such an expression
evaluates to `TRUE`, then you need to use `all()`:
```
ifelse(all(c(1,2,4) > c(3, 1, 0)), "all elements are greater", "not all elements are greater")
```
```
## [1] "not all elements are greater"
```
You may also remember the `any()` function:
```
ifelse(any(c(1,2,4) > c(3, 1, 0)), "at least one element is greater", "no element greater")
```
```
## [1] "at least one element is greater"
```
These are the basics. But sometimes, you might need to test for more complex conditions, which can
lead to using nested `if...else...` constructs. These, however, can get messy:
```
if (10 %% 3 == 0) {
print("10 is divisible by 3")
} else if (10 %% 2 == 0) {
print("10 is divisible by 2")
}
```
```
## [1] "10 is divisible by 2"
```
Since 10 is divisible by 2 but not by 3, it is the second sentence that gets printed. The
`%%` operator is the modulo operator, which gives the remainder of the division of 10 by 2\. In such
cases, it is easier to use `dplyr::case_when()`:
```
case_when(10 %% 3 == 0 ~ "10 is divisible by 3",
10 %% 2 == 0 ~ "10 is divisible by 2")
```
```
## [1] "10 is divisible by 2"
```
We have already encountered this function in Chapter 4, inside a `dplyr::mutate()` call to create a new column.
Let’s now discuss loops.
### 7\.1\.2 For loops
For loops make it possible to repeat a set of instructions `i` times. For example, try the following:
```
for (i in 1:10){
print("hello")
}
```
```
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
```
It is also possible to do computations using for loops. Let’s compute the sum of the first
100 integers:
```
result <- 0
for (i in 1:100){
result <- result + i
}
print(result)
```
```
## [1] 5050
```
`result` is equal to 5050, the expected result. What happened in that loop? First, we defined a
variable called `result` and set it to 0\. Then, when the loop starts, `i` equals 1, so we add
`i` to `result`, which gives 1\. Then, `i` equals 2, and again, we add `i` to `result`. But this time,
`result` equals 1 and `i` equals 2, so now `result` equals 3, and we repeat this until `i`
equals 100\. If you know a programming language like C, this probably looks familiar. However, R is
not C, and you should, if possible, avoid writing code that looks like this. You should always
ask yourself the following questions:
* Is there an inbuilt function to achieve what I need? In this case we have `sum()`, so we could use `sum(seq(1, 100))`.
* Is there a way to use matrix algebra? This can sometimes make things easier, but it depends how comfortable
you are with matrix algebra. This would be the solution with matrix algebra: `rep(1, 100) %*% seq(1, 100)`.
* Is there a way to use building blocks that are already available? For instance, suppose that `sum()`
would not be a function available in R. Another way to solve this issue would be to use the following
building blocks: `+`, which computes the sum of two numbers and `Reduce()`, which *reduces* a list
of elements using an operator. Sounds complicated? Let’s see how `Reduce()` works. First, let me show you how
I combine these two functions to achieve the same result as when using `sum()`:
```
Reduce(`+`, seq(1, 100))
```
```
## [1] 5050
```
We will see how `Reduce()` works in greater detail in the next chapter, but what happened was something like this:
```
Reduce(`+`, seq(1, 100)) =
1 + Reduce(`+`, seq(2, 100)) =
1 + 2 + Reduce(`+`, seq(3, 100)) =
1 + 2 + 3 + Reduce(`+`, seq(4, 100)) =
....
```
If you ask yourself these questions, it turns out that you only rarely actually need to write loops, but loops are
still important, because sometimes there simply isn’t an alternative. Also, there are other situations where loops
are also important, so I refer you to the following [section](http://adv-r.had.co.nz/Functionals.html#functionals-not)
of Hadley Wickham’s *Advanced R* for an in\-depth discussion on situations where loops make more
sense than using functions such as `Reduce()`.
### 7\.1\.3 While loops
While loops are very similar to for loops. The instructions inside a while loop are repeated while a
certain condition holds true. Let’s consider the sum of the first 100 integers again:
```
result <- 0
i <- 1
while (i<=100){
result = result + i
i = i + 1
}
print(result)
```
```
## [1] 5050
```
Here, we first set `result` to 0 and `i` to 1\. Then, while `i` is less than, or equal to, 100, we add `i`
to `result`. Notice that there is one more line than in the for loop version of this code: we need
to increment the value of `i` at each iteration, if not, `i` would stay equal to 1, and the
condition would always be fulfilled, and the loop would run forever (not really, only until your
computer runs out of memory, or until the heat death of the universe, whichever comes first).
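As an aside, the same computation can also be written with `repeat` and `break`, R’s third looping construct; here is a small sketch, where the exit condition is checked explicitly inside the loop:
```
result <- 0
i <- 1
repeat {
  result <- result + i
  i <- i + 1
  if (i > 100) break # exit once the first 100 integers have been summed
}
print(result)
```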
Now that we know how to write loops, and know about `if...else...` constructs, we have (almost) all
the ingredients to write our own functions.
7\.2 Writing your own functions
-------------------------------
As you have seen by now, R includes a very large amount of in\-built functions, but also many
more functions are available in packages. However, there will be a lot of situations where you will
need to write your own. In this section we are going to learn how to write our own functions.
### 7\.2\.1 Declaring functions in R
Suppose you want to create the following function: \\(f(x) \= \\dfrac{1}{\\sqrt{x}}\\).
Writing this in R is quite simple:
```
my_function <- function(x){
1/sqrt(x)
}
```
The argument of the function, `x`, gets passed to the `function()` function and the *body* of
the function (more on that in the next Chapter) contains the function definition. Of course,
you could define functions that use more than one input:
```
my_function <- function(x, y){
1/sqrt(x + y)
}
```
or inputs with names longer than one character:
```
my_function <- function(argument1, argument2){
1/sqrt(argument1 + argument2)
}
```
Functions written by the user get called just the same way as functions included in R:
```
my_function(1, 10)
```
```
## [1] 0.3015113
```
It is also possible to provide default values to the function’s arguments, which are values that are used
if the user omits them:
```
my_function <- function(argument1, argument2 = 10){
1/sqrt(argument1 + argument2)
}
```
```
my_function(1)
```
```
## [1] 0.3015113
```
This is especially useful for functions with many arguments. Consider also the following example,
where the function has a default method:
```
my_function <- function(argument1, argument2, method = "foo"){
x <- argument1 + argument2
if(method == "foo"){
1/sqrt(x)
} else if (method == "bar"){
"this is a string"
}
}
my_function(10, 11)
```
```
## [1] 0.2182179
```
```
my_function(10, 11, "bar")
```
```
## [1] "this is a string"
```
As you see, depending on the “method” chosen, the returned result is either a numeric, or a string.
What happens if the user provides a “method” that is neither “foo” nor “bar”?
```
my_function(10, 11, "spam")
```
As you can see, nothing happens: no branch of the `if...else if...` matches, so the function invisibly returns `NULL`. It is possible to add safeguards to your function to avoid such
situations:
```
my_function <- function(argument1, argument2, method = "foo"){
if(!(method %in% c("foo", "bar"))){
return("Method must be either 'foo' or 'bar'")
}
x <- argument1 + argument2
if(method == "foo"){
1/sqrt(x)
} else if (method == "bar"){
"this is a string"
}
}
my_function(10, 11)
```
```
## [1] 0.2182179
```
```
my_function(10, 11, "bar")
```
```
## [1] "this is a string"
```
```
my_function(10, 11, "foobar")
```
```
## [1] "Method must be either 'foo' or 'bar'"
```
Notice that I have used `return()` inside my first `if` statement. This is to immediately stop
evaluation of the function and return a value. If I had omitted it, evaluation would have
continued, as it is always the last expression that gets evaluated. Remove `return()` and run the
function again, and see what happens. Later, we are going to learn how to add better safeguards to
your functions and to avoid runtime errors.
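As a side note, base R’s `match.arg()` performs exactly this kind of check and raises an informative error by itself; here is a minimal sketch of the same function using it (this is just an alternative, not how we proceed in the rest of the book):
```
my_function <- function(argument1, argument2, method = c("foo", "bar")){
  # match.arg() errors if method is not one of the values listed above
  method <- match.arg(method)
  x <- argument1 + argument2
  if(method == "foo"){
    1/sqrt(x)
  } else {
    "this is a string"
  }
}
```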
While in general, it is a good idea to add comments to your functions to explain what they do, I
would avoid adding comments to functions that do things that are very obvious, such as with this
one. Function names should be of the form: `function_name()`. Always give your functions very
explicit names! In mathematics it is standard to give functions just one letter as a name, but I
would advise against doing that in your code. Functions that you write are not special in any way;
this means that R will treat them the same way, and they will work in conjunction with any other
function just as if they were built into R.
They have one limitation though (which is shared with R’s native functions): just like in math,
they can only return one value. However, sometimes, you may need to return more than one value.
To be able to do this, you can put your values in a container such as a list or a vector, and return that. For example:
```
average_and_sd <- function(x){
c(mean(x), sd(x))
}
average_and_sd(c(1, 3, 8, 9, 10, 12))
```
```
## [1] 7.166667 4.262237
```
You’re still returning a single object, but it’s a vector. You can also return a named list:
```
average_and_sd <- function(x){
list("mean_x" = mean(x), "sd_x" = sd(x))
}
average_and_sd(c(1, 3, 8, 9, 10, 12))
```
```
## $mean_x
## [1] 7.166667
##
## $sd_x
## [1] 4.262237
```
As described before, you can use `return()` at the end of your functions:
```
average_and_sd <- function(x){
result <- c(mean(x), sd(x))
return(result)
}
average_and_sd(c(1, 3, 8, 9, 10, 12))
```
```
## [1] 7.166667 4.262237
```
But this is only needed if you need to return a value early:
```
average_and_sd <- function(x){
if(any(is.na(x))){
return(NA)
} else {
c(mean(x), sd(x))
}
}
average_and_sd(c(1, 3, 8, 9, 10, 12))
```
```
## [1] 7.166667 4.262237
```
```
average_and_sd(c(1, 3, NA, 9, 10, 12))
```
```
## [1] NA
```
If you need to use a function from a package inside your function use `::`:
```
my_sum <- function(a_vector){
purrr::reduce(a_vector, `+`)
}
```
However, if you need to use more than one function, this can become tedious. A quick and dirty
way of doing that is to use `library(package_name)` inside the function:
```
my_sum <- function(a_vector){
library(purrr)
reduce(a_vector, `+`)
}
```
Loading the library inside the function has the advantage that you will be sure that the package
upon which your function depends will be loaded. If the package is already loaded, it will not be
loaded again, thus not impact performance, but if you forgot to load it at the beginning of your
script, then, no worries, your function will load it the first time you use it! However, you should
avoid doing this, because the resulting function is now not pure. It has a side effect, which is
loading a library. This could result in problems, especially if several functions load several
different packages that have functions with the same name. Depending on which function runs first,
a function with the same name but coming from a different package will be available in the global
environment. The very best way would be to write your own package and declare the packages upon
which your functions depend as dependencies. This is something we are going to explore in Chapter
9\.
You can put a lot of instructions inside a function, such as loops. Let’s create a function that
returns Fibonacci numbers.
### 7\.2\.2 Fibonacci numbers
The Fibonacci sequence is the following:
\\\[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...\\]
Each subsequent number is the sum of the two preceding ones. In R, it is possible to define a function that returns the \\(n^{th}\\) Fibonacci number:
```
my_fibo <- function(n){
a <- 0
b <- 1
for (i in 1:n){
temp <- b
b <- a
a <- a + temp
}
a
}
```
Inside the loop, we defined a variable called `temp`. Defining temporary variables is usually very
useful. Let’s try to understand what happens inside this loop:
* First, we assign the value 0 to variable `a` and value 1 to variable `b`.
* We start a loop, that goes from 1 to `n`.
* We assign the value inside of `b` to a temporary variable, called `temp`.
* `b` becomes `a`.
* We assign the sum of `a` and `temp` to `a`.
* When the loop is finished, we return `a`.
What happens if we want the 3rd Fibonacci number? At `i = 1` we have first `a = 0` and `b = 1`,
then `temp = 1`, `b = 0` and `a = 0 + 1`. Then `i = 2`. Now `b = 0` and `temp = 0`. The previous
result, `a = 0 + 1` is now assigned to `b`, so `b = 1`. Then, `a = 1 + 0`. Finally, `i = 3`. `temp = 1` (because `b = 1`), the previous result `a = 1` is assigned to `b` and finally, `a = 1 + 1`. So
the third Fibonacci number equals 2\. Reading this might be a bit confusing; I strongly advise you
to run the algorithm on a sheet of paper, step by step.
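For instance, the tenth Fibonacci number (easy to verify by hand against the sequence written out above):
```
my_fibo(10)
```
```
## [1] 55
```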
The above algorithm is called an iterative algorithm, because it uses a loop to compute the result.
Let’s look at another way to think about the problem, with a so\-called recursive function:
```
fibo_recur <- function(n){
if (n == 0 || n == 1){
return(n)
} else {
fibo_recur(n-1) + fibo_recur(n-2)
}
}
```
This algorithm should be easier to understand: if `n = 0` or `n = 1` the function should return `n`
(0 or 1\). If `n` is strictly bigger than `1`, `fibo_recur()` should return the sum of
`fibo_recur(n-1)` and `fibo_recur(n-2)`. This version of the function is very much the same as the
mathematical definition of the fibonacci sequence. So why not use only recursive algorithms
then? Try to run the following:
```
system.time(my_fibo(30))
```
```
## user system elapsed
## 0.007 0.000 0.007
```
The result should be printed very fast (the `system.time()` function returns the time that it took
to execute `my_fibo(30)`). Let’s try with the recursive version:
```
system.time(fibo_recur(30))
```
```
## user system elapsed
## 1.482 0.080 1.574
```
It takes much longer to execute! Recursive algorithms are very CPU demanding, so if speed is
critical, it’s best to avoid recursive algorithms. Also, in `fibo_recur()` try to remove this line:
`if (n == 0 || n == 1)` and try to run `fibo_recur(5)` and see what happens. You should
get an error: this is because for recursive algorithms you need a stopping condition, or else,
it would run forever. This is not the case for iterative algorithms, because the stopping
condition is the last step of the loop.
So as you can see, for recursive relationships, for or while loops are the way to go in R, whether
you’re writing these loops inside functions or not.
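As an aside, one way to keep the recursive formulation while avoiding the repeated work is memoisation, meaning caching results that have already been computed. Here is a sketch (`fibo_memo()` is a name I made up; `Recall()` is base R’s way for a function to call itself):
```
fibo_memo <- local({
  cache <- c(0, 1) # cache[n + 1] stores the n-th Fibonacci number
  function(n){
    if (n < length(cache)) return(cache[n + 1])
    result <- Recall(n - 1) + Recall(n - 2)
    cache[n + 1] <<- result # store the result for later calls
    result
  }
})
```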
7\.3 Exercises
--------------
### Exercise 1
In this exercise, you will write a function to compute the sum of the n first integers. Combine the
algorithm we saw in section about while loops and what you learned about functions
in this section.
### Exercise 2
Write a function called `my_fact()` that computes the factorial of a number `n`. Do it using a
loop, using a recursive function, and then using a functional.
### Exercise 3
Write a function to find the roots of quadratic functions. Your function should take 3 arguments,
`a`, `b` and `c` and return the two roots. Only consider the case where there are two real roots
(delta \> 0\).
7\.4 Functions that take functions as arguments: writing your own higher\-order functions
-----------------------------------------------------------------------------------------
Functions that take functions as arguments are very powerful and useful tools.
Two very important functions, that we will discuss in chapter 8 are `purrr::map()`
and `purrr::reduce()`. But you can also write your own! A very simple example
would be the following:
```
my_func <- function(x, func){
func(x)
}
```
`my_func()` is a very simple function that takes `x` and `func()` as arguments and that simply
executes `func(x)`. This might not seem very useful (after all, you could simply call `func(x)` directly!) but
this is just for illustration purposes; in practice, your functions would be more useful than that!
Let’s try to use `my_func()`:
```
my_func(c(1, 8, 1, 0, 8), mean)
```
```
## [1] 3.6
```
As expected, this returns the mean of the given vector. But now suppose the following:
```
my_func(c(1, 8, 1, NA, 8), mean)
```
```
## [1] NA
```
Because one element of the list is `NA`, the whole mean is `NA`. `mean()` has a `na.rm` argument
that you can set to `TRUE` to ignore the `NA`s in the vector. However, here, there is no way to
provide this argument to the function `mean()`! Let’s see what happens when we try to:
```
my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE)
```
```
Error in my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE) :
unused argument (na.rm = TRUE)
```
So what you could do is pass the value `TRUE` to the `na.rm` argument of `mean()` from your own
function:
```
my_func <- function(x, func, remove_na){
func(x, na.rm = remove_na)
}
my_func(c(1, 8, 1, NA, 8), mean, remove_na = TRUE)
```
```
## [1] 4.5
```
This is one solution, but `mean()` also has another argument called `trim`. What if some other
user needs this argument? Should you also add it to your function? Surely there’s a way to avoid
this problem? Yes, there is, and it is by using the *dots*. The `...` simply means “any other
arguments as needed”, and it’s very easy to use:
```
my_func <- function(x, func, ...){
func(x, ...)
}
my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE)
```
```
## [1] 4.5
```
or, now, if you need the `trim` argument:
```
my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE, trim = 0.1)
```
```
## [1] 4.5
```
The `...` are very useful when writing higher\-order functions such as `my_func()`, because it allows
you to pass arguments *down* to the underlying functions.
7\.5 Functions that return functions
------------------------------------
The example from before, `my_func()`, took three arguments: some `x`, a function `func`, and `...` (dots). `my_func()`
was a kind of wrapper that evaluated `func` on its arguments `x` and `...`. But sometimes this is not quite what you
need or want. It is sometimes useful to write a function that returns a modified function. This type of function
is called a function factory, as it *builds* functions. For instance, suppose that we want to time how long functions
take to run. An idea would be to proceed like this:
```
tic <- Sys.time()
very_slow_function(x)
toc <- Sys.time()
running_time <- toc - tic
```
but if you want to time several functions, this gets very tedious. It would be much easier if functions could
time *themselves*. We could achieve this by writing a wrapper, like this:
```
timed_very_slow_function <- function(...){
tic <- Sys.time()
result <- very_slow_function(...) # forward the arguments instead of a hard-coded x
toc <- Sys.time()
running_time <- toc - tic
list("result" = result,
"running_time" = running_time)
}
```
The problem here is that we have to change each function we need to time. But thanks to the concept of function
factories, we can write a function that does this for us:
```
time_f <- function(.f, ...){
function(...){
tic <- Sys.time()
result <- .f(...)
toc <- Sys.time()
running_time <- toc - tic
list("result" = result,
"running_time" = running_time)
}
}
```
`time_f()` is a function that returns a function, a function factory. Calling it on a function returns, as expected,
a function:
```
t_mean <- time_f(mean)
t_mean
```
```
## function(...){
##
## tic <- Sys.time()
## result <- .f(...)
## toc <- Sys.time()
##
## running_time <- toc - tic
##
## list("result" = result,
## "running_time" = running_time)
##
## }
## <environment: 0x562c5699a6b8>
```
This function can now be used like any other function:
```
output <- t_mean(seq(-500000, 500000))
```
`output` is a list of two elements, the first being simply the result of `mean(seq(-500000, 500000))`, and the other
being the running time.
This approach is super flexible. For instance, imagine that there is an `NA` in the vector. This would result in
the mean of this vector being `NA`:
```
t_mean(c(NA, seq(-500000, 500000)))
```
```
## $result
## [1] NA
##
## $running_time
## Time difference of 0.006885529 secs
```
But because we use the `...` in the definition of `time_f()`, we can now simply pass `mean()`’s `na.rm` option down to it:
```
t_mean(c(NA, seq(-500000, 500000)), na.rm = TRUE)
```
```
## $result
## [1] 0
##
## $running_time
## Time difference of 0.01394773 secs
```
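Function factories are useful beyond timing. As another illustration (a classic example, not part of the
`time_f()` discussion), here is a minimal factory that builds power functions:
```
power <- function(exp){
  # the returned function remembers the exp it was created with
  function(x){
    x^exp
  }
}

square <- power(2)
cube <- power(3)
square(4)
```
```
## [1] 16
```
The functions built by `power()` keep a reference to the environment in which they were created, which is
how `square()` remembers that `exp` is 2.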
7\.6 Functions that take columns of data as arguments
-----------------------------------------------------
### 7\.6\.1 The `enquo() - !!()` approach
In many situations, you will want to write functions that look similar to this:
```
my_function(my_data, one_column_inside_data)
```
Such a function would be useful in situations where you have to apply the same set of operations
to columns of different data frames. For example, if you need to create tables of descriptive
statistics or graphs periodically, it might be very convenient to put these operations inside a
function and then call the function whenever you need it, on the fresh batch of data.
However, if you try to write something like that, something that might seem unexpected at first
will happen:
```
data(mtcars)
simple_function <- function(dataset, col_name){
dataset %>%
group_by(col_name) %>%
summarise(mean_speed = mean(speed))
}
simple_function(cars, "dist")
```
```
Error: unknown variable to group by : col_name
```
The variable `col_name` is passed to `simple_function()` as a string, but `group_by()` requires a
bare variable name. So why not try to convert `col_name` to a name?
```
simple_function <- function(dataset, col_name){
col_name <- as.name(col_name)
dataset %>%
group_by(col_name) %>%
summarise(mean_speed = mean(speed))
}
simple_function(cars, "dist")
```
```
Error: unknown variable to group by : col_name
```
This is because R is literally looking for a variable called `dist` somewhere in the global
environment, and not as a column of the data. R does not understand that you are referring to the
column `dist` that is inside the dataset. So how can we make R understand what you mean?
To be able to do that, we need to use a framework that was introduced in the `{tidyverse}`,
called *tidy evaluation*. This framework is implemented in the `{rlang}` package.
`{rlang}` is quite a technical package, so I will spare you the details. But you should at
the very least take a look at the following documents
[here](http://dplyr.tidyverse.org/articles/programming.html) and
[here](https://rlang.r-lib.org/reference/topic-data-mask.html). The
discussion can get complicated, but you don’t need to know everything about `{rlang}`.
As you will see, knowing some of the capabilities `{rlang}` provides can be incredibly useful.
Take a look at the code below:
```
simple_function <- function(dataset, col_name){
col_name <- enquo(col_name)
dataset %>%
group_by(!!col_name) %>%
summarise(mean_mpg = mean(mpg))
}
simple_function(mtcars, cyl)
```
```
## # A tibble: 3 × 2
## cyl mean_mpg
## <dbl> <dbl>
## 1 4 26.7
## 2 6 19.7
## 3 8 15.1
```
As you can see, our previous idea of using `as.name()` was not very far from
the solution. The solution, with `{rlang}`, consists in using `enquo()`, which (for our purposes)
does something similar to `as.name()`. Now that `col_name` is (as R programmers say) quoted, or
*defused*, we need to tell `group_by()` to evaluate the input as is. This is done with `!!()`,
called the [injection operator](https://rlang.r-lib.org/reference/injection-operator.html), which
is another `{rlang}` function. I say it again: don’t worry if you don’t understand everything. Just
remember to use `enquo()` on your column names and then `!!()` inside the `{dplyr}` function you
want to use.
Let’s see some other examples:
```
simple_function <- function(dataset, col_name, value){
col_name <- enquo(col_name)
dataset %>%
filter((!!col_name) == value) %>%
summarise(mean_cyl = mean(cyl))
}
simple_function(mtcars, am, 1)
```
```
## mean_cyl
## 1 5.076923
```
Notice that I’ve written:
```
filter((!!col_name) == value)
```
and not:
```
filter(!!col_name == value)
```
I have enclosed `!!col_name` inside parentheses. This is because operators such as `==` have
precedence over `!!`, so you have to be explicit. Also, notice that I didn’t have to quote `1`.
This is because it’s a *standard* variable, not a column inside the dataset. Let’s make this function
a bit more general. I hard\-coded the variable `cyl` inside the body of the function, but maybe you’d
like the mean of another variable?
```
simple_function <- function(dataset, filter_col, mean_col, value){
filter_col <- enquo(filter_col)
mean_col <- enquo(mean_col)
dataset %>%
filter((!!filter_col) == value) %>%
summarise(mean((!!mean_col)))
}
simple_function(mtcars, am, cyl, 1)
```
```
## mean(cyl)
## 1 5.076923
```
Notice that I had to quote `mean_col` too.
Using the `...` that we discovered in the previous section, we can pass more than one column:
```
simple_function <- function(dataset, ...){
col_vars <- quos(...)
dataset %>%
summarise_at(vars(!!!col_vars), funs(mean, sd))
}
```
Because these *dots* contain more than one variable, you have to use `quos()` instead of `enquo()`.
This will put the arguments provided via the dots in a list. Then, because we have a list of
columns, we have to use `summarise_at()`, which you should know if you did the exercises of
Chapter 4\. So if you didn’t do them, go back to them and finish them first. Doing the exercises will
also teach you what `vars()` and `funs()` are. The last thing you have to pay attention to is to
use `!!!()` if you used `quos()`. So 3 `!` instead of only 2\. This allows you to then do things
like this:
```
simple_function(mtcars, am, cyl, mpg)
```
```
## Warning: `funs()` was deprecated in dplyr 0.8.0.
## Please use a list of either functions or lambdas:
##
## # Simple named list:
## list(mean = mean, median = median)
##
## # Auto named with `tibble::lst()`:
## tibble::lst(mean, median)
##
## # Using lambdas
## list(~ mean(., trim = .2), ~ median(., na.rm = TRUE))
```
```
## am_mean cyl_mean mpg_mean am_sd cyl_sd mpg_sd
## 1 0.40625 6.1875 20.09062 0.4989909 1.785922 6.026948
```
Using `...` with `!!!()` allows you to write very flexible functions.
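As the warning above shows, `funs()` is deprecated in recent versions of `{dplyr}`. A hedged sketch of the
same computation using `across()` (available since `{dplyr}` 1\.0\.0\) could look like this; note that this
variant takes the column names as a character vector, which `all_of()` turns into a selection:
```
simple_function <- function(dataset, cols){
  dataset %>%
    # across() replaces summarise_at() + funs(): a selection of
    # columns and a named list of functions to apply to each
    summarise(across(all_of(cols), list(mean = mean, sd = sd)))
}

simple_function(mtcars, c("am", "cyl", "mpg"))
```
The result contains the same `am_mean`, `am_sd`, etc. columns as before, without any deprecation warning.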
If you need to be even more general, you can also provide the summary functions as arguments of
your function, but you have to rewrite your function a little bit:
```
simple_function <- function(dataset, cols, funcs){
dataset %>%
summarise_at(vars(!!!cols), funs(!!!funcs))
}
```
You might be wondering where the `quos()` went. Well, because we are now passing two lists, a list of
columns that we have to quote, and a list of functions that we also have to quote, we need to use `quos()`
when calling the function:
```
simple_function(mtcars, quos(am, cyl, mpg), quos(mean, sd, sum))
```
```
## am_mean cyl_mean mpg_mean am_sd cyl_sd mpg_sd am_sum cyl_sum mpg_sum
## 1 0.40625 6.1875 20.09062 0.4989909 1.785922 6.026948 13 198 642.9
```
This works, but I don’t think you’ll need that much flexibility; either the columns
are variables, or the functions, but rarely both at the same time.
To conclude this section, I should also mention `as_label()`, which allows you to recover the
name of a variable as a string, for instance if you want to call the resulting column `mean_mpg` when you
compute the mean of the `mpg` column:
```
simple_function <- function(dataset, filter_col, mean_col, value){
filter_col <- enquo(filter_col)
mean_col <- enquo(mean_col)
mean_name <- paste0("mean_", as_label(mean_col))
dataset %>%
filter((!!filter_col) == value) %>%
summarise(!!(mean_name) := mean((!!mean_col)))
}
```
Pay attention to the `:=` operator in the last line. This is needed because the name of the new column
is itself computed and injected with `!!`; the usual `=` would not work here.
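As a quick check, calling this version should return a single column whose name was built with
`as_label()` (the value in the comment is indicative):
```
simple_function(mtcars, am, mpg, 1)
# a one-row data frame with a single column named mean_mpg,
# the mean mpg of the cars with am == 1 (roughly 24.4)
```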
### 7\.6\.2 Curly Curly, a simplified approach to `enquo()` and `!!()`
The previous section might have been a bit difficult to grasp, but there is a simplified way of doing it,
which consists in using `{{}}`, introduced in `{rlang}` version 0\.4\.0\.
The suggested pronunciation of `{{}}` is *curly\-curly*, but there is no
[consensus yet](https://twitter.com/JonTheGeek/status/1144815369766547456).
Let’s suppose that I need to write a function that takes a data frame, as well as a column from
this data frame as arguments, just like before:
```
how_many_na <- function(dataframe, column_name){
dataframe %>%
filter(is.na(column_name)) %>%
count()
}
```
Let’s try this function out on the `starwars` data:
```
data(starwars)
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
As you can see, there are missing values in the `hair_color` column. Let’s try to count how many
missing values are in this column:
```
how_many_na(starwars, hair_color)
```
```
Error: object 'hair_color' not found
```
Just as expected, this does not work. The issue is that the column is inside the dataframe,
but when calling the function with `hair_color` as the second argument, R is looking in the global
environment for a variable called `hair_color`, which does not exist. What about trying with `"hair_color"`?
```
how_many_na(starwars, "hair_color")
```
```
## # A tibble: 1 × 1
## n
## <int>
## 1 0
```
Now we get a result, but the wrong one! `is.na("hair_color")` tests whether the string `"hair_color"` is `NA`,
which is never the case, so `filter()` keeps zero rows.
One way to solve this issue is to not use the `filter()` function, and instead rely on base R:
```
how_many_na_base <- function(dataframe, column_name){
na_index <- is.na(dataframe[, column_name])
nrow(dataframe[na_index, column_name])
}
how_many_na_base(starwars, "hair_color")
```
```
## [1] 5
```
This works, but avoiding the `{tidyverse}` entirely is not always an option. For instance,
the next function, which uses a grouping variable, would be difficult to implement without the
`{tidyverse}`:
```
summarise_groups <- function(dataframe, grouping_var, column_name){
dataframe %>%
group_by(grouping_var) %>%
summarise(mean(column_name, na.rm = TRUE))
}
```
Calling this function results in the following error message, as expected:
```
Error: Column `grouping_var` is unknown
```
In the previous section, we solved the issue like so:
```
summarise_groups <- function(dataframe, grouping_var, column_name){
grouping_var <- enquo(grouping_var)
column_name <- enquo(column_name)
mean_name <- paste0("mean_", as_label(column_name))
dataframe %>%
group_by(!!grouping_var) %>%
summarise(!!(mean_name) := mean(!!column_name, na.rm = TRUE))
}
```
The core of the function remained very similar to the version from before, but now one has to
use the `enquo()`\-`!!` syntax.
Now this can be simplified using the new `{{}}` syntax:
```
summarise_groups <- function(dataframe, grouping_var, column_name){
dataframe %>%
group_by({{grouping_var}}) %>%
summarise({{column_name}} := mean({{column_name}}, na.rm = TRUE))
}
```
Much easier and cleaner! You still have to use the `:=` operator instead of `=` for the column name,
however, and if you want to modify the column names, for instance in this
case to return `"mean_height"` instead of `height`, you have to keep using the `enquo()`\-`!!` syntax.
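The curly\-curly syntax also fixes the `how_many_na()` function from the beginning of this section. Here is
a sketch, identical to the original except for the added `{{}}`:
```
how_many_na <- function(dataframe, column_name){
  dataframe %>%
    filter(is.na({{column_name}})) %>%
    count()
}

how_many_na(starwars, hair_color)
```
This now returns `n = 5`, matching the base R version from above.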
7\.7 Functions that use loops
-----------------------------
It is entirely possible to put a loop inside a function. For example, consider the following
function that returns the square root of a number using Newton’s algorithm:
```
sqrt_newton <- function(a, init = 1, eps = 0.01){
stopifnot(a >= 0)
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
}
init
}
```
This function contains a while loop inside its body. Let’s see if it works:
```
sqrt_newton(16)
```
```
## [1] 4.000001
```
In the definition of the function, I wrote `init = 1` and `eps = 0.01`, which means that these
arguments can be omitted and will take the provided values (1 and 0\.01\) as defaults. You can then use
this function as any other, for example with `map()`:
```
map(c(16, 7, 8, 9, 12), sqrt_newton)
```
```
## [[1]]
## [1] 4.000001
##
## [[2]]
## [1] 2.645767
##
## [[3]]
## [1] 2.828469
##
## [[4]]
## [1] 3.000092
##
## [[5]]
## [1] 3.464616
```
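Because `map()` passes any additional arguments on to the function it applies, we can also override the
defaults, for instance to request a tighter tolerance:
```
map(c(16, 9), sqrt_newton, eps = 1e-10)
```
The results are now essentially indistinguishable from the true square roots, 4 and 3\.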
This is what I meant before with “your functions are nothing special”. Once the function is
defined, you can use it like any other base R function.
Notice the use of `stopifnot()` inside the body of the function. This is a way to raise an error
in case a condition is not fulfilled.
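For instance, calling the function with a negative number triggers the safeguard:
```
sqrt_newton(-4)
```
```
Error in sqrt_newton(-4) : a >= 0 is not TRUE
```
We are going to learn more about this type of safeguard in the next chapter.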
7\.8 Anonymous functions
------------------------
As the name implies, anonymous functions are functions that do not have a name. These are useful inside
functions that have functions as arguments, such as `purrr::map()` or `purrr::reduce()`:
```
map(c(1,2,3,4), function(x){1/sqrt(x)})
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 0.7071068
##
## [[3]]
## [1] 0.5773503
##
## [[4]]
## [1] 0.5
```
These anonymous functions get defined in a very similar way to regular functions; you just skip the
name and that’s it. `{tidyverse}` functions also support formulas, which get converted to anonymous functions:
```
map(c(1,2,3,4), ~{1/sqrt(.)})
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 0.7071068
##
## [[3]]
## [1] 0.5773503
##
## [[4]]
## [1] 0.5
```
Using a formula instead of an anonymous function is less verbose; you use `~` instead of `function(x)`
and a single dot `.` instead of `x`. What if you need an anonymous function that requires more than
one argument? This is not a problem:
```
map2(c(1, 2, 3, 4, 5), c(9, 8, 7, 6, 5), function(x, y){(x**2)/y})
```
```
## [[1]]
## [1] 0.1111111
##
## [[2]]
## [1] 0.5
##
## [[3]]
## [1] 1.285714
##
## [[4]]
## [1] 2.666667
##
## [[5]]
## [1] 5
```
or, using a formula:
```
map2(c(1, 2, 3, 4, 5), c(9, 8, 7, 6, 5), ~{(.x**2)/.y})
```
```
## [[1]]
## [1] 0.1111111
##
## [[2]]
## [1] 0.5
##
## [[3]]
## [1] 1.285714
##
## [[4]]
## [1] 2.666667
##
## [[5]]
## [1] 5
```
Because you now have two arguments, a single dot would be ambiguous, so you use `.x` and `.y` to
avoid confusion.
Since version 4\.1, R has a built\-in shorthand for defining anonymous functions:
```
map(c(1,2,3,4), \(x)(1/sqrt(x)))
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 0.7071068
##
## [[3]]
## [1] 0.5773503
##
## [[4]]
## [1] 0.5
```
`\(x)` is supposed to look like the notation \\(\\lambda(x)\\). This notation comes from lambda calculus, where functions
are defined like this:
\\\[
\\lambda x.\\ 1/\\sqrt{x}
\\]
which is equivalent to \\(f(x) \= 1/\\sqrt{x}\\). You can use `\(x)` or `function(x)` interchangeably.
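To make the equivalence concrete, here are the three notations side by side; all of them pass the same
anonymous function to `map()` and return the same list:
```
map(c(1, 4, 9), function(x){1/sqrt(x)})
map(c(1, 4, 9), ~{1/sqrt(.)})
map(c(1, 4, 9), \(x){1/sqrt(x)})
```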
You now know a lot about writing your own functions. In the next chapter, we are going to learn
about functional programming, the programming paradigm I described in the introduction of this
book.
7\.9 Exercises
--------------
### Exercise 1
* Create the following vector:
\\\[a \= (1,6,7,8,8,9,2\)\\]
Using a for loop and a while loop, compute the sum of its elements. To avoid issues, use `i`
as the counter inside the for loop, and `j` as the counter for the while loop.
* How would you achieve that with a functional (a function that takes a function as an argument)?
### Exercise 2
* Let’s use a loop to get the matrix product of a matrix A and B. Follow these steps to create the loop:
1. Create matrix A:
\\\[A \= \\left(
\\begin{array}{ccc}
9 \& 4 \& 12 \\\\
5 \& 0 \& 7 \\\\
2 \& 6 \& 8 \\\\
9 \& 2 \& 9
\\end{array} \\right)
\\]
2. Create matrix B:
\\\[B \= \\left(
\\begin{array}{cccc}
5 \& 4 \& 2 \& 5 \\\\
2 \& 7 \& 2 \& 1 \\\\
8 \& 3 \& 2 \& 6 \\\\
\\end{array} \\right)
\\]
3. Create a matrix C, with dimension 4x4, that will hold the result. Use this command: `C = matrix(rep(0,16), nrow = 4)`
4. Using a for loop, loop over the rows of A first: `for(i in 1:nrow(A))`
5. Inside this loop, loop over the columns of B: `for(j in 1:ncol(B))`
6. Again, inside this loop, loop over the rows of B: `for(k in 1:nrow(B))`
7. Inside this last loop, compute the result and save it inside C: `C[i,j] = C[i,j] + A[i,k] * B[k,j]` (see the sketch after this exercise)
8. Now write a function that takes two matrices as arguments, and returns their product.
* R has a built\-in operator to compute the product of 2 matrices. Which is it?
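A possible solution sketch for steps 1 to 7 follows; remember that `matrix()` fills values column by
column, hence the order in which the elements are passed:
```
A <- matrix(c(9, 5, 2, 9, 4, 0, 6, 2, 12, 7, 8, 9), nrow = 4)
B <- matrix(c(5, 2, 8, 4, 7, 3, 2, 2, 2, 5, 1, 6), nrow = 3)
C <- matrix(rep(0, 16), nrow = 4)

for (i in 1:nrow(A)) {
  for (j in 1:ncol(B)) {
    for (k in 1:nrow(B)) {
      # accumulate the product of row i of A and column j of B
      C[i, j] <- C[i, j] + A[i, k] * B[k, j]
    }
  }
}

C
```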
### Exercise 3
* Fizz Buzz: Print integers from 1 to 100\. If a number is divisible by 3, print the word `"Fizz"`; if
it’s divisible by 5, print `"Buzz"`. Use a for loop and if statements.
* Write a function that takes an integer as argument, and prints `"Fizz"` or `"Buzz"` up to that integer.
### Exercise 4
* Fizz Buzz 2: Same as above, but now add this third condition: if a number is divisible by both 3 and 5, print `"FizzBuzz"`.
* Write a function that takes an integer as argument, and prints `Fizz`, `Buzz` or `FizzBuzz` up to that integer.
7\.4 Functions that take functions as arguments: writing your own higher\-order functions
-----------------------------------------------------------------------------------------
Functions that take functions as arguments are very powerful and useful tools.
Two very important functions, that we will discuss in chapter 8 are `purrr::map()`
and `purrr::reduce()`. But you can also write your own! A very simple example
would be the following:
```
my_func <- function(x, func){
func(x)
}
```
`my_func()` is a very simple function that takes `x` and `func()` as arguments and that simply
executes `func(x)`. This might not seem very useful (after all, you could simply use `func(x)!`) but
this is just for illustration purposes, in practice, your functions would be more useful than that!
Let’s try to use `my_func()`:
```
my_func(c(1, 8, 1, 0, 8), mean)
```
```
## [1] 3.6
```
As expected, this returns the mean of the given vector. But now suppose the following:
```
my_func(c(1, 8, 1, NA, 8), mean)
```
```
## [1] NA
```
Because one element of the list is `NA`, the whole mean is `NA`. `mean()` has a `na.rm` argument
that you can set to `TRUE` to ignore the `NA`s in the vector. However, here, there is no way to
provide this argument to the function `mean()`! Let’s see what happens when we try to:
```
my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE)
```
```
Error in my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE) :
unused argument (na.rm = TRUE)
```
So what you could do is pass the value `TRUE` to the `na.rm` argument of `mean()` from your own
function:
```
my_func <- function(x, func, remove_na){
func(x, na.rm = remove_na)
}
my_func(c(1, 8, 1, NA, 8), mean, remove_na = TRUE)
```
```
## [1] 4.5
```
This is one solution, but `mean()` also has another argument called `trim`. What if some other
user needs this argument? Should you also add it to your function? Surely there’s a way to avoid
this problem? Yes, there is, and it by using the *dots*. The `...` simply mean “any other
argument as needed”, and it’s very easy to use:
```
my_func <- function(x, func, ...){
func(x, ...)
}
my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE)
```
```
## [1] 4.5
```
or, now, if you need the `trim` argument:
```
my_func(c(1, 8, 1, NA, 8), mean, na.rm = TRUE, trim = 0.1)
```
```
## [1] 4.5
```
The `...` are very useful when writing higher\-order functions such as `my_func()`, because it allows
you to pass arguments *down* to the underlying functions.
7\.5 Functions that return functions
------------------------------------
The example from before, `my_func()` took three arguments, some `x`, a function `func`, and `...` (dots). `my_func()`
was a kind of wrapper that evaluated `func` on its arguments `x` and `...`. But sometimes this is not quite what you
need or want. It is sometimes useful to write a function that returns a modified function. This type of function
is called a function factory, as it *builds* functions. For instance, suppose that we want to time how long functions
take to run. An idea would be to proceed like this:
```
tic <- Sys.time()
very_slow_function(x)
toc <- Sys.time()
running_time <- toc - tic
```
but if you want to time several functions, this gets very tedious. It would be much easier if functions would
time *themselves*. We could achieve this by writing a wrapper, like this:
```
timed_very_slow_function <- function(...){
tic <- Sys.time()
result <- very_slow_function(x)
toc <- Sys.time()
running_time <- toc - tic
list("result" = result,
"running_time" = running_time)
}
```
The problem here is that we have to change each function we need to time. But thanks to the concept of function
factories, we can write a function that does this for us:
```
time_f <- function(.f, ...){
function(...){
tic <- Sys.time()
result <- .f(...)
toc <- Sys.time()
running_time <- toc - tic
list("result" = result,
"running_time" = running_time)
}
}
```
`time_f()` is a function that returns a function, a function factory. Calling it on a function returns, as expected,
a function:
```
t_mean <- time_f(mean)
t_mean
```
```
## function(...){
##
## tic <- Sys.time()
## result <- .f(...)
## toc <- Sys.time()
##
## running_time <- toc - tic
##
## list("result" = result,
## "running_time" = running_time)
##
## }
## <environment: 0x562c5699a6b8>
```
This function can now be used like any other function:
```
output <- t_mean(seq(-500000, 500000))
```
`output` is a list of two elements, the first being simply the result of `mean(seq(-500000, 500000))`, and the other
being the running time.
This approach is super flexible. For instance, imagine that there is an `NA` in the vector. This would result in
the mean of this vector being `NA`:
```
t_mean(c(NA, seq(-500000, 500000)))
```
```
## $result
## [1] NA
##
## $running_time
## Time difference of 0.006885529 secs
```
But because we use the `...` in the definition of `time_f()`, we can now simply pass `mean()`’s option down to it:
```
t_mean(c(NA, seq(-500000, 500000)), na.rm = TRUE)
```
```
## $result
## [1] 0
##
## $running_time
## Time difference of 0.01394773 secs
```
7\.6 Functions that take columns of data as arguments
-----------------------------------------------------
### 7\.6\.1 The `enquo() - !!()` approach
In many situations, you will want to write functions that look similar to this:
```
my_function(my_data, one_column_inside_data)
```
Such a function would be useful in situations where you have to apply the same set of operations
to columns of different data frames. For example, if you need to create tables of descriptive
statistics or graphs periodically, it can be very convenient to put these operations inside a
function and then call the function whenever you need it, on a fresh batch of data.
However, if you try to write something like that, something unexpected might happen at first:
```
data(mtcars)
simple_function <- function(dataset, col_name){
dataset %>%
group_by(col_name) %>%
summarise(mean_speed = mean(speed))
}
simple_function(cars, "dist")
```
```
Error: unknown variable to group by : col_name
```
The variable `col_name` is passed to `simple_function()` as a string, but `group_by()` requires a
variable name. So why not try to convert `col_name` to a name?
```
simple_function <- function(dataset, col_name){
col_name <- as.name(col_name)
dataset %>%
group_by(col_name) %>%
summarise(mean_speed = mean(speed))
}
simple_function(cars, "dist")
```
```
Error: unknown variable to group by : col_name
```
This is because R is literally looking for the variable `"dist"` somewhere in the global
environment, and not as a column of the data. R does not understand that you are referring to the
column `"dist"` that is inside the dataset. So how can we make R understand what you mean?
To be able to do that, we need to use a framework that was introduced in the `{tidyverse}`,
called *tidy evaluation*. This framework can be used by installing the `{rlang}` package.
`{rlang}` is quite a technical package, so I will spare you the details. But you should at
the very least take a look at the following documents
[here](http://dplyr.tidyverse.org/articles/programming.html) and
[here](https://rlang.r-lib.org/reference/topic-data-mask.html). The
discussion can get complicated, but you don’t need to know everything about `{rlang}`.
As you will see, knowing some of the capabilities `{rlang}` provides can be incredibly useful.
Take a look at the code below:
```
simple_function <- function(dataset, col_name){
col_name <- enquo(col_name)
dataset %>%
group_by(!!col_name) %>%
summarise(mean_mpg = mean(mpg))
}
simple_function(mtcars, cyl)
```
```
## # A tibble: 3 × 2
## cyl mean_mpg
## <dbl> <dbl>
## 1 4 26.7
## 2 6 19.7
## 3 8 15.1
```
As you can see, the previous idea we had, which was using `as.name()`, was not very far from
the solution. The solution, with `{rlang}`, consists in using `enquo()`, which (for our purposes)
does something similar to `as.name()`. Now that `col_name` is (as R programmers call it) quoted, or
*defused*, we need to tell `group_by()` to evaluate the input as is. This is done with `!!()`,
called the [injection operator](https://rlang.r-lib.org/reference/injection-operator.html), which
is another `{rlang}` function. I say it again; don’t worry if you don’t understand everything. Just
remember to use `enquo()` on your column names and then `!!()` inside the `{dplyr}` function you
want to use.
Let’s see some other examples:
```
simple_function <- function(dataset, col_name, value){
col_name <- enquo(col_name)
dataset %>%
filter((!!col_name) == value) %>%
summarise(mean_cyl = mean(cyl))
}
simple_function(mtcars, am, 1)
```
```
## mean_cyl
## 1 5.076923
```
Notice that I’ve written:
```
filter((!!col_name) == value)
```
and not:
```
filter(!!col_name == value)
```
I have enclosed `!!col_name` inside parentheses. This is because operators such as `==` have
precedence over `!!`, so you have to be explicit. Also, notice that I didn’t have to quote `1`.
This is because it’s a *standard* variable, not a column inside the dataset. Let’s make this function
a bit more general. I hard\-coded the variable `cyl` inside the body of the function, but maybe you’d
like the mean of another variable?
```
simple_function <- function(dataset, filter_col, mean_col, value){
filter_col <- enquo(filter_col)
mean_col <- enquo(mean_col)
dataset %>%
filter((!!filter_col) == value) %>%
summarise(mean((!!mean_col)))
}
simple_function(mtcars, am, cyl, 1)
```
```
## mean(cyl)
## 1 5.076923
```
Notice that I had to quote `mean_col` too.
Using the `...` that we discovered in the previous section, we can pass more than one column:
```
simple_function <- function(dataset, ...){
col_vars <- quos(...)
dataset %>%
summarise_at(vars(!!!col_vars), funs(mean, sd))
}
```
Because these *dots* contain more than one variable, you have to use `quos()` instead of `enquo()`.
This will put the arguments provided via the dots in a list. Then, because we have a list of
columns, we have to use `summarise_at()`, which you should know if you did the exercises of
Chapter 4\. So if you didn’t do them, go back and finish them first. Doing the exercises will
also teach you what `vars()` and `funs()` are. The last thing you have to pay attention to is to
use `!!!()` if you used `quos()`. So 3 `!` instead of only 2\. This allows you to then do things
like this:
```
simple_function(mtcars, am, cyl, mpg)
```
```
## Warning: `funs()` was deprecated in dplyr 0.8.0.
## Please use a list of either functions or lambdas:
##
## # Simple named list:
## list(mean = mean, median = median)
##
## # Auto named with `tibble::lst()`:
## tibble::lst(mean, median)
##
## # Using lambdas
## list(~ mean(., trim = .2), ~ median(., na.rm = TRUE))
```
```
## am_mean cyl_mean mpg_mean am_sd cyl_sd mpg_sd
## 1 0.40625 6.1875 20.09062 0.4989909 1.785922 6.026948
```
Using `...` with `!!!()` allows you to write very flexible functions.
If you need to be even more general, you can also provide the summary functions as arguments of
your function, but you have to rewrite your function a little bit:
```
simple_function <- function(dataset, cols, funcs){
dataset %>%
summarise_at(vars(!!!cols), funs(!!!funcs))
}
```
You might be wondering where the `quos()` went. Well, because we are now passing two lists, a list of
columns that we have to quote, and a list of functions, that we also have to quote, we need to use `quos()`
when calling the function:
```
simple_function(mtcars, quos(am, cyl, mpg), quos(mean, sd, sum))
```
```
## am_mean cyl_mean mpg_mean am_sd cyl_sd mpg_sd am_sum cyl_sum mpg_sum
## 1 0.40625 6.1875 20.09062 0.4989909 1.785922 6.026948 13 198 642.9
```
This works, but I don’t think you’ll need to have that much flexibility; either the columns
are variables, or the functions, but rarely both at the same time.
To conclude this section, I should also mention `as_label()`, which allows you to change the
name of a variable, for instance if you want to call the resulting column `mean_mpg` when you
compute the mean of the `mpg` column:
```
simple_function <- function(dataset, filter_col, mean_col, value){
filter_col <- enquo(filter_col)
mean_col <- enquo(mean_col)
mean_name <- paste0("mean_", as_label(mean_col))
dataset %>%
filter((!!filter_col) == value) %>%
summarise(!!(mean_name) := mean((!!mean_col)))
}
```
Pay attention to the `:=` operator in the last line. This is needed when using `as_label()`.
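Calling the function now gives a properly named column; for example, the mean of `mpg` among the cars with `am == 1`:
```
simple_function(mtcars, am, mpg, 1)
```
```
##   mean_mpg
## 1 24.39231
```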
### 7\.6\.2 Curly Curly, a simplified approach to `enquo()` and `!!()`
The previous section might have been a bit difficult to grasp, but there is a simplified way of doing it,
which consists in using `{{}}`, introduced in `{rlang}` version 0\.4\.0\.
The suggested pronunciation of `{{}}` is *curly\-curly*, but there is no
[consensus yet](https://twitter.com/JonTheGeek/status/1144815369766547456).
Let’s suppose that I need to write a function that takes a data frame, as well as a column from
this data frame as arguments, just like before:
```
how_many_na <- function(dataframe, column_name){
dataframe %>%
filter(is.na(column_name)) %>%
count()
}
```
Let’s try this function out on the `starwars` data:
```
data(starwars)
head(starwars)
```
```
## # A tibble: 6 × 14
## name height mass hair_…¹ skin_…² eye_c…³ birth…⁴ sex gender homew…⁵
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
## 1 Luke Skywal… 172 77 blond fair blue 19 male mascu… Tatooi…
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu… Tatooi…
## 3 R2-D2 96 32 <NA> white,… red 33 none mascu… Naboo
## 4 Darth Vader 202 136 none white yellow 41.9 male mascu… Tatooi…
## 5 Leia Organa 150 49 brown light brown 19 fema… femin… Aldera…
## 6 Owen Lars 178 120 brown,… light blue 52 male mascu… Tatooi…
## # … with 4 more variables: species <chr>, films <list>, vehicles <list>,
## # starships <list>, and abbreviated variable names ¹hair_color, ²skin_color,
## # ³eye_color, ⁴birth_year, ⁵homeworld
```
As you can see, there are missing values in the `hair_color` column. Let’s try to count how many
missing values are in this column:
```
how_many_na(starwars, hair_color)
```
```
Error: object 'hair_color' not found
```
Just as expected, this does not work. The issue is that the column is inside the dataframe,
but when calling the function with `hair_color` as the second argument, R is looking for a
variable called `hair_color` that does not exist. What about trying with `"hair_color"`?
```
how_many_na(starwars, "hair_color")
```
```
## # A tibble: 1 × 1
## n
## <int>
## 1 0
```
Now we get something, but something wrong!
One way to solve this issue, is to not use the `filter()` function, and instead rely on base R:
```
how_many_na_base <- function(dataframe, column_name){
na_index <- is.na(dataframe[, column_name])
nrow(dataframe[na_index, column_name])
}
how_many_na_base(starwars, "hair_color")
```
```
## [1] 5
```
This works, but avoiding the `{tidyverse}` entirely is not always an option. For instance,
the next function, which uses a grouping variable, would be difficult to implement without the
`{tidyverse}`:
```
summarise_groups <- function(dataframe, grouping_var, column_name){
dataframe %>%
group_by(grouping_var) %>%
summarise(mean(column_name, na.rm = TRUE))
}
```
Calling this function results in the following error message, as expected:
```
Error: Column `grouping_var` is unknown
```
In the previous section, we solved the issue like so:
```
summarise_groups <- function(dataframe, grouping_var, column_name){
grouping_var <- enquo(grouping_var)
column_name <- enquo(column_name)
mean_name <- paste0("mean_", as_label(column_name))
dataframe %>%
group_by(!!grouping_var) %>%
summarise(!!(mean_name) := mean(!!column_name, na.rm = TRUE))
}
```
The core of the function remained very similar to the version from before, but now one has to
use the `enquo()`\-`!!` syntax.
Now this can be simplified using the new `{{}}` syntax:
```
summarise_groups <- function(dataframe, grouping_var, column_name){
dataframe %>%
group_by({{grouping_var}}) %>%
summarise({{column_name}} := mean({{column_name}}, na.rm = TRUE))
}
```
Much easier and cleaner! You still have to use the `:=` operator instead of `=` for the column name,
however, and if you want to modify the column names, for instance in this
case to return `"mean_height"` instead of `height`, you have to keep using the `enquo()`\-`!!` syntax.
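To check that the curly\-curly version behaves like the `enquo()` one, you can call it on the `starwars` data from above; a quick sketch (as noted, the resulting column keeps the name of the input column):
```
summarise_groups(starwars, sex, height)
# one row per value of `sex`, with the mean `height`
# in a column simply named `height`
```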
7\.7 Functions that use loops
-----------------------------
It is entirely possible to put a loop inside a function. For example, consider the following
function that returns the square root of a number using Newton’s algorithm:
```
sqrt_newton <- function(a, init = 1, eps = 0.01){
stopifnot(a >= 0)
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
}
init
}
```
This function contains a while loop inside its body. Let’s see if it works:
```
sqrt_newton(16)
```
```
## [1] 4.000001
```
In the definition of the function, I wrote `init = 1` and `eps = 0.01`, which means that these
arguments can be omitted and will take the provided values (1 and 0\.01\) as defaults. You can then use
this function like any other, for example with `map()`:
```
map(c(16, 7, 8, 9, 12), sqrt_newton)
```
```
## [[1]]
## [1] 4.000001
##
## [[2]]
## [1] 2.645767
##
## [[3]]
## [1] 2.828469
##
## [[4]]
## [1] 3.000092
##
## [[5]]
## [1] 3.464616
```
This is what I meant before with “your functions are nothing special”. Once the function is
defined, you can use it like any other base R function.
Notice the use of `stopifnot()` inside the body of the function. This is a way to return an error
in case a condition is not fulfilled. We are going to learn more about this type of function
in the next chapter.
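For instance, calling the function with a negative number stops immediately with an informative error:
```
sqrt_newton(-1)
```
```
Error in sqrt_newton(-1) : a >= 0 is not TRUE
```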
7\.8 Anonymous functions
------------------------
As the name implies, anonymous functions are functions that do not have a name. These are useful inside
functions that have functions as arguments, such as `purrr::map()` or `purrr::reduce()`:
```
map(c(1,2,3,4), function(x){1/sqrt(x)})
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 0.7071068
##
## [[3]]
## [1] 0.5773503
##
## [[4]]
## [1] 0.5
```
These anonymous functions get defined in a very similar way to regular functions; you just skip the
name and that’s it. `{tidyverse}` functions also support formulas; these get converted to anonymous functions:
```
map(c(1,2,3,4), ~{1/sqrt(.)})
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 0.7071068
##
## [[3]]
## [1] 0.5773503
##
## [[4]]
## [1] 0.5
```
Using a formula instead of an anonymous function is less verbose; you use `~` instead of `function(x)`
and a single dot `.` instead of `x`. What if you need an anonymous function that requires more than
one argument? This is not a problem:
```
map2(c(1, 2, 3, 4, 5), c(9, 8, 7, 6, 5), function(x, y){(x**2)/y})
```
```
## [[1]]
## [1] 0.1111111
##
## [[2]]
## [1] 0.5
##
## [[3]]
## [1] 1.285714
##
## [[4]]
## [1] 2.666667
##
## [[5]]
## [1] 5
```
or, using a formula:
```
map2(c(1, 2, 3, 4, 5), c(9, 8, 7, 6, 5), ~{(.x**2)/.y})
```
```
## [[1]]
## [1] 0.1111111
##
## [[2]]
## [1] 0.5
##
## [[3]]
## [1] 1.285714
##
## [[4]]
## [1] 2.666667
##
## [[5]]
## [1] 5
```
Because you now have two arguments, a single dot would not work; instead, you use `.x` and `.y` to
avoid confusion.
In version 4\.1, R introduced a shorthand for defining anonymous functions:
```
map(c(1,2,3,4), \(x)(1/sqrt(x)))
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 0.7071068
##
## [[3]]
## [1] 0.5773503
##
## [[4]]
## [1] 0.5
```
`\(x)` is supposed to look like the notation \\(\\lambda(x)\\). This notation comes from lambda calculus, where functions
are defined like this:
\\\[
\\lambda x.\\ 1/\\sqrt{x}
\\]
which is equivalent to \\(f(x) \= 1/\\sqrt{x}\\). You can use `\(x)` or `function(x)` interchangeably.
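To see that the three notations really are interchangeable, here is the same computation written in each style:
```
# all three calls return list(1, 0.5, 0.3333333)
map(c(1, 4, 9), function(x){1/sqrt(x)})
map(c(1, 4, 9), ~{1/sqrt(.)})
map(c(1, 4, 9), \(x)(1/sqrt(x)))
```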
You now know a lot about writing your own functions. In the next chapter, we are going to learn
about functional programming, the programming paradigm I described in the introduction of this
book.
7\.9 Exercises
--------------
### Exercise 1
* Create the following vector:
\\\[a \= (1,6,7,8,8,9,2\)\\]
Using a for loop and a while loop, compute the sum of its elements. To avoid issues, use `i`
as the counter inside the for loop, and `j` as the counter for the while loop.
* How would you achieve that with a functional (a function that takes a function as an argument)?
### Exercise 2
* Let’s use a loop to get the matrix product of a matrix A and B. Follow these steps to create the loop:
1. Create matrix A:
\\\[A \= \\left(
\\begin{array}{ccc}
9 \& 4 \& 12 \\\\
5 \& 0 \& 7 \\\\
2 \& 6 \& 8 \\\\
9 \& 2 \& 9
\\end{array} \\right)
\\]
2. Create matrix B:
\\\[B \= \\left(
\\begin{array}{cccc}
5 \& 4 \& 2 \& 5 \\\\
2 \& 7 \& 2 \& 1 \\\\
8 \& 3 \& 2 \& 6 \\\\
\\end{array} \\right)
\\]
3. Create a matrix C, with dimension 4x4, that will hold the result. Use this command: `C = matrix(rep(0, 16), nrow = 4)`
4. Using a for loop, loop over the rows of A first: `for(i in 1:nrow(A))`
5. Inside this loop, loop over the columns of B: `for(j in 1:ncol(B))`
6. Again, inside this loop, loop over the rows of B: `for(k in 1:nrow(B))`
7. Inside this last loop, compute the result and save it inside C: `C[i,j] = C[i,j] + A[i,k] * B[k,j]`
8. Now write a function that takes two matrices as arguments, and returns their product (a sketch assembling steps 1 to 7 follows right after this exercise).
* R has a built\-in function to compute the dot product of 2 matrices. Which is it?
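A sketch assembling steps 1 to 7 (step 8 and the final question are left for you):
```
A <- matrix(c(9, 4, 12,
              5, 0, 7,
              2, 6, 8,
              9, 2, 9), nrow = 4, byrow = TRUE)
B <- matrix(c(5, 4, 2, 5,
              2, 7, 2, 1,
              8, 3, 2, 6), nrow = 3, byrow = TRUE)
C <- matrix(rep(0, 16), nrow = 4)
for(i in 1:nrow(A)){
  for(j in 1:ncol(B)){
    for(k in 1:nrow(B)){
      C[i, j] <- C[i, j] + A[i, k] * B[k, j]
    }
  }
}
C # the 4x4 product
```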
### Exercise 3
* Fizz Buzz: Print integers from 1 to 100\. If a number is divisible by 3, print the word `"Fizz"`; if
it’s divisible by 5, print `"Buzz"`. Use a for loop and if statements.
* Write a function that takes an integer as argument, and prints `"Fizz"` or `"Buzz"` up to that integer.
### Exercise 4
* Fizz Buzz 2: Same as above, but now add this third condition: if a number is both divisible by 3 and 5, print `"FizzBuzz"`.
* Write a function that takes an integer as argument, and prints `Fizz`, `Buzz` or `FizzBuzz` up to that integer.
Chapter 8 Functional programming
================================
Functional programming is a paradigm that I find very suitable for data science. In functional
programming, your code is organised into functions that perform the operations you need. Your scripts
will only be a sequence of calls to these functions, making them easier to understand. R is not a pure
functional programming language, so we need some self\-discipline to apply pure functional programming
principles. However, these efforts are worth it, because pure functions are easier to debug, extend
and document. In this chapter, we are going to learn about functional programming principles that you
can adopt and start using to make your code better.
8\.1 Function definitions
-------------------------
You should now be familiar with function definitions in R. Let’s suppose you want to write a function
to compute the square root of a number and want to do so using Newton’s algorithm:
```
sqrt_newton <- function(a, init, eps = 0.01){
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
}
init
}
```
You can then use this function to get the square root of a number:
```
sqrt_newton(16, 2)
```
```
## [1] 4.00122
```
We are using a `while` loop inside the body of the function. The *body* of a function is the set of
instructions that define the function. You can get the body of a function with `body(some_func)`.
In *pure* functional programming languages, like Haskell, loops do not exist. How can you
program without loops, you may ask? In functional programming, loops are replaced by recursion,
which we already discussed in the previous chapter. Let’s rewrite our little example above
with recursion:
```
sqrt_newton_recur <- function(a, init, eps = 0.01){
if(abs(init**2 - a) < eps){
result <- init
} else {
init <- 1/2 * (init + a/init)
result <- sqrt_newton_recur(a, init, eps)
}
result
}
```
```
sqrt_newton_recur(16, 2)
```
```
## [1] 4.00122
```
R is not a pure functional programming language though, so we can still use loops (be it `while` or
`for` loops) in the bodies of our functions. As discussed in the previous chapter, it is actually
better, performance\-wise, to use loops instead of recursion, because R is not tail\-call optimized.
I won’t go into the details of what tail\-call optimization is, but just remember that if
performance is important, a loop will be faster. However, sometimes it is easier to write a
function using recursion. I personally tend to avoid loops if performance is not important,
because I find that code that avoids loops is easier to read and debug. However, knowing that
you can use loops is reassuring, and encapsulating loops inside functions gives you the benefits of
of both functions and loops. In the coming sections, I will show you some built\-in functions
that make it possible to avoid writing loops and that don’t rely on recursion, so performance
won’t be penalized.
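If you want to see the difference yourself, you can time both versions of the square root function; a rough sketch (exact timings depend on your machine):
```
# the loop version avoids growing the call stack at each iteration
system.time(for(i in 1:10000) sqrt_newton(16, 2))
system.time(for(i in 1:10000) sqrt_newton_recur(16, 2))
```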
8\.2 Properties of functions
----------------------------
Mathematical functions have a nice property: we always get the same output for a given input. This
is called referential transparency and we should aim to write our R functions in such a way.
For example, the following function:
```
increment <- function(x){
x + 1
}
```
Is a referentially transparent function. We always get the same result for any `x` that we give to
this function.
This:
```
increment(10)
```
```
## [1] 11
```
will always produce `11`.
However, this one:
```
increment_opaque <- function(x){
x + spam
}
```
is not a referentially transparent function, because its value depends on the global variable `spam`.
```
spam <- 1
increment_opaque(10)
```
```
## [1] 11
```
will produce `11` if `spam = 1`. But what if `spam = 19`?
```
spam <- 19
increment_opaque(10)
```
```
## [1] 29
```
To make `increment_opaque()` a referential transparent function, it is enough to make `spam` an
argument:
```
increment_not_opaque <- function(x, spam){
x + spam
}
```
Now even if there is a global variable called `spam`, this will not influence our function:
```
spam <- 19
increment_not_opaque(10, 34)
```
```
## [1] 44
```
This is because the `spam` used inside the function is now a local variable (the function’s
argument). It could have been called anything else, really. Avoiding opaque functions makes our life easier.
Another property that adepts of functional programming value is that functions should have no, or
very limited, side\-effects. This means that functions should not change the state of your program.
For example, this function (which is not a referentially transparent function):
```
count_iter <- 0
sqrt_newton_side_effect <- function(a, init, eps = 0.01){
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
count_iter <<- count_iter + 1 # The "<<-" symbol means that we assign the
} # RHS value in a variable inside the global environment
init
}
```
If you look in the environment pane, you will see that `count_iter` equals 0\. Now call this
function with the following arguments:
```
sqrt_newton_side_effect(16000, 2)
```
```
## [1] 126.4911
```
```
print(count_iter)
```
```
## [1] 9
```
If you check the value of `count_iter` now, you will see that it increased! This is a side effect,
because the function changed something outside of its scope. It changed a value in the global
environment. In general, it is good practice to avoid side\-effects. For example, we could make the
above function not have any side effects like this:
```
sqrt_newton_count <- function(a, init, count_iter = 0, eps = 0.01){
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
count_iter <- count_iter + 1
}
c(init, count_iter)
}
```
Now, this function returns a vector with two elements: the result, and the number of iterations it
took to get the result:
```
sqrt_newton_count(16000, 2)
```
```
## [1] 126.4911 9.0000
```
Writing to disk is also considered a side effect, because the function changes something (a file)
outside its scope. But this cannot be avoided since you *want* to write to disk.
Just remember: try to avoid having functions changing variables in the global environment unless
you have a very good reason for doing so.
Very long scripts that don’t use functions and use a lot of global variables with loops changing
the values of global variables are a nightmare to debug. If something goes wrong, it might be very
difficult to pinpoint where the problem is. Is there an error in one of the loops?
Is your code running for a particular value of a particular variable in the global environment, but
not for other values? Which values? And of which variables? It can be very difficult to know what
is wrong with such a script.
With functional programming, you can avoid a lot of this pain for free (well not entirely for free,
it still requires some effort, since R is not a pure functional language). Writing functions also
makes it easier to parallelize your code. We are going to learn about that later in this chapter too.
Finally, another property of mathematical functions, is that they do one single thing. Functional
programming purists also program their functions to do one single task. This has benefits, but
can complicate things. The function we wrote previously does two things: it computes the square
root of a number and also returns the number of iterations it took to compute the result. However,
this is not a bad thing; the function is doing two tasks, but these tasks are related to each other
and it makes sense to have them together. My piece of advice: avoid having functions that do
many *unrelated* things. This makes debugging harder.
In conclusion: you should strive for referential transparency, try to avoid side effects unless you
have a good reason to have them, and try to keep your functions short, doing as few tasks as
possible. This makes testing and debugging easier, as you will see in the next chapter, but also
improves readability and maintainability of your code.
8\.3 Functional programming with `{purrr}`
------------------------------------------
I mentioned it several times already, but R is not a pure functional programming language. It is
possible to write R code using the functional programming paradigm, but some effort is required.
The `{purrr}` package extends R’s base functional programming capabilities with some very interesting
functions. We have already seen `map()` and `reduce()`, which we are going to see in more detail now.
Then, we are going to learn about some other functions included in `{purrr}` that make functional
programming easier in R.
### 8\.3\.1 Doing away with loops: the `map*()` family of functions
Instead of using loops, pure functional programming languages use functions that achieve
the same result. These functions are often called `Map` or `Reduce` (also called `Fold`). R comes
with the `*apply()` family of functions (which are implementations of `Map`),
as well as `Reduce()` for functional programming.
Within this family, you can find `lapply()`, `sapply()`, `vapply()`, `tapply()`, `mapply()`, `rapply()`,
`eapply()` and `apply()` (I might have forgotten one or the other, but that’s not important).
Each version of an `*apply()` function has a different purpose, but it is not very easy to
remember which does what exactly. To add even more confusion, the arguments are sometimes different between
each of these.
In the `{purrr}` package, these functions are replaced by the `map*()` family of functions. As you will
shortly see, they are very consistent, and thus easier to use.
The first part of these functions’ names all start with `map_` and the second part tells you what
this function is going to return. For example, if you want `double`s out, you would use `map_dbl()`.
If you are working on data frames and want a data frame back, you would use `map_df()`. Let’s start
with the basic `map()` function. The following gif
(source: [Wikipedia](https://en.wikipedia.org/wiki/Map_(higher-order_function))) illustrates
what `map()` does fairly well:
\\(X\\) is a vector composed of the following scalars: \\((0, 5, 8, 3, 2, 1\)\\). The function we want to
map to each element of \\(X\\) is \\(f(x) \= x \+ 1\\). \\(X'\\) is the result of this operation. Using R, we
would do the following:
```
library("purrr")
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- map(numbers, plus_one)
my_results
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 6
##
## [[3]]
## [1] 9
##
## [[4]]
## [1] 4
##
## [[5]]
## [1] 3
##
## [[6]]
## [1] 2
```
Using a loop, you would write:
```
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- vector("list", 6)
for(number in seq_along(numbers)){
my_results[[number]] <- plus_one(numbers[number])
}
my_results
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 6
##
## [[3]]
## [1] 9
##
## [[4]]
## [1] 4
##
## [[5]]
## [1] 3
##
## [[6]]
## [1] 2
```
Now I don’t know about you, but I prefer the first option. Using functional programming, you don’t
need to create an empty list to hold your results, and the code is more concise. Plus,
it is less error prone. I had to try several times to get the loop right
(and I’ve been using R for almost 10 years now). Why? Well, first of all, I used `%in%` instead of `in`.
Then, I forgot about `seq_along()`. After that, I made a typo, `plos_one()` instead of `plus_one()`
(ok, that one is unrelated to the loop). Let’s also see how this works using base R:
```
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- lapply(numbers, plus_one)
my_results
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 6
##
## [[3]]
## [1] 9
##
## [[4]]
## [1] 4
##
## [[5]]
## [1] 3
##
## [[6]]
## [1] 2
```
So what is the added value of using `{purrr}`, you might ask. Well, imagine that instead of a list,
I need an atomic vector of `numeric`s. This is fairly easy with `{purrr}`:
```
library("purrr")
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- map_dbl(numbers, plus_one)
my_results
```
```
## [1] 1 6 9 4 3 2
```
We’re going to discuss these functions below, but know that in base R, outputting something else
involves more effort.
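For comparison, here is what getting an atomic vector back looks like in base R; you need `vapply()` with an explicit template (or an extra `unlist()` around `lapply()`):
```
vapply(numbers, plus_one, numeric(1))
```
```
## [1] 1 6 9 4 3 2
```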
Let’s go back to our `sqrt_newton()` function. This function has more than one parameter. Often,
we would like to map functions with more than one parameter to a list, while holding constant
some of the functions parameters. This is easily achieved like so:
```
library("purrr")
numbers <- c(7, 8, 19, 64)
map(numbers, sqrt_newton, init = 1)
```
```
## [[1]]
## [1] 2.645767
##
## [[2]]
## [1] 2.828469
##
## [[3]]
## [1] 4.358902
##
## [[4]]
## [1] 8.000002
```
It is also possible to use a formula:
```
library("purrr")
numbers <- c(7, 8, 19, 64)
map(numbers, ~sqrt_newton(., init = 1))
```
```
## [[1]]
## [1] 2.645767
##
## [[2]]
## [1] 2.828469
##
## [[3]]
## [1] 4.358902
##
## [[4]]
## [1] 8.000002
```
Another function that is similar to `map()` is `rerun()`. You guessed it, this one simply
reruns an expression:
```
rerun(10, "hello")
```
```
## [[1]]
## [1] "hello"
##
## [[2]]
## [1] "hello"
##
## [[3]]
## [1] "hello"
##
## [[4]]
## [1] "hello"
##
## [[5]]
## [1] "hello"
##
## [[6]]
## [1] "hello"
##
## [[7]]
## [1] "hello"
##
## [[8]]
## [1] "hello"
##
## [[9]]
## [1] "hello"
##
## [[10]]
## [1] "hello"
```
`rerun()` simply runs an expression (which can be arbitrarily complex) `n` times, whereas `map()`
maps a function to a list of inputs, so to achieve the same with `map()`, you need to map the `print()`
function to a vector of characters:
```
map(rep("hello", 10), print)
```
```
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
```
```
## [[1]]
## [1] "hello"
##
## [[2]]
## [1] "hello"
##
## [[3]]
## [1] "hello"
##
## [[4]]
## [1] "hello"
##
## [[5]]
## [1] "hello"
##
## [[6]]
## [1] "hello"
##
## [[7]]
## [1] "hello"
##
## [[8]]
## [1] "hello"
##
## [[9]]
## [1] "hello"
##
## [[10]]
## [1] "hello"
```
`rep()` is a function that creates a vector by repeating something, in this case the string “hello”,
as many times as needed, here 10\. The output is a bit different than before though, because first
you will see “hello” printed 10 times and then the list where each element is “hello”.
This is because the `print()` function has a side effect, which is, well, printing to the console.
We see this side effect 10 times, plus then the list created with `map()`.
`rerun()` is useful if you want to run simulations. For instance, let’s suppose that I perform a simulation
where I throw a die 5 times, and compute the mean of the points obtained, as well as the variance:
```
mean_var_throws <- function(n){
throws <- sample(1:6, n, replace = TRUE)
mean_throws <- mean(throws)
var_throws <- var(throws)
tibble::tribble(~mean_throws, ~var_throws,
mean_throws, var_throws)
}
mean_var_throws(5)
```
```
## # A tibble: 1 × 2
## mean_throws var_throws
## <dbl> <dbl>
## 1 2.2 1.7
```
`mean_var_throws()` returns a `tibble` object with the mean and the variance of the points. Now suppose
I want to compute the expected value of the distribution of throwing dice. We know from theory that it should
be equal to \\(3\.5 \= 1\\cdot 1/6 \+ 2\\cdot 1/6 \+ 3\\cdot 1/6 \+ 4\\cdot 1/6 \+ 5\\cdot 1/6 \+ 6\\cdot 1/6\\).
Let’s rerun the simulation 50 times:
```
simulations <- rerun(50, mean_var_throws(5))
```
Let’s see what the `simulations` object is made of:
```
str(simulations)
```
```
## List of 50
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2
## ..$ var_throws : num 3
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2.8
## ..$ var_throws : num 0.2
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2.8
## ..$ var_throws : num 0.7
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2.8
## ..$ var_throws : num 1.7
.....
```
`simulations` is a list of 50 data frames. We can easily combine them into a single data frame, and compute the
mean of the means, which should return something close to the expected value of 3\.5:
```
bind_rows(simulations) %>%
summarise(expected_value = mean(mean_throws))
```
```
## # A tibble: 1 × 1
## expected_value
## <dbl>
## 1 3.44
```
Pretty close! Now of course, one could have simply done something like this:
```
mean(sample(1:6, 1000, replace = TRUE))
```
```
## [1] 3.481
```
but the point was to illustrate that `rerun()` can run any arbitrarily complex expression, and that it is good
practice to put the result in a data frame or list, for easier further manipulation.
You now know the standard `map()` function, and also `rerun()`, which both return lists, but there are a
number of variants of `map()`. `map_dbl()` returns an atomic vector of doubles, as
we’ve seen before. A little reminder below:
```
map_dbl(numbers, sqrt_newton, init = 1)
```
```
## [1] 2.645767 2.828469 4.358902 8.000002
```
In a similar fashion, `map_chr()` returns an atomic vector of strings:
```
map_chr(numbers, sqrt_newton, init = 1)
```
```
## [1] "2.645767" "2.828469" "4.358902" "8.000002"
```
`map_lgl()` returns an atomic vector of `TRUE` or `FALSE`:
```
divisible <- function(x, y){
if_else(x %% y == 0, TRUE, FALSE)
}
map_lgl(seq(1:100), divisible, 3)
```
```
## [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [13] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [25] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [37] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [49] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [61] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [73] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [85] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [97] FALSE FALSE TRUE FALSE
```
There are also other interesting variants, such as `map_if()`:
```
a <- seq(1,10)
map_if(a, (function(x) divisible(x, 2)), sqrt)
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 1.414214
##
## [[3]]
## [1] 3
##
## [[4]]
## [1] 2
##
## [[5]]
## [1] 5
##
## [[6]]
## [1] 2.44949
##
## [[7]]
## [1] 7
##
## [[8]]
## [1] 2.828427
##
## [[9]]
## [1] 9
##
## [[10]]
## [1] 3.162278
```
I used `map_if()` to take the square root of only those numbers in vector `a` that are divisible by 2,
by using an anonymous function that checks if a number is divisible by 2 (by wrapping `divisible()`).
`map_at()` is similar to `map_if()` but maps the function at a position specified by the user:
```
map_at(numbers, c(1, 3), sqrt)
```
```
## [[1]]
## [1] 2.645751
##
## [[2]]
## [1] 8
##
## [[3]]
## [1] 4.358899
##
## [[4]]
## [1] 64
```
or if you have a named list:
```
recipe <- list("spam" = 1, "eggs" = 3, "bacon" = 10)
map_at(recipe, "bacon", `*`, 2)
```
```
## $spam
## [1] 1
##
## $eggs
## [1] 3
##
## $bacon
## [1] 20
```
I used `map_at()` to double the quantity of bacon in the recipe (by using the `*` function, and specifying
its second argument, `2`. Try the following in the command prompt: ``*`(3, 4)`).
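If you do try it, you will see that operators are just regular functions in R:
```
`*`(3, 4)
```
```
## [1] 12
```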
`map2()` is the equivalent of `mapply()` and `pmap()` is the generalisation of `map2()` for more
than 2 arguments:
```
print(a)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
b <- seq(1, 2, length.out = 10)
print(b)
```
```
## [1] 1.000000 1.111111 1.222222 1.333333 1.444444 1.555556 1.666667 1.777778
## [9] 1.888889 2.000000
```
```
map2(a, b, `*`)
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 2.222222
##
## [[3]]
## [1] 3.666667
##
## [[4]]
## [1] 5.333333
##
## [[5]]
## [1] 7.222222
##
## [[6]]
## [1] 9.333333
##
## [[7]]
## [1] 11.66667
##
## [[8]]
## [1] 14.22222
##
## [[9]]
## [1] 17
##
## [[10]]
## [1] 20
```
Each element of `a` gets multiplied by the element of `b` that is in the same position.
Let’s see what `pmap()` does. Can you guess from the code below what is going on? I will print
`a` and `b` again for clarity:
```
a
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
b
```
```
## [1] 1.000000 1.111111 1.222222 1.333333 1.444444 1.555556 1.666667 1.777778
## [9] 1.888889 2.000000
```
```
n <- seq(1:10)
pmap(list(a, b, n), rnorm)
```
```
## [[1]]
## [1] -0.1758315
##
## [[2]]
## [1] -0.2162863 1.1033912
##
## [[3]]
## [1] 4.5731231 -0.3743379 6.8130737
##
## [[4]]
## [1] 0.8933089 4.1930837 7.5276030 -2.3575522
##
## [[5]]
## [1] 2.1814981 -1.7455750 5.0548288 2.7848458 0.9230675
##
## [[6]]
## [1] 2.806217 5.667499 -5.032922 6.741065 -2.757928 12.414101
##
## [[7]]
## [1] -3.314145 -7.912019 -3.865292 4.307842 18.022049 1.278158 1.083208
##
## [[8]]
## [1] 6.2629161 2.1213552 0.3543566 2.1041606 -0.2643654 8.7600450 3.3616206
## [8] -7.7446668
##
## [[9]]
## [1] -7.609538 5.472267 -4.869374 -11.943063 4.707929 -7.730088 13.431771
## [8] 1.606800 -6.578745
##
## [[10]]
## [1] -9.101480 4.404571 -16.071437 1.110689 7.168097 15.848579
## [7] 16.710863 1.998482 -17.856521 -2.021087
```
Let’s take a closer look at what `a`, `b` and `n` look like, when they are placed next to each other:
```
cbind(a, b, n)
```
```
## a b n
## [1,] 1 1.000000 1
## [2,] 2 1.111111 2
## [3,] 3 1.222222 3
## [4,] 4 1.333333 4
## [5,] 5 1.444444 5
## [6,] 6 1.555556 6
## [7,] 7 1.666667 7
## [8,] 8 1.777778 8
## [9,] 9 1.888889 9
## [10,] 10 2.000000 10
```
`rnorm()` gets first called with the parameters from the first line, meaning
`rnorm(a[1], b[1], n[1])`. The second time `rnorm()` gets called, you guessed it, it is called
with the parameters on the second line of the array above,
`rnorm(a[2], b[2], n[2])`, etc.
There are other functions in the `map()` family of functions, but we will discover them in the
exercises!
The `map()` family of functions does not have any more secrets for you. Let’s now take a look at
the `reduce()` family of functions.
### 8\.3\.2 Reducing with `purrr`
Reducing is another important concept in functional programming. It allows going from a list of
elements, to a single element, by somehow *combining* the elements into one. For instance, using
the base R `Reduce()` function, you can sum the elements of a list like so:
```
Reduce(`+`, seq(1:100))
```
```
## [1] 5050
```
using `purrr::reduce()`, this becomes:
```
reduce(seq(1:100), `+`)
```
```
## [1] 5050
```
If you don’t really get what is happening, don’t worry. Things should get clearer once I introduce
another version of `reduce()`, called `accumulate()`, which we will see below.
Sometimes, the direction from which we start to reduce is quite important. You can “start from the
end” of the list by using the `.dir` argument:
```
reduce(seq(1:100), `+`, .dir = "backward")
```
```
## [1] 5050
```
Of course, for commutative operations, direction does not matter. But it does matter for non\-commutative
operations:
```
reduce(seq(1:100), `-`)
```
```
## [1] -5048
```
```
reduce(seq(1:100), `-`, .dir = "backward")
```
```
## [1] -50
```
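The difference comes from how the operations nest; a sketch of the two bracketings:
```
# forward (the default): (((1 - 2) - 3) - ...) - 100 = -5048
# backward: 1 - (2 - (3 - (... (99 - 100)))) = -50
```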
Let’s now take a look at `accumulate()`. `accumulate()` is very similar to `reduce()`, but keeps the
intermediary results. Which intermediary results? Let’s try and see what happens:
```
a <- seq(1, 10)
accumulate(a, `-`)
```
```
## [1] 1 -1 -4 -8 -13 -19 -26 -34 -43 -53
```
`accumulate()` illustrates pretty well what is happening; the first element, `1`, is simply the
first element of `seq(1, 10)`. The second element of the result however, is the difference between
`1` and `2`, `-1`. The next element in `a` is `3`. Thus the next result is `-1-3`, `-4`, and so
on until we run out of elements in `a`.
The below illustration shows the algorithm step\-by\-step:
```
(1-2-3-4-5-6-7-8-9-10)
((1)-2-3-4-5-6-7-8-9-10)
((1-2)-3-4-5-6-7-8-9-10)
((-1-3)-4-5-6-7-8-9-10)
((-4-4)-5-6-7-8-9-10)
((-8-5)-6-7-8-9-10)
((-13-6)-7-8-9-10)
((-19-7)-8-9-10)
((-26-8)-9-10)
((-34-9)-10)
(-43-10)
-53
```
`reduce()` only shows the final result of all these operations. `accumulate()` and `reduce()` also
have an `.init` argument, that makes it possible to start the reducing procedure from an initial
value that is different from the first element of the vector:
```
reduce(a, `+`, .init = 1000)
accumulate(a, `-`, .init = 1000, .dir = "backward")
```
```
## [1] 1055
```
```
## [1] 995 -994 996 -993 997 -992 998 -991 999 -990 1000
```
`reduce()` generalizes functions that only take two arguments to an arbitrary number of inputs. If you were to write a function that returns
the minimum between two numbers:
```
my_min <- function(a, b){
if(a < b){
return(a)
} else {
return(b)
}
}
```
You could use `reduce()` to get the minimum of a list of numbers:
```
numbers2 <- c(3, 1, -8, 9)
reduce(numbers2, my_min)
```
```
## [1] -8
```
`map()` and `reduce()` are arguably the most useful higher\-order functions, and perhaps also the
most famous ones, true ambassadors of functional programming. You might have read about
[MapReduce](https://en.wikipedia.org/wiki/MapReduce), a programming model for processing big
data in parallel. The way MapReduce works is inspired by both these `map()` and `reduce()` functions,
which are always included in functional programming languages. This illustrates that the functional
programming paradigm is very well suited to parallel computing.
Something else that is very important to understand at this point; up until now, we only used these
functions on lists, or atomic vectors, of numbers. However, `map()` and `reduce()`, and other
higher\-order functions for that matter, do not care about the contents of the list. What these
functions do is take another function, and make it do something to the elements of the list.
It does not matter if it’s a list of numbers, of characters, of data frames, even of models. All that
matters is that the function that will be applied to these elements, can operate on them.
So if you have a list of fitted models, you can map `summary()` on this list to get summaries of
each model. Or if you have a list of data frames, you can map a function that performs several
cleaning steps. This will be explored in a future section, but it is important to keep this in mind.
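As a quick illustration of the first idea, here is a sketch (assuming `{dplyr}` is loaded for `%>%`): fit one linear model per group of `mtcars`, then map `summary()` over the resulting list of models:
```
models <- mtcars %>%
  split(mtcars$cyl) %>%          # one data frame per number of cylinders
  map(~lm(mpg ~ wt, data = .))   # one fitted model per data frame
map(models, summary)             # one summary per model
```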
### 8\.3\.3 Error handling with `safely()` and `possibly()`
`safely()` and `possibly()` are very useful functions. Consider the following situation:
```
a <- list("a", 4, 5)
sqrt(a)
```
```
Error in sqrt(a) : non-numeric argument to mathematical function
```
Using `map()` or `Map()` will result in a similar error. `safely()` is a higher\-order function that
takes one function as an argument and executes it… *safely*, meaning the execution of the function
will not stop if there is an error. The error message gets captured alongside valid results.
```
a <- list("a", 4, 5)
safe_sqrt <- safely(sqrt)
map(a, safe_sqrt)
```
```
## [[1]]
## [[1]]$result
## NULL
##
## [[1]]$error
## <simpleError in .Primitive("sqrt")(x): non-numeric argument to mathematical function>
##
##
## [[2]]
## [[2]]$result
## [1] 2
##
## [[2]]$error
## NULL
##
##
## [[3]]
## [[3]]$result
## [1] 2.236068
##
## [[3]]$error
## NULL
```
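Each element of the result is itself a list with a `result` and an `error` component. A common follow\-up, sketched below with `transpose()` (which we will meet at the end of this chapter), is to flip this into one list of results and one list of errors:
```
res <- transpose(map(a, safe_sqrt))
res$result # list(NULL, 2, 2.236068)
res$error  # the error for "a", NULL for the two valid inputs
```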
`possibly()` works similarly, but also allows you to specify a return value in case of an error:
```
possible_sqrt <- possibly(sqrt, otherwise = NA_real_)
map(a, possible_sqrt)
```
```
## [[1]]
## [1] NA
##
## [[2]]
## [1] 2
##
## [[3]]
## [1] 2.236068
```
Of course, in this particular example, the same effect could be obtained way more easily:
```
sqrt(as.numeric(a))
```
```
## Warning: NAs introduced by coercion
```
```
## [1] NA 2.000000 2.236068
```
However, in some situations, this trick does not work as intended (or at all). `possibly()` and
`safely()` allow the programmer to model errors explicitly, and to then provide a consistent way
of dealing with them. For instance, consider the following example:
```
data(mtcars)
write.csv(mtcars, "my_data/mtcars.csv")
```
```
Error in file(file, ifelse(append, "a", "w")) :
cannot open the connection
In addition: Warning message:
In file(file, ifelse(append, "a", "w")) :
cannot open file 'my_data/mtcars.csv': No such file or directory
```
The folder `my_data/` does not exist, and as such this code produces an error. You might
want to catch this error, and create the directory, for instance:
```
possibly_write.csv <- possibly(write.csv, otherwise = NULL)
if(is.null(possibly_write.csv(mtcars, "my_data/mtcars.csv"))) {
print("Creating folder...")
dir.create("my_data/")
print("Saving file...")
write.csv(mtcars, "my_data/mtcars.csv")
}
```
```
[1] "Creating folder..."
[1] "Saving file..."
Warning message:
In file(file, ifelse(append, "a", "w")) :
cannot open file 'my_data/mtcars.csv': No such file or directory
```
The warning message comes from the first time we try to write the `.csv`, inside the `if`
statement. Because this fails, we create the directory and then actually save the file.
In the exercises, you’ll discover `quietly()`, which also captures warnings and messages.
To conclude this section: remember function factories? It turns out that `safely()`, `possibly()` and `quietly()` are
function factories.
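As a teaser, here is a minimal sketch of `quietly()`: like `safely()`, it wraps a function, but instead of errors it captures warnings, messages and printed output:
```
quiet_log <- quietly(log)

# log(-1) normally emits a "NaNs produced" warning; the wrapped version
# returns a list with $result, $output, $warnings and $messages instead
quiet_log(-1)
```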
### 8\.3\.4 Partial applications with `partial()`
Consider the following simple function:
```
add <- function(a, b) a+b
```
It is possible to create a new function, where one of the parameters is fixed, for instance, where
`a = 10`:
```
add_to_10 <- partial(add, a = 10)
```
```
add_to_10(12)
```
```
## [1] 22
```
This is equivalent to the following:
```
add_to_10_2 <- function(b){
add(a = 10, b)
}
```
Using `partial()` is much less verbose, however, and it allows you to define new functions very quickly:
```
head10 <- partial(head, n = 10)
head10(mtcars)
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
## Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
## Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
## Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
## Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
## Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
```
### 8\.3\.5 Function composition using `compose`
Function composition is another handy tool, which makes chaining functions much more elegant:
```
compose(sqrt, log10, exp)(10)
```
```
## [1] 2.083973
```
You can read this expression as *`sqrt()` after `log10()` after `exp()`*; it is equivalent to:
```
sqrt(log10(exp(10)))
```
```
## [1] 2.083973
```
It is also possible to reverse the order in which the functions get called using the `.dir =` option:
```
compose(sqrt, log10, exp, .dir = "forward")(10)
```
```
## [1] 1.648721
```
One could also use the `%>%` operator to achieve the same result:
```
10 %>%
sqrt %>%
log10 %>%
exp
```
```
## [1] 1.648721
```
but strictly speaking, this is not function composition.
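The difference is that `compose()` returns a new *function* that can be stored and reused, whereas the pipe immediately computes a value. A minimal sketch (the name `forward_chain` is made up for the illustration):
```
# compose() builds a reusable function object...
forward_chain <- compose(sqrt, log10, exp, .dir = "forward")
forward_chain(10) # same result as the pipe above: 1.648721

# ...whereas `10 %>% sqrt %>% log10 %>% exp` computes a number right away,
# leaving no function object behind to reuse
```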
### 8\.3\.6 «Transposing lists»
Another interesting function is `transpose()`. It is not an alternative to the function `t()` from
`base`, but it has a similar effect; `transpose()` works on lists instead of matrices. Let’s take a look at the example
from before:
```
safe_sqrt <- safely(sqrt, otherwise = NA_real_)
map(a, safe_sqrt)
```
```
## [[1]]
## [[1]]$result
## [1] NA
##
## [[1]]$error
## <simpleError in .Primitive("sqrt")(x): non-numeric argument to mathematical function>
##
##
## [[2]]
## [[2]]$result
## [1] 2
##
## [[2]]$error
## NULL
##
##
## [[3]]
## [[3]]$result
## [1] 2.236068
##
## [[3]]$error
## NULL
```
The output is a list of lists: each element contains a `result` and an `error`. One
might want to have all the results in a single list, and all the error messages in another list.
This is possible with `transpose()`:
```
purrr::transpose(map(a, safe_sqrt))
```
```
## $result
## $result[[1]]
## [1] NA
##
## $result[[2]]
## [1] 2
##
## $result[[3]]
## [1] 2.236068
##
##
## $error
## $error[[1]]
## <simpleError in .Primitive("sqrt")(x): non-numeric argument to mathematical function>
##
## $error[[2]]
## NULL
##
## $error[[3]]
## NULL
```
I explicitly call `purrr::transpose()` because there is also a `data.table::transpose()`, which
is not the same function. You have to be careful about that sort of thing, because it can cause
errors in your programs, and debugging this type of error is a nightmare.
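As an aside, for the common case of extracting all the `result` (or all the `error`) elements, `map()` offers a shortcut: passing a string instead of a function extracts the element with that name from each sub\-list. A minimal sketch:
```
# Extract the "result" element of each sub-list...
map(map(a, safe_sqrt), "result")

# ...and the "error" element
map(map(a, safe_sqrt), "error")
```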
Now that we are familiar with functional programming, let’s try to apply some of its principles
to data manipulation.
8\.4 List\-based workflows for efficiency
-----------------------------------------
You can use your own functions in pipe workflows:
```
double_number <- function(x){
x+x
}
```
```
mtcars %>%
head() %>%
mutate(double_mpg = double_number(mpg))
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb double_mpg
## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 42.0
## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4 42.0
## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 45.6
## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 42.8
## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 37.4
## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 36.2
```
It is important to understand that your own functions, functions built into R, and functions that
come from packages are exactly the same kind of thing. Every function is a first\-class object in R, no
matter where it comes from. The consequence of functions being first\-class objects is that
functions can take functions as arguments, functions can return functions (the function factories
from the previous chapter) and functions can be assigned to any variable:
```
plop <- sqrt
plop(4)
```
```
## [1] 2
```
```
bacon <- function(.f){
message("Bacon is tasty")
.f
}
bacon(sqrt) # `bacon` is a function factory, as it returns a function (alongside an informative message)
```
```
## Bacon is tasty
```
```
## function (x) .Primitive("sqrt")
```
```
# To actually call it:
bacon(sqrt)(4)
```
```
## Bacon is tasty
```
```
## [1] 2
```
Now, let’s step back for a bit and think about what we learned up until now, and especially
the `map()` family of functions.
Let’s read the list of datasets from the previous chapter:
```
# import_list() comes from the {rio} package
paths <- Sys.glob("datasets/unemployment/*.csv")
all_datasets <- import_list(paths)
str(all_datasets)
```
```
## List of 4
## $ unemp_2013:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 223407 17802 1703 844 1431 4094 2146 971 1218 3002 ...
## ..$ of which: Wage-earners : int [1:118] 203535 15993 1535 750 1315 3800 1874 858 1029 2664 ...
## ..$ of which: Non-wage-earners: int [1:118] 19872 1809 168 94 116 294 272 113 189 338 ...
## ..$ Unemployed : int [1:118] 19287 1071 114 25 74 261 98 45 66 207 ...
## ..$ Active population : int [1:118] 242694 18873 1817 869 1505 4355 2244 1016 1284 3209 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.95 5.67 6.27 2.88 4.92 ...
## ..$ Year : int [1:118] 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 228423 18166 1767 845 1505 4129 2172 1007 1268 3124 ...
## ..$ of which: Wage-earners : int [1:118] 208238 16366 1606 757 1390 3840 1897 887 1082 2782 ...
## ..$ of which: Non-wage-earners: int [1:118] 20185 1800 161 88 115 289 275 120 186 342 ...
## ..$ Unemployed : int [1:118] 19362 1066 122 19 66 287 91 38 61 202 ...
## ..$ Active population : int [1:118] 247785 19232 1889 864 1571 4416 2263 1045 1329 3326 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.81 5.54 6.46 2.2 4.2 ...
## ..$ Year : int [1:118] 2014 2014 2014 2014 2014 2014 2014 2014 2014 2014 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 233130 18310 1780 870 1470 4130 2170 1050 1300 3140 ...
## ..$ of which: Wage-earners : int [1:118] 212530 16430 1620 780 1350 3820 1910 920 1100 2770 ...
## ..$ of which: Non-wage-earners: int [1:118] 20600 1880 160 90 120 310 260 130 200 370 ...
## ..$ Unemployed : int [1:118] 18806 988 106 29 73 260 80 41 72 169 ...
## ..$ Active population : int [1:118] 251936 19298 1886 899 1543 4390 2250 1091 1372 3309 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.46 5.12 5.62 3.23 4.73 ...
## ..$ Year : int [1:118] 2015 2015 2015 2015 2015 2015 2015 2015 2015 2015 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 236100 18380 1790 870 1470 4160 2160 1030 1330 3150 ...
## ..$ of which: Wage-earners : int [1:118] 215430 16500 1640 780 1350 3840 1900 900 1130 2780 ...
## ..$ of which: Non-wage-earners: int [1:118] 20670 1880 150 90 120 320 260 130 200 370 ...
## ..$ Unemployed : int [1:118] 18185 975 91 27 66 246 76 35 70 206 ...
## ..$ Active population : int [1:118] 254285 19355 1881 897 1536 4406 2236 1065 1400 3356 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.15 5.04 4.84 3.01 4.3 ...
## ..$ Year : int [1:118] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
`all_datasets` is a list with 4 elements, each of which is a `data.frame`.
The first thing we are going to do is use a function to clean the names of the datasets. These
names are not very easy to work with; there are spaces, and it would be better if the column
names were all lowercase. For this, we are going to use the function `clean_names()` from the
`janitor` package. For a single dataset, I would write this:
```
library(janitor)
one_dataset <- one_dataset %>%
clean_names()
```
and I would get a dataset with column names in lowercase and spaces replaced by `_` (and other
corrections). How can I apply, or map, this function to each dataset in the list? To do this I need
to use `purrr::map()`, which we’ve seen in the previous section:
```
library(purrr)
all_datasets <- all_datasets %>%
map(clean_names)
all_datasets %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 223407 17802 1703 844 1431 4094 2146 971 1218 3002 ...
## ..$ of_which_wage_earners : int [1:118] 203535 15993 1535 750 1315 3800 1874 858 1029 2664 ...
## ..$ of_which_non_wage_earners : int [1:118] 19872 1809 168 94 116 294 272 113 189 338 ...
## ..$ unemployed : int [1:118] 19287 1071 114 25 74 261 98 45 66 207 ...
## ..$ active_population : int [1:118] 242694 18873 1817 869 1505 4355 2244 1016 1284 3209 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.95 5.67 6.27 2.88 4.92 ...
## ..$ year : int [1:118] 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 228423 18166 1767 845 1505 4129 2172 1007 1268 3124 ...
## ..$ of_which_wage_earners : int [1:118] 208238 16366 1606 757 1390 3840 1897 887 1082 2782 ...
## ..$ of_which_non_wage_earners : int [1:118] 20185 1800 161 88 115 289 275 120 186 342 ...
## ..$ unemployed : int [1:118] 19362 1066 122 19 66 287 91 38 61 202 ...
## ..$ active_population : int [1:118] 247785 19232 1889 864 1571 4416 2263 1045 1329 3326 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.81 5.54 6.46 2.2 4.2 ...
## ..$ year : int [1:118] 2014 2014 2014 2014 2014 2014 2014 2014 2014 2014 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 233130 18310 1780 870 1470 4130 2170 1050 1300 3140 ...
## ..$ of_which_wage_earners : int [1:118] 212530 16430 1620 780 1350 3820 1910 920 1100 2770 ...
## ..$ of_which_non_wage_earners : int [1:118] 20600 1880 160 90 120 310 260 130 200 370 ...
## ..$ unemployed : int [1:118] 18806 988 106 29 73 260 80 41 72 169 ...
## ..$ active_population : int [1:118] 251936 19298 1886 899 1543 4390 2250 1091 1372 3309 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.46 5.12 5.62 3.23 4.73 ...
## ..$ year : int [1:118] 2015 2015 2015 2015 2015 2015 2015 2015 2015 2015 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 236100 18380 1790 870 1470 4160 2160 1030 1330 3150 ...
## ..$ of_which_wage_earners : int [1:118] 215430 16500 1640 780 1350 3840 1900 900 1130 2780 ...
## ..$ of_which_non_wage_earners : int [1:118] 20670 1880 150 90 120 320 260 130 200 370 ...
## ..$ unemployed : int [1:118] 18185 975 91 27 66 246 76 35 70 206 ...
## ..$ active_population : int [1:118] 254285 19355 1881 897 1536 4406 2236 1065 1400 3356 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.15 5.04 4.84 3.01 4.3 ...
## ..$ year : int [1:118] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
Remember that `map(list, function)` simply applies `function` to each element of `list`.
So now, what if I want to know, for each dataset, which *communes* have an unemployment rate that is
less than, say, 3%? For a single dataset I would do something like this:
```
one_dataset %>%
filter(unemployment_rate_in_percent < 3)
```
but since we’re dealing with a list of data sets, we cannot simply use `filter()` on it. This is because
`filter()` expects a data frame, not a list of data frames. The way around this is to use `map()`.
```
all_datasets %>%
map(~filter(., unemployment_rate_in_percent < 3))
```
```
## $unemp_2013
## commune total_employed_population of_which_wage_earners
## 1 Garnich 844 750
## 2 Leudelange 1064 937
## 3 Bech 526 463
## of_which_non_wage_earners unemployed active_population
## 1 94 25 869
## 2 127 32 1096
## 3 63 16 542
## unemployment_rate_in_percent year
## 1 2.876870 2013
## 2 2.919708 2013
## 3 2.952030 2013
##
## $unemp_2014
## commune total_employed_population of_which_wage_earners
## 1 Garnich 845 757
## 2 Leudelange 1102 965
## 3 Bech 543 476
## 4 Flaxweiler 879 789
## of_which_non_wage_earners unemployed active_population
## 1 88 19 864
## 2 137 34 1136
## 3 67 15 558
## 4 90 27 906
## unemployment_rate_in_percent year
## 1 2.199074 2014
## 2 2.992958 2014
## 3 2.688172 2014
## 4 2.980132 2014
##
## $unemp_2015
## commune total_employed_population of_which_wage_earners
## 1 Bech 520 450
## 2 Bous 750 680
## of_which_non_wage_earners unemployed active_population
## 1 70 14 534
## 2 70 22 772
## unemployment_rate_in_percent year
## 1 2.621723 2015
## 2 2.849741 2015
##
## $unemp_2016
## commune total_employed_population of_which_wage_earners
## 1 Reckange-sur-Mess 980 850
## 2 Bech 520 450
## 3 Betzdorf 1500 1350
## 4 Flaxweiler 910 820
## of_which_non_wage_earners unemployed active_population
## 1 130 30 1010
## 2 70 11 531
## 3 150 45 1545
## 4 90 24 934
## unemployment_rate_in_percent year
## 1 2.970297 2016
## 2 2.071563 2016
## 3 2.912621 2016
## 4 2.569593 2016
```
`map()` needs a function to map over each element of the list, and `all_datasets` is the list I
want to map the function over. But which function? `filter()` is the function I need, so why doesn’t:
```
all_datasets %>%
map(filter(unemployment_rate_in_percent < 3))
```
work? This is what happens if we try it:
```
Error in filter(unemployment_rate_in_percent < 3) :
object 'unemployment_rate_in_percent' not found
```
This is because `filter()` needs both the dataset and a so\-called predicate (a predicate
is an expression that evaluates to `TRUE` or `FALSE`). But you need to make explicit
which is the dataset and which is the predicate, because here, `filter()` thinks that the
dataset is `unemployment_rate_in_percent`. The way to do this is to use an anonymous
function (discussed in Chapter 7\), which allows you to state explicitly which is the
dataset and which is the predicate. As we’ve seen, there are three ways to define
anonymous functions:
* Using a formula (only works within `{tidyverse}` functions):
```
all_datasets %>%
map(~filter(., unemployment_rate_in_percent < 3)) %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 3 obs. of 8 variables:
## ..$ commune : chr [1:3] "Garnich" "Leudelange" "Bech"
## ..$ total_employed_population : int [1:3] 844 1064 526
## ..$ of_which_wage_earners : int [1:3] 750 937 463
## ..$ of_which_non_wage_earners : int [1:3] 94 127 63
## ..$ unemployed : int [1:3] 25 32 16
## ..$ active_population : int [1:3] 869 1096 542
## ..$ unemployment_rate_in_percent: num [1:3] 2.88 2.92 2.95
## ..$ year : int [1:3] 2013 2013 2013
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Garnich" "Leudelange" "Bech" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 845 1102 543 879
## ..$ of_which_wage_earners : int [1:4] 757 965 476 789
## ..$ of_which_non_wage_earners : int [1:4] 88 137 67 90
## ..$ unemployed : int [1:4] 19 34 15 27
## ..$ active_population : int [1:4] 864 1136 558 906
## ..$ unemployment_rate_in_percent: num [1:4] 2.2 2.99 2.69 2.98
## ..$ year : int [1:4] 2014 2014 2014 2014
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 2 obs. of 8 variables:
## ..$ commune : chr [1:2] "Bech" "Bous"
## ..$ total_employed_population : int [1:2] 520 750
## ..$ of_which_wage_earners : int [1:2] 450 680
## ..$ of_which_non_wage_earners : int [1:2] 70 70
## ..$ unemployed : int [1:2] 14 22
## ..$ active_population : int [1:2] 534 772
## ..$ unemployment_rate_in_percent: num [1:2] 2.62 2.85
## ..$ year : int [1:2] 2015 2015
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Reckange-sur-Mess" "Bech" "Betzdorf" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 980 520 1500 910
## ..$ of_which_wage_earners : int [1:4] 850 450 1350 820
## ..$ of_which_non_wage_earners : int [1:4] 130 70 150 90
## ..$ unemployed : int [1:4] 30 11 45 24
## ..$ active_population : int [1:4] 1010 531 1545 934
## ..$ unemployment_rate_in_percent: num [1:4] 2.97 2.07 2.91 2.57
## ..$ year : int [1:4] 2016 2016 2016 2016
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
(notice the `.` in the formula, which makes explicit that the dataset is the first argument to `filter()`)
or
* using an anonymous function (using the `function(x)` keyword):
```
all_datasets %>%
map(function(x)filter(x, unemployment_rate_in_percent < 3)) %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 3 obs. of 8 variables:
## ..$ commune : chr [1:3] "Garnich" "Leudelange" "Bech"
## ..$ total_employed_population : int [1:3] 844 1064 526
## ..$ of_which_wage_earners : int [1:3] 750 937 463
## ..$ of_which_non_wage_earners : int [1:3] 94 127 63
## ..$ unemployed : int [1:3] 25 32 16
## ..$ active_population : int [1:3] 869 1096 542
## ..$ unemployment_rate_in_percent: num [1:3] 2.88 2.92 2.95
## ..$ year : int [1:3] 2013 2013 2013
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Garnich" "Leudelange" "Bech" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 845 1102 543 879
## ..$ of_which_wage_earners : int [1:4] 757 965 476 789
## ..$ of_which_non_wage_earners : int [1:4] 88 137 67 90
## ..$ unemployed : int [1:4] 19 34 15 27
## ..$ active_population : int [1:4] 864 1136 558 906
## ..$ unemployment_rate_in_percent: num [1:4] 2.2 2.99 2.69 2.98
## ..$ year : int [1:4] 2014 2014 2014 2014
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 2 obs. of 8 variables:
## ..$ commune : chr [1:2] "Bech" "Bous"
## ..$ total_employed_population : int [1:2] 520 750
## ..$ of_which_wage_earners : int [1:2] 450 680
## ..$ of_which_non_wage_earners : int [1:2] 70 70
## ..$ unemployed : int [1:2] 14 22
## ..$ active_population : int [1:2] 534 772
## ..$ unemployment_rate_in_percent: num [1:2] 2.62 2.85
## ..$ year : int [1:2] 2015 2015
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Reckange-sur-Mess" "Bech" "Betzdorf" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 980 520 1500 910
## ..$ of_which_wage_earners : int [1:4] 850 450 1350 820
## ..$ of_which_non_wage_earners : int [1:4] 130 70 150 90
## ..$ unemployed : int [1:4] 30 11 45 24
## ..$ active_population : int [1:4] 1010 531 1545 934
## ..$ unemployment_rate_in_percent: num [1:4] 2.97 2.07 2.91 2.57
## ..$ year : int [1:4] 2016 2016 2016 2016
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
* or, since R 4\.1, using the shorthand `\(x)`:
```
all_datasets %>%
map(\(x)filter(x, unemployment_rate_in_percent < 3)) %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 3 obs. of 8 variables:
## ..$ commune : chr [1:3] "Garnich" "Leudelange" "Bech"
## ..$ total_employed_population : int [1:3] 844 1064 526
## ..$ of_which_wage_earners : int [1:3] 750 937 463
## ..$ of_which_non_wage_earners : int [1:3] 94 127 63
## ..$ unemployed : int [1:3] 25 32 16
## ..$ active_population : int [1:3] 869 1096 542
## ..$ unemployment_rate_in_percent: num [1:3] 2.88 2.92 2.95
## ..$ year : int [1:3] 2013 2013 2013
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Garnich" "Leudelange" "Bech" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 845 1102 543 879
## ..$ of_which_wage_earners : int [1:4] 757 965 476 789
## ..$ of_which_non_wage_earners : int [1:4] 88 137 67 90
## ..$ unemployed : int [1:4] 19 34 15 27
## ..$ active_population : int [1:4] 864 1136 558 906
## ..$ unemployment_rate_in_percent: num [1:4] 2.2 2.99 2.69 2.98
## ..$ year : int [1:4] 2014 2014 2014 2014
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 2 obs. of 8 variables:
## ..$ commune : chr [1:2] "Bech" "Bous"
## ..$ total_employed_population : int [1:2] 520 750
## ..$ of_which_wage_earners : int [1:2] 450 680
## ..$ of_which_non_wage_earners : int [1:2] 70 70
## ..$ unemployed : int [1:2] 14 22
## ..$ active_population : int [1:2] 534 772
## ..$ unemployment_rate_in_percent: num [1:2] 2.62 2.85
## ..$ year : int [1:2] 2015 2015
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Reckange-sur-Mess" "Bech" "Betzdorf" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 980 520 1500 910
## ..$ of_which_wage_earners : int [1:4] 850 450 1350 820
## ..$ of_which_non_wage_earners : int [1:4] 130 70 150 90
## ..$ unemployed : int [1:4] 30 11 45 24
## ..$ active_population : int [1:4] 1010 531 1545 934
## ..$ unemployment_rate_in_percent: num [1:4] 2.97 2.07 2.91 2.57
## ..$ year : int [1:4] 2016 2016 2016 2016
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
As you can see, everything is starting to come together: lists to hold complex objects, over which anonymous
functions are mapped using higher\-order functions. Let’s continue cleaning this dataset.
Before merging these datasets together, we need them to have a `year` column indicating the
year the data was measured. It would also be helpful if we gave names to these datasets, meaning
converting the list to a named list. For this task, we can use `purrr::set_names()`:
```
all_datasets <- set_names(all_datasets, as.character(seq(2013, 2016)))
```
Let’s take a look at the list now:
```
str(all_datasets)
```
As you can see, each `data.frame` object contained in the list has been renamed. You can thus
access them with the `$` operator, as shown below.
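For instance, a minimal sketch (the backticks are needed because the names start with a digit):
```
# Access the dataset named "2013" by name
all_datasets$`2013` %>%
  glimpse()
```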
Using `map()`, we now know how to apply a function to each dataset in a list. But maybe it would be
easier to merge all the datasets first, and then manipulate them? Sometimes it is,
but not always.
As long as you provide a function and a list of elements to `reduce()`, you will get a single
output. So how could `reduce()` help us with merging all the datasets in the list? `dplyr`
comes with a lot of functions to merge *two* datasets. Remember that I said before that `reduce()`
allows you to generalize a function of two arguments? Let’s try it with our list of datasets:
```
unemp_lux <- reduce(all_datasets, full_join)
```
```
## Joining, by = c("commune", "total_employed_population", "of_which_wage_earners", "of_which_non_wage_earners",
## "unemployed", "active_population", "unemployment_rate_in_percent", "year")
## Joining, by = c("commune", "total_employed_population", "of_which_wage_earners", "of_which_non_wage_earners",
## "unemployed", "active_population", "unemployment_rate_in_percent", "year")
## Joining, by = c("commune", "total_employed_population", "of_which_wage_earners", "of_which_non_wage_earners",
## "unemployed", "active_population", "unemployment_rate_in_percent", "year")
```
```
glimpse(unemp_lux)
```
```
## Rows: 472
## Columns: 8
## $ commune <chr> "Grand-Duche de Luxembourg", "Canton Cape…
## $ total_employed_population <int> 223407, 17802, 1703, 844, 1431, 4094, 214…
## $ of_which_wage_earners <int> 203535, 15993, 1535, 750, 1315, 3800, 187…
## $ of_which_non_wage_earners <int> 19872, 1809, 168, 94, 116, 294, 272, 113,…
## $ unemployed <int> 19287, 1071, 114, 25, 74, 261, 98, 45, 66…
## $ active_population <int> 242694, 18873, 1817, 869, 1505, 4355, 224…
## $ unemployment_rate_in_percent <dbl> 7.947044, 5.674773, 6.274078, 2.876870, 4…
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013,…
```
`full_join()` is one of the `dplyr` functions that merge data. There are others that might be
useful depending on the kind of join operation you need. Let’s write this data to disk (using
`export()` from the `{rio}` package), as we’re going to keep using it for the next chapters:
```
export(unemp_lux, "datasets/unemp_lux.csv")
```
### 8\.4\.1 Functional programming and plotting
In this section, we are going to learn how to use the possibilities offered by the `purrr` package
and how it can work together with `ggplot2` to generate many plots. This is a more advanced topic,
but what comes next is also what makes R, and the functional programming paradigm, so powerful.
For example, suppose that instead of wanting a single plot with the unemployment rate of each
commune, you need one unemployment plot per commune:
```
unemp_lux_data %>%
filter(division == "Luxembourg") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Luxembourg", x = "Year", y = "Rate") +
geom_line()
```
and then you would write the same for “Esch\-sur\-Alzette” and also for “Wiltz”. If you only have to
make these 3 plots, copying and pasting the above lines is no big deal:
```
unemp_lux_data %>%
filter(division == "Esch-sur-Alzette") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Esch-sur-Alzette", x = "Year", y = "Rate") +
geom_line()
```
```
unemp_lux_data %>%
filter(division == "Wiltz") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Esch-sur-Alzette", x = "Year", y = "Rate") +
geom_line()
```
But copy and pasting is error prone. Can you spot the copy\-paste mistake I made? And what if you
have to create the above plots for all 108 Luxembourgish communes? That’s a lot of copy pasting.
What if, once you are done copy pasting, you have to change something, for example, the theme? You
could use the search and replace function of RStudio, true, but sometimes search and replace can
also introduce bugs and typos. You can avoid all these issues by using `purrr::map()`. What do you
need to map over? The commune names. So let’s create a list of commune names:
```
communes <- list("Luxembourg", "Esch-sur-Alzette", "Wiltz")
```
Now we can create the graphs using `map()`, or `map2()` to be exact:
```
plots_tibble <- unemp_lux_data %>%
filter(division %in% communes) %>%
group_by(division) %>%
nest() %>%
mutate(plot = map2(.x = data, .y = division, ~ggplot(data = .x) +
theme_minimal() +
geom_line(aes(year, unemployment_rate_in_percent, group = 1)) +
labs(title = paste("Unemployment in", .y))))
```
Let’s study this line by line: the first line is easy, we simply use `filter()` to keep only the
communes we are interested in. Then we group by `division` and use `tidyr::nest()`. As a refresher,
let’s take a look at what this does:
```
unemp_lux_data %>%
filter(division %in% communes) %>%
group_by(division) %>%
nest()
```
```
## # A tibble: 3 × 2
## # Groups: division [3]
## division data
## <chr> <list>
## 1 Esch-sur-Alzette <tibble [15 × 7]>
## 2 Luxembourg <tibble [15 × 7]>
## 3 Wiltz <tibble [15 × 7]>
```
This creates a tibble with two columns, `division` and `data`, where the `data` column contains,
for each commune, another tibble with all the original variables. This is very useful,
because now we can pass these tibbles to `map2()` to generate the plots. But why `map2()`, and
what’s the difference with `map()`? `map2()` works the same way as `map()`, but maps over two
inputs:
```
numbers1 <- list(1, 2, 3, 4, 5)
numbers2 <- list(9, 8, 7, 6, 5)
map2(numbers1, numbers2, `*`)
```
```
## [[1]]
## [1] 9
##
## [[2]]
## [1] 16
##
## [[3]]
## [1] 21
##
## [[4]]
## [1] 24
##
## [[5]]
## [1] 25
```
In our example with the graphs, the two inputs are the data and the names of the communes. This is
used to create the title with `labs(title = paste("Unemployment in", .y))`, where `.y` is the
second input of `map2()`, the commune names contained in the variable `division`.
So what happened? We now have a tibble called `plots_tibble` that looks like this:
```
print(plots_tibble)
```
```
## # A tibble: 3 × 3
## # Groups: division [3]
## division data plot
## <chr> <list> <list>
## 1 Esch-sur-Alzette <tibble [15 × 7]> <gg>
## 2 Luxembourg <tibble [15 × 7]> <gg>
## 3 Wiltz <tibble [15 × 7]> <gg>
```
This tibble contains three columns: `division`, `data` and now a new one called `plot`, which we
created with the last line, `mutate(plot = ...)` (remember that `mutate()` adds columns to
tibbles). `plot` is a list\-column, whose elements are… plots! Yes, you read that right: the
elements of the column `plot` are literally plots. This is what I meant by list\-columns.
Let’s see what is inside the `data` and the `plot` columns exactly:
```
plots_tibble %>%
pull(data)
```
```
## [[1]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 11.3 665 10.1 10.8 561 4.95
## 2 2002 11.7 677 10.3 11.0 696 5.96
## 3 2003 11.7 674 10.2 10.9 813 6.94
## 4 2004 12.2 659 10.6 11.3 899 7.38
## 5 2005 11.9 654 10.3 11.0 952 7.97
## 6 2006 12.2 657 10.5 11.2 1.07 8.71
## 7 2007 12.6 634 10.9 11.5 1.03 8.21
## 8 2008 12.9 638 11.0 11.6 1.28 9.92
## 9 2009 13.2 652 11.0 11.7 1.58 11.9
## 10 2010 13.6 638 11.2 11.8 1.73 12.8
## 11 2011 13.9 630 11.5 12.1 1.77 12.8
## 12 2012 14.3 684 11.8 12.5 1.83 12.8
## 13 2013 14.8 694 12.0 12.7 2.05 13.9
## 14 2014 15.2 703 12.5 13.2 2.00 13.2
## 15 2015 15.3 710 12.6 13.3 2.03 13.2
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
##
## [[2]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 34.4 2.89 30.4 33.2 1.14 3.32
## 2 2002 34.8 2.94 30.3 33.2 1.56 4.5
## 3 2003 35.2 3.03 30.1 33.2 2.04 5.78
## 4 2004 35.6 3.06 30.1 33.2 2.39 6.73
## 5 2005 35.6 3.13 29.8 33.0 2.64 7.42
## 6 2006 35.5 3.12 30.3 33.4 2.03 5.72
## 7 2007 36.1 3.25 31.1 34.4 1.76 4.87
## 8 2008 37.5 3.39 31.9 35.3 2.23 5.95
## 9 2009 37.9 3.49 31.6 35.1 2.85 7.51
## 10 2010 38.6 3.54 32.1 35.7 2.96 7.66
## 11 2011 40.3 3.66 33.6 37.2 3.11 7.72
## 12 2012 41.8 3.81 34.6 38.4 3.37 8.07
## 13 2013 43.4 3.98 35.5 39.5 3.86 8.89
## 14 2014 44.6 4.11 36.7 40.8 3.84 8.6
## 15 2015 45.2 4.14 37.5 41.6 3.57 7.9
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
##
## [[3]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 2.13 223 1.79 2.01 122 5.73
## 2 2002 2.14 220 1.78 2.00 134 6.27
## 3 2003 2.18 223 1.79 2.02 163 7.48
## 4 2004 2.24 227 1.85 2.08 156 6.97
## 5 2005 2.26 229 1.85 2.08 187 8.26
## 6 2006 2.20 206 1.82 2.02 181 8.22
## 7 2007 2.27 198 1.88 2.08 197 8.67
## 8 2008 2.30 200 1.90 2.10 201 8.75
## 9 2009 2.36 201 1.94 2.15 216 9.14
## 10 2010 2.42 195 1.97 2.17 256 10.6
## 11 2011 2.48 190 2.02 2.21 269 10.9
## 12 2012 2.59 188 2.10 2.29 301 11.6
## 13 2013 2.66 195 2.15 2.34 318 12.0
## 14 2014 2.69 185 2.19 2.38 315 11.7
## 15 2015 2.77 180 2.27 2.45 321 11.6
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
```
Each element of `data` is a tibble for a specific commune, with columns `year`, `active_population`,
etc., the original columns. But obviously, there is no `division` column. So to plot the data, and
join all the dots together, we need to add `group = 1` in the call to `ggplot()` (whereas if you
plot multiple lines in the same graph, you need to write `group = division`).
But more interestingly, how can you actually see the plots? If you want to simply look at them, it
is enough to use `pull()`:
```
plots_tibble %>%
pull(plot)
```
```
## [[1]]
```
```
##
## [[2]]
```
```
##
## [[3]]
```
And if we want to save these plots, we can do so using `map2()`:
```
map2(paste0(plots_tibble$division, ".pdf"), plots_tibble$plot, ggsave)
```
```
Saving 7 x 5 in image
Saving 6.01 x 3.94 in image
Saving 6.01 x 3.94 in image
```
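Note that we call `ggsave()` purely for its side effect of writing files; we do not care about the list that `map2()` returns. In such cases, `walk2()`, the variant of `map2()` meant for side effects, might be the more idiomatic choice. A minimal sketch:
```
# walk2() calls the function for its side effects and returns its first
# argument invisibly, so nothing gets printed at the console
walk2(paste0(plots_tibble$division, ".pdf"), plots_tibble$plot, ggsave)
```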
This was probably the most advanced topic we have studied yet, but you will probably agree that
it is among the most useful ones. This section is a perfect illustration of the power of functional
programming: you can mix and match functions as long as you give them the correct arguments.
You can pass data to functions that use data, and then pass these functions to other functions that
take functions as arguments, such as `map()`.[7](#fn7) `map()` does not care whether the functions you pass to it produce tables,
graphs or even other functions. `map()` will simply map the function over a list of inputs, and as
long as these inputs are correct arguments to the function, `map()` will do its magic. If you
combine this with list\-columns, you can even use `map()` alongside `dplyr` functions and map your
function by first grouping, filtering, etc…
### 8\.4\.2 Modeling with functional programming
As written just above, `map()` simply applies a function to a list of inputs, and in the previous
section we mapped `ggplot()` to generate many plots at once. The same approach can be used to
map any modeling function, for instance `lm()`, to a list of datasets.
For instance, suppose that you wish to perform a Monte Carlo simulation, and that you are
dealing with a binary choice problem; usually, you would use a logistic regression for this.
However, in certain disciplines, especially in the social sciences, the so\-called Linear Probability
Model is often used as well. The LPM is a simple linear regression, but unlike the standard setting
of a linear regression, the dependent variable, or target, is a binary variable, not a continuous
one. Before you yell “Wait, that’s illegal”, you should know that in practice LPMs do a good
job of estimating marginal effects, which is what social scientists and econometricians are often
interested in. Marginal effects are another way of interpreting models, giving how the outcome
(or the target) changes given a change in an independent variable (or a feature). For instance,
a marginal effect of 0\.10 for age would mean that the probability of success increases by 10
percentage points for each added year of age. We already discussed marginal effects in Chapter 6\.
There has been a lot of discussion on logistic regression vs LPMs, and there are pros and cons
of using LPMs. Micro\-econometricians are still fond of LPMs, even though the pros of LPMs are
not really convincing. However, quoting Angrist and Pischke:
“While a nonlinear model may fit the CEF (population conditional expectation function) for LDVs
(limited dependent variables) more closely than a linear model, when it comes to marginal effects,
this probably matters little” (source: *Mostly Harmless Econometrics*)
so LPMs are still used for estimating marginal effects.
Let us check this assessment with one example. First, we simulate some data, then
run a logistic regression and compute the marginal effects, and then compare with a LPM:
```
set.seed(1234)
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- .5 + 2*x1 + 4*x2
p <- 1/(1 + exp(-z))
y <- rbinom(100, 1, p)
df <- tibble(y = y, x1 = x1, x2 = x2)
```
This data generating process generates data from a binary choice model. Fitting the model using a
logistic regression allows us to recover the structural parameters:
```
logistic_regression <- glm(y ~ ., data = df, family = binomial(link = "logit"))
```
Let’s see a summary of the model fit:
```
summary(logistic_regression)
```
```
##
## Call:
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.91941 -0.44872 0.00038 0.42843 2.55426
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.0960 0.3293 0.292 0.770630
## x1 1.6625 0.4628 3.592 0.000328 ***
## x2 3.6582 0.8059 4.539 5.64e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 138.629 on 99 degrees of freedom
## Residual deviance: 60.576 on 97 degrees of freedom
## AIC: 66.576
##
## Number of Fisher Scoring iterations: 7
```
We do recover the parameters that generated the data, but what about the marginal effects? We can
get the marginal effects easily using the `{margins}` package:
```
library(margins)
margins(logistic_regression)
```
```
## Average marginal effects
```
```
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
```
```
## x1 x2
## 0.1598 0.3516
```
Or, even better, we can compute the *true* marginal effects, since we know the data
generating process:
```
meffects <- function(dataset, coefs){
X <- dataset %>%
select(-y) %>%
as.matrix()
dydx_x1 <- mean(dlogis(X%*%c(coefs[2], coefs[3]))*coefs[2])
dydx_x2 <- mean(dlogis(X%*%c(coefs[2], coefs[3]))*coefs[3])
tribble(~term, ~true_effect,
"x1", dydx_x1,
"x2", dydx_x2)
}
(true_meffects <- meffects(df, c(0.5, 2, 4)))
```
```
## # A tibble: 2 × 2
## term true_effect
## <chr> <dbl>
## 1 x1 0.175
## 2 x2 0.350
```
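For reference, the quantity computed by `meffects()` above is the *average marginal effect* of a variable \\(x_k\\) in the logistic model, \\(\\frac{1}{n}\\sum_{i=1}^{n}\\lambda(x_i'\\beta)\\beta_k\\), where \\(\\lambda(\\cdot)\\) is the density of the logistic distribution (`dlogis()` in R) and \\(x_i'\\beta\\) is the linear index; this is exactly what the two `mean(dlogis(...))` lines compute.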
Ok, so now what about using this infamous Linear Probability Model to estimate the marginal effects?
```
lpm <- lm(y ~ ., data = df)
summary(lpm)
```
```
##
## Call:
## lm(formula = y ~ ., data = df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.83953 -0.31588 -0.02885 0.28774 0.77407
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.51340 0.03587 14.314 < 2e-16 ***
## x1 0.16771 0.03545 4.732 7.58e-06 ***
## x2 0.31250 0.03449 9.060 1.43e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.3541 on 97 degrees of freedom
## Multiple R-squared: 0.5135, Adjusted R-squared: 0.5034
## F-statistic: 51.18 on 2 and 97 DF, p-value: 6.693e-16
```
It’s not too bad, but maybe it could have been better in other circumstances; perhaps with more
observations, or for a different set of structural parameters, the results of the LPM
would have been closer. The LPM estimates the marginal effect of `x1` to be
0\.1677134 vs 0\.1597956
for the logistic regression, and for `x2`, the LPM estimate is 0\.3124966
vs 0\.351607\. The *true* marginal effects are
0\.1750963 and 0\.3501926 for `x1` and `x2` respectively.
Just as data scientists perform cross\-validation to assess the accuracy of a model, a Monte Carlo
study can be performed to assess how close the estimation of the marginal effects using an LPM is
to the marginal effects derived from a logistic regression. It will allow us to test with datasets
of different sizes, generated using different structural parameters.
First, let’s write a function that generates data. The function below generates 10 datasets of size
100 (the code is inspired by this [StackExchange answer](https://stats.stackexchange.com/a/46525)):
```
generate_datasets <- function(coefs = c(.5, 2, 4), sample_size = 100, repeats = 10){
generate_one_dataset <- function(coefs, sample_size){
x1 <- rnorm(sample_size)
x2 <- rnorm(sample_size)
z <- coefs[1] + coefs[2]*x1 + coefs[3]*x2
p <- 1/(1 + exp(-z))
y <- rbinom(sample_size, 1, p)
df <- tibble(y = y, x1 = x1, x2 = x2)
}
simulations <- rerun(.n = repeats, generate_one_dataset(coefs, sample_size))
tibble("coefs" = list(coefs), "sample_size" = sample_size, "repeats" = repeats, "simulations" = list(simulations))
}
```
Let’s first generate one dataset:
```
one_dataset <- generate_datasets(repeats = 1)
```
Let’s take a look at `one_dataset`:
```
one_dataset
```
```
## # A tibble: 1 × 4
## coefs sample_size repeats simulations
## <list> <dbl> <dbl> <list>
## 1 <dbl [3]> 100 1 <list [1]>
```
As you can see, the tibble with the simulated data is inside a list\-column called `simulations`.
Let’s take a closer look:
```
str(one_dataset$simulations)
```
```
## List of 1
## $ :List of 1
## ..$ : tibble [100 × 3] (S3: tbl_df/tbl/data.frame)
## .. ..$ y : int [1:100] 0 1 1 1 0 1 1 0 0 1 ...
## .. ..$ x1: num [1:100] 0.437 1.06 0.452 0.663 -1.136 ...
## .. ..$ x2: num [1:100] -2.316 0.562 -0.784 -0.226 -1.587 ...
```
The structure is quite complex, and it’s important to understand this, because it will have an
impact on the next lines of code; it is a list, containing a list, containing a dataset! No worries
though, we can still map over the datasets directly, by using `modify_depth()` instead of `map()`.
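As a minimal sketch of what `modify_depth()` does, consider applying a function at depth 2, where the data frames live; the outer list structure is left intact:
```
# nrow() is applied to each data frame two levels down,
# and the result keeps the list-of-lists structure
modify_depth(one_dataset$simulations, 2, nrow)
```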
Now, let’s fit an LPM and compare the estimation of the marginal effects with the *true* marginal
effects. In order to have some confidence in our results,
we will not simply run a linear regression on that single dataset; instead, we will simulate hundreds,
then thousands, then tens of thousands of data sets, get the marginal effects and compare
them to the true ones (though here I won’t simulate more than 500 datasets).
Let’s first generate 10 datasets:
```
many_datasets <- generate_datasets()
```
Now comes the tricky part. I have this object, `many_datasets`, which looks like this:
```
many_datasets
```
```
## # A tibble: 1 × 4
## coefs sample_size repeats simulations
## <list> <dbl> <dbl> <list>
## 1 <dbl [3]> 100 10 <list [10]>
```
I would like to fit LPMs to the 10 datasets. For this, I will need to use all the power of functional
programming and the `{tidyverse}`. I will be adding columns to this data frame using `mutate()`
and mapping over the `simulations` list\-column using `modify_depth()`. The list of data frames is
at the second level (remember, it’s a list containing a list containing data frames).
I’ll start by fitting the LPMs, then use `broom::tidy()` to get a nice data frame of the
estimated parameters. I will then select only what I need, and bind the rows of all the
data frames. I will do the same for the *true* marginal effects.
I highly suggest that you run the following lines one after another. It is complicated to understand
what’s going on if you are not used to such workflows. However, I hope to convince you that once
it clicks, it’ll be much more intuitive than doing all this inside a loop. Here’s the code:
```
results <- many_datasets %>%
mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>%
mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>%
mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>%
mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>%
mutate(lpm = map(lpm, bind_rows)) %>%
mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>%
mutate(true_effect = map(true_effect, bind_rows))
```
This is what `results` looks like:
```
results
```
```
## # A tibble: 1 × 6
## coefs sample_size repeats simulations lpm true_effect
## <list> <dbl> <dbl> <list> <list> <list>
## 1 <dbl [3]> 100 10 <list [10]> <tibble [20 × 2]> <tibble [20 × 2]>
```
Let’s take a closer look at the `lpm` and `true_effect` columns:
```
results$lpm
```
```
## [[1]]
## # A tibble: 20 × 2
## term estimate
## <chr> <dbl>
## 1 x1 0.228
## 2 x2 0.353
## 3 x1 0.180
## 4 x2 0.361
## 5 x1 0.165
## 6 x2 0.374
## 7 x1 0.182
## 8 x2 0.358
## 9 x1 0.125
## 10 x2 0.345
## 11 x1 0.171
## 12 x2 0.331
## 13 x1 0.122
## 14 x2 0.309
## 15 x1 0.129
## 16 x2 0.332
## 17 x1 0.102
## 18 x2 0.374
## 19 x1 0.176
## 20 x2 0.410
```
```
results$true_effect
```
```
## [[1]]
## # A tibble: 20 × 2
## term true_effect
## <chr> <dbl>
## 1 x1 0.183
## 2 x2 0.366
## 3 x1 0.166
## 4 x2 0.331
## 5 x1 0.174
## 6 x2 0.348
## 7 x1 0.169
## 8 x2 0.339
## 9 x1 0.167
## 10 x2 0.335
## 11 x1 0.173
## 12 x2 0.345
## 13 x1 0.157
## 14 x2 0.314
## 15 x1 0.170
## 16 x2 0.340
## 17 x1 0.182
## 18 x2 0.365
## 19 x1 0.161
## 20 x2 0.321
```
Let’s bind the columns, and compute the difference between the *true* and estimated marginal
effects:
```
simulation_results <- results %>%
mutate(difference = map2(.x = lpm, .y = true_effect, full_join)) %>%
mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>%
mutate(difference = map(difference, ~select(., term, difference))) %>%
pull(difference) %>%
.[[1]]
```
```
## Joining, by = "term"
```
Let’s take a look at the simulation results:
```
simulation_results %>%
group_by(term) %>%
summarise(mean = mean(difference),
sd = sd(difference))
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.0122 0.0368
## 2 x2 -0.0141 0.0311
```
Already with only 10 simulated datasets, the difference in means is not significant. Let’s rerun
the analysis, but for different sizes. In order to make things easier, we can put all the code
into a nifty function:
```
monte_carlo <- function(coefs, sample_size, repeats){
many_datasets <- generate_datasets(coefs, sample_size, repeats)
results <- many_datasets %>%
mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>%
mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>%
mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>%
mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>%
mutate(lpm = map(lpm, bind_rows)) %>%
mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>%
mutate(true_effect = map(true_effect, bind_rows))
simulation_results <- results %>%
mutate(difference = map2(.x = lpm, .y = true_effect, full_join)) %>%
mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>%
mutate(difference = map(difference, ~select(., term, difference))) %>%
pull(difference) %>%
.[[1]]
simulation_results %>%
group_by(term) %>%
summarise(mean = mean(difference),
sd = sd(difference))
}
```
And now, let’s run the simulation for different parameters and sizes:
```
monte_carlo(c(.5, 2, 4), 100, 10)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00826 0.0318
## 2 x2 -0.00732 0.0421
```
```
monte_carlo(c(.5, 2, 4), 100, 100)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.00360 0.0408
## 2 x2 0.00517 0.0459
```
```
monte_carlo(c(.5, 2, 4), 100, 500)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00152 0.0388
## 2 x2 -0.000701 0.0462
```
```
monte_carlo(c(pi, 6, 9), 100, 10)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00829 0.0421
## 2 x2 0.00178 0.0397
```
```
monte_carlo(c(pi, 6, 9), 100, 100)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.0107 0.0576
## 2 x2 0.00831 0.0772
```
```
monte_carlo(c(pi, 6, 9), 100, 500)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.00879 0.0518
## 2 x2 0.0113 0.0687
```
We see that, at least for this set of parameters, the LPM does a good job of estimating marginal
effects.
Now, this study might in itself not be very interesting to you, but I believe the general approach
is quite useful and flexible enough to be adapted to all kinds of use\-cases.
8\.5 Exercises
--------------
### Exercise 1
Suppose you have an Excel workbook that contains data on three sheets. Create a function that
reads entire workbooks, and that returns a list of tibbles, where each tibble is the data of one
sheet (download the example Excel workbook, `example_workbook.xlsx`, from the `assets` folder on
the book’s GitHub).
### Exercise 2
Use one of the `map()` functions to combine two lists into one. Consider the following two lists:
```
mediterranean <- list("starters" = list("humous", "lasagna"), "dishes" = list("sardines", "olives"))
continental <- list("starters" = list("pea soup", "terrine"), "dishes" = list("frikadelle", "sauerkraut"))
```
The result we’d like to have would look like this:
```
$starters
$starters[[1]]
[1] "humous"
$starters[[2]]
[1] "olives"
$starters[[3]]
[1] "pea soup"
$starters[[4]]
[1] "terrine"
$dishes
$dishes[[1]]
[1] "sardines"
$dishes[[2]]
[1] "lasagna"
$dishes[[3]]
[1] "frikadelle"
$dishes[[4]]
[1] "sauerkraut"
```
8\.1 Function definitions
-------------------------
You should now be familiar with function definitions in R. Let’s suppose you want to write a function
to compute the square root of a number and want to do so using Newton’s algorithm:
```
sqrt_newton <- function(a, init, eps = 0.01){
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
}
init
}
```
You can then use this function to get the square root of a number:
```
sqrt_newton(16, 2)
```
```
## [1] 4.00122
```
We are using a `while` loop inside the body of the function. The *body* of a function is the set of
instructions that define the function. You can get the body of a function with `body(some_func)`.
In *pure* functional programming languages, like Haskell, loops do not exist. How can you
program without loops, you may ask? In functional programming, loops are replaced by recursion,
which we already discussed in the previous chapter. Let’s rewrite our little example above
with recursion:
```
sqrt_newton_recur <- function(a, init, eps = 0.01){
if(abs(init**2 - a) < eps){
result <- init
} else {
init <- 1/2 * (init + a/init)
result <- sqrt_newton_recur(a, init, eps)
}
result
}
```
```
sqrt_newton_recur(16, 2)
```
```
## [1] 4.00122
```
R is not a pure functional programming language though, so we can still use loops (be it `while` or
`for` loops) in the bodies of our functions. As discussed in the previous chapter, it is actually
better, performance\-wise, to use loops instead of recursion, because R is not tail\-call optimized.
I won’t go into the details of what tail\-call optimization is, but just remember that if
performance is important, a loop will be faster. However, sometimes it is easier to write a
function using recursion. I personally tend to avoid loops if performance is not important,
because I find that code that avoids loops is easier to read and debug. However, knowing that
you can use loops is reassuring, and encapsulating loops inside functions gives you the benefits of
both functions and loops. In the coming sections I will show you some built\-in functions
that make it possible to avoid writing loops and that don’t rely on recursion, so performance
won’t be penalized.
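If you want to check this claim on your own machine, a quick and rough way is to time both implementations, for instance with the `{microbenchmark}` package (assuming it is installed):
```
library(microbenchmark)

# Compare the loop-based and the recursive implementations defined above
microbenchmark(
  loop = sqrt_newton(16, 2),
  recursion = sqrt_newton_recur(16, 2)
)
```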
8\.2 Properties of functions
----------------------------
Mathematical functions have a nice property: we always get the same output for a given input. This
is called referential transparency, and we should aim to write our R functions in such a way.
For example, the following function:
```
increment <- function(x){
x + 1
}
```
is a referentially transparent function. We always get the same result for any `x` that we give to
this function.
This:
```
increment(10)
```
```
## [1] 11
```
will always produce `11`.
However, this one:
```
increment_opaque <- function(x){
x + spam
}
```
is not a referentially transparent function, because its value depends on the global variable `spam`.
```
spam <- 1
increment_opaque(10)
```
```
## [1] 11
```
will produce `11` if `spam = 1`. But what if `spam = 19`?
```
spam <- 19
increment_opaque(10)
```
```
## [1] 29
```
To make `increment_opaque()` referentially transparent, it is enough to make `spam` an
argument:
```
increment_not_opaque <- function(x, spam){
x + spam
}
```
Now even if there is a global variable called `spam`, this will not influence our function:
```
spam <- 19
increment_not_opaque(10, 34)
```
```
## [1] 44
```
This is because the `spam` argument of the function is a local variable; it
could have been called anything else, really. Avoiding opaque functions makes our lives easier.
Another property that adepts of functional programming value is that functions should have no, or
very limited, side\-effects. This means that functions should not change the state of your program.
For example, consider this function (which is also not referentially transparent):
```
count_iter <- 0
sqrt_newton_side_effect <- function(a, init, eps = 0.01){
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
count_iter <<- count_iter + 1 # The "<<-" symbol means that we assign the
} # RHS value in a variable inside the global environment
init
}
```
If you look in the environment pane, you will see that `count_iter` equals 0\. Now call this
function with the following arguments:
```
sqrt_newton_side_effect(16000, 2)
```
```
## [1] 126.4911
```
```
print(count_iter)
```
```
## [1] 9
```
If you check the value of `count_iter` now, you will see that it increased! This is a side effect,
because the function changed something outside of its scope: it changed a value in the global
environment. In general, it is good practice to avoid side\-effects. For example, we could make the
above function free of side effects like this:
```
sqrt_newton_count <- function(a, init, count_iter = 0, eps = 0.01){
while(abs(init**2 - a) > eps){
init <- 1/2 *(init + a/init)
count_iter <- count_iter + 1
}
c(init, count_iter)
}
```
Now, this function returns a vector with two elements: the result, and the number of iterations it
took to get the result:
```
sqrt_newton_count(16000, 2)
```
```
## [1] 126.4911 9.0000
```
Writing to disk is also considered a side effect, because the function changes something (a file)
outside its scope. But this cannot be avoided since you *want* to write to disk.
Just remember: try to avoid having functions change variables in the global environment unless
you have a very good reason for doing so.
Very long scripts that don’t use functions and use a lot of global variables with loops changing
the values of global variables are a nightmare to debug. If something goes wrong, it might be very
difficult to pinpoint where the problem is. Is there an error in one of the loops?
Is your code running for a particular value of a particular variable in the global environment, but
not for other values? Which values? And of which variables? It can be very difficult to know what
is wrong with such a script.
With functional programming, you can avoid a lot of this pain for free (well not entirely for free,
it still requires some effort, since R is not a pure functional language). Writing functions also
makes it easier to parallelize your code. We are going to learn about that later in this chapter too.
Finally, another property of mathematical functions, is that they do one single thing. Functional
programming purists also program their functions to do one single task. This has benefits, but
can complicate things. The function we wrote previously does two things: it computes the square
root of a number and also returns the number of iterations it took to compute the result. However,
this is not a bad thing; the function is doing two tasks, but these tasks are related to each other
and it makes sense to have them together. My piece of advice: avoid having functions that do
many *unrelated* things. This makes debugging harder.
In conclusion: you should strive for referential transparency, try to avoid side effects unless you
have a good reason to have them, and try to keep your functions short, doing as few tasks as
possible. This makes testing and debugging easier, as you will see in the next chapter, but also
improves readability and maintainability of your code.
8\.3 Functional programming with `{purrr}`
------------------------------------------
I mentioned it several times already, but R is not a pure functional programming language. It is
possible to write R code using the functional programming paradigm, but some effort is required.
The `{purrr}` package extends R’s base functional programming capabilities with some very interesting
functions. We have already seen `map()` and `reduce()`, which we are going to see in more detail now.
Then, we are going to learn about some other functions included in `{purrr}` that make functional
programming easier in R.
### 8\.3\.1 Doing away with loops: the `map*()` family of functions
Instead of using loops, pure functional programming languages use functions that achieve
the same result. These functions are often called `Map` or `Reduce` (also called `Fold`). R comes
with the `*apply()` family of functions (which are implementations of `Map`),
as well as `Reduce()` for functional programming.
Within this family, you can find `lapply()`, `sapply()`, `vapply()`, `tapply()`, `mapply()`, `rapply()`,
`eapply()` and `apply()` (I might have forgotten one or the other, but that’s not important).
Each version of an `*apply()` function has a different purpose, but it is not very easy to
remember which does what exactly. To add even more confusion, the arguments sometimes differ
between these functions.
In the `{purrr}` package, these functions are replaced by the `map*()` family of functions. As you will
shortly see, they are very consistent, and thus easier to use.
These functions’ names all start with `map_`, and the suffix tells you what
the function is going to return. For example, if you want `double`s out, you would use `map_dbl()`.
If you are working on data frames and want a data frame back, you would use `map_df()`. Let’s start
with the basic `map()` function. The following gif
(source: [Wikipedia](https://en.wikipedia.org/wiki/Map_(higher-order_function))) illustrates
what `map()` does fairly well:
\\(X\\) is a vector composed of the following scalars: \\((0, 5, 8, 3, 2, 1\)\\). The function we want to
map to each element of \\(X\\) is \\(f(x) \= x \+ 1\\). \\(X'\\) is the result of this operation. Using R, we
would do the following:
```
library("purrr")
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- map(numbers, plus_one)
my_results
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 6
##
## [[3]]
## [1] 9
##
## [[4]]
## [1] 4
##
## [[5]]
## [1] 3
##
## [[6]]
## [1] 2
```
Using a loop, you would write:
```
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- vector("list", 6)
for(number in seq_along(numbers)){
my_results[[number]] <- plus_one(numbers[number])
}
my_results
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 6
##
## [[3]]
## [1] 9
##
## [[4]]
## [1] 4
##
## [[5]]
## [1] 3
##
## [[6]]
## [1] 2
```
Now I don’t know about you, but I prefer the first option. Using functional programming, you don’t
need to create an empty list to hold your results, and the code is more concise. Plus,
it is less error prone. I had to try several times to get the loop right
(and I’ve been using R for almost 10 years now). Why? Well, first of all I used `%in%` instead of `in`.
Then, I forgot about `seq_along()`. After that, I made a typo, `plos_one()` instead of `plus_one()`
(ok, that one is unrelated to the loop). Let’s also see how this works using base R:
```
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- lapply(numbers, plus_one)
my_results
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 6
##
## [[3]]
## [1] 9
##
## [[4]]
## [1] 4
##
## [[5]]
## [1] 3
##
## [[6]]
## [1] 2
```
So what is the added value of using `{purrr}`, you might ask. Well, imagine that instead of a list,
I need an atomic vector of `numeric`s. This is fairly easy with `{purrr}`:
```
library("purrr")
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
my_results <- map_dbl(numbers, plus_one)
my_results
```
```
## [1] 1 6 9 4 3 2
```
We’re going to discuss these functions below, but know that in base R, outputting something else
involves more effort.
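To make that claim concrete, here is a sketch of the base R equivalent using `vapply()`, which
forces you to describe the expected output yourself:
```
numbers <- c(0, 5, 8, 3, 2, 1)
plus_one <- function(x) (x + 1)
# vapply() needs an explicit template (FUN.VALUE) for the output type and length
vapply(numbers, plus_one, FUN.VALUE = numeric(1)) # returns 1 6 9 4 3 2
```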
Let’s go back to our `sqrt_newton()` function. This function has more than one parameter. Often,
we would like to map functions with more than one parameter to a list, while holding constant
some of the function’s parameters. This is easily achieved like so:
```
library("purrr")
numbers <- c(7, 8, 19, 64)
map(numbers, sqrt_newton, init = 1)
```
```
## [[1]]
## [1] 2.645767
##
## [[2]]
## [1] 2.828469
##
## [[3]]
## [1] 4.358902
##
## [[4]]
## [1] 8.000002
```
It is also possible to use a formula:
```
library("purrr")
numbers <- c(7, 8, 19, 64)
map(numbers, ~sqrt_newton(., init = 1))
```
```
## [[1]]
## [1] 2.645767
##
## [[2]]
## [1] 2.828469
##
## [[3]]
## [1] 4.358902
##
## [[4]]
## [1] 8.000002
```
Another function that is similar to `map()` is `rerun()`. You guessed it, this one simply
reruns an expression:
```
rerun(10, "hello")
```
```
## [[1]]
## [1] "hello"
##
## [[2]]
## [1] "hello"
##
## [[3]]
## [1] "hello"
##
## [[4]]
## [1] "hello"
##
## [[5]]
## [1] "hello"
##
## [[6]]
## [1] "hello"
##
## [[7]]
## [1] "hello"
##
## [[8]]
## [1] "hello"
##
## [[9]]
## [1] "hello"
##
## [[10]]
## [1] "hello"
```
`rerun()` simply runs an expression (which can be arbitrarily complex) `n` times, whereas `map()`
maps a function to a list of inputs, so to achieve the same with `map()`, you need to map the `print()`
function to a vector of characters:
```
map(rep("hello", 10), print)
```
```
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
## [1] "hello"
```
```
## [[1]]
## [1] "hello"
##
## [[2]]
## [1] "hello"
##
## [[3]]
## [1] "hello"
##
## [[4]]
## [1] "hello"
##
## [[5]]
## [1] "hello"
##
## [[6]]
## [1] "hello"
##
## [[7]]
## [1] "hello"
##
## [[8]]
## [1] "hello"
##
## [[9]]
## [1] "hello"
##
## [[10]]
## [1] "hello"
```
`rep()` is a function that creates a vector by repeating something, in this case the string “hello”,
as many times as needed, here 10\. The output here is a bit different than before though, because first
you will see “hello” printed 10 times and then the list where each element is “hello”.
This is because the `print()` function has a side effect, which is, well, printing to the console.
We see this side effect 10 times, and then the list created with `map()`.
`rerun()` is useful if you want to run simulations. For instance, let’s suppose that I perform a simulation
where I throw a die 5 times, and compute the mean of the points obtained, as well as the variance:
```
mean_var_throws <- function(n){
throws <- sample(1:6, n, replace = TRUE)
mean_throws <- mean(throws)
var_throws <- var(throws)
tibble::tribble(~mean_throws, ~var_throws,
mean_throws, var_throws)
}
mean_var_throws(5)
```
```
## # A tibble: 1 × 2
## mean_throws var_throws
## <dbl> <dbl>
## 1 2.2 1.7
```
`mean_var_throws()` returns a `tibble` object with the mean and the variance of the points. Now suppose
I want to compute the expected value of the distribution of throwing dice. We know from theory that it should
be equal to \\(3\.5 (\= 1\*1/6 \+ 2\*1/6 \+ 3\*1/6 \+ 4\*1/6 \+ 5\*1/6 \+ 6\*1/6\)\\).
Let’s rerun the simulation 50 times:
```
simulations <- rerun(50, mean_var_throws(5))
```
Let’s see what the `simulations` object is made of:
```
str(simulations)
```
```
## List of 50
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2
## ..$ var_throws : num 3
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2.8
## ..$ var_throws : num 0.2
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2.8
## ..$ var_throws : num 0.7
## $ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of 2 variables:
## ..$ mean_throws: num 2.8
## ..$ var_throws : num 1.7
.....
```
`simulations` is a list of 50 data frames. We can easily combine them into a single data frame, and compute the
mean of the means, which should return something close to the expected value of 3\.5:
```
bind_rows(simulations) %>%
summarise(expected_value = mean(mean_throws))
```
```
## # A tibble: 1 × 1
## expected_value
## <dbl>
## 1 3.44
```
Pretty close! Now of course, one could have simply done something like this:
```
mean(sample(1:6, 1000, replace = TRUE))
```
```
## [1] 3.481
```
but the point was to illustrate that `rerun()` can run any arbitrarily complex expression, and that it is good
practice to put the result in a data frame or list, for easier further manipulation.
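As an aside, a sketch of an alternative: `map_dfr()` maps and row\-binds the results in a single
step, so the `rerun()` plus `bind_rows()` combination above could also be written as:
```
# The input 1:50 only provides the number of iterations; .x is ignored
simulations_df <- map_dfr(1:50, ~mean_var_throws(5))
simulations_df %>%
  summarise(expected_value = mean(mean_throws))
```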
You now know the standard `map()` function, and also `rerun()`, which both return lists, but there are a
number of variants of `map()`. `map_dbl()` returns an atomic vector of doubles, as
we’ve seen before. A little reminder below:
```
map_dbl(numbers, sqrt_newton, init = 1)
```
```
## [1] 2.645767 2.828469 4.358902 8.000002
```
In a similar fashion, `map_chr()` returns an atomic vector of strings:
```
map_chr(numbers, sqrt_newton, init = 1)
```
```
## [1] "2.645767" "2.828469" "4.358902" "8.000002"
```
`map_lgl()` returns an atomic vector of `TRUE` or `FALSE`:
```
divisible <- function(x, y){
if_else(x %% y == 0, TRUE, FALSE)
}
map_lgl(seq(1:100), divisible, 3)
```
```
## [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [13] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [25] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [37] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [49] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [61] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [73] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [85] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## [97] FALSE FALSE TRUE FALSE
```
There are also other interesting variants, such as `map_if()`:
```
a <- seq(1,10)
map_if(a, (function(x) divisible(x, 2)), sqrt)
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 1.414214
##
## [[3]]
## [1] 3
##
## [[4]]
## [1] 2
##
## [[5]]
## [1] 5
##
## [[6]]
## [1] 2.44949
##
## [[7]]
## [1] 7
##
## [[8]]
## [1] 2.828427
##
## [[9]]
## [1] 9
##
## [[10]]
## [1] 3.162278
```
I used `map_if()` to take the square root of only those numbers in vector `a` that are divisible by 2,
by using an anonymous function that checks if a number is divisible by 2 (by wrapping `divisible()`).
`map_at()` is similar to `map_if()` but maps the function at a position specified by the user:
```
map_at(numbers, c(1, 3), sqrt)
```
```
## [[1]]
## [1] 2.645751
##
## [[2]]
## [1] 8
##
## [[3]]
## [1] 4.358899
##
## [[4]]
## [1] 64
```
or if you have a named list:
```
recipe <- list("spam" = 1, "eggs" = 3, "bacon" = 10)
map_at(recipe, "bacon", `*`, 2)
```
```
## $spam
## [1] 1
##
## $eggs
## [1] 3
##
## $bacon
## [1] 20
```
I used `map_at()` to double the quantity of bacon in the recipe (by using the `*` function and specifying
its second argument, `2`; try the following in the command prompt: `` `*`(3, 4) ``).
`map2()` is the equivalent of `mapply()` and `pmap()` is the generalisation of `map2()` for more
than 2 arguments:
```
print(a)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
b <- seq(1, 2, length.out = 10)
print(b)
```
```
## [1] 1.000000 1.111111 1.222222 1.333333 1.444444 1.555556 1.666667 1.777778
## [9] 1.888889 2.000000
```
```
map2(a, b, `*`)
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 2.222222
##
## [[3]]
## [1] 3.666667
##
## [[4]]
## [1] 5.333333
##
## [[5]]
## [1] 7.222222
##
## [[6]]
## [1] 9.333333
##
## [[7]]
## [1] 11.66667
##
## [[8]]
## [1] 14.22222
##
## [[9]]
## [1] 17
##
## [[10]]
## [1] 20
```
Each element of `a` gets multiplied by the element of `b` that is in the same position.
Let’s see what `pmap()` does. Can you guess from the code below what is going on? I will print
`a` and `b` again for clarity:
```
a
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
b
```
```
## [1] 1.000000 1.111111 1.222222 1.333333 1.444444 1.555556 1.666667 1.777778
## [9] 1.888889 2.000000
```
```
n <- seq(1:10)
pmap(list(a, b, n), rnorm)
```
```
## [[1]]
## [1] -0.1758315
##
## [[2]]
## [1] -0.2162863 1.1033912
##
## [[3]]
## [1] 4.5731231 -0.3743379 6.8130737
##
## [[4]]
## [1] 0.8933089 4.1930837 7.5276030 -2.3575522
##
## [[5]]
## [1] 2.1814981 -1.7455750 5.0548288 2.7848458 0.9230675
##
## [[6]]
## [1] 2.806217 5.667499 -5.032922 6.741065 -2.757928 12.414101
##
## [[7]]
## [1] -3.314145 -7.912019 -3.865292 4.307842 18.022049 1.278158 1.083208
##
## [[8]]
## [1] 6.2629161 2.1213552 0.3543566 2.1041606 -0.2643654 8.7600450 3.3616206
## [8] -7.7446668
##
## [[9]]
## [1] -7.609538 5.472267 -4.869374 -11.943063 4.707929 -7.730088 13.431771
## [8] 1.606800 -6.578745
##
## [[10]]
## [1] -9.101480 4.404571 -16.071437 1.110689 7.168097 15.848579
## [7] 16.710863 1.998482 -17.856521 -2.021087
```
Let’s take a closer look at what `a`, `b` and `n` look like when they are placed next to each other:
```
cbind(a, b, n)
```
```
## a b n
## [1,] 1 1.000000 1
## [2,] 2 1.111111 2
## [3,] 3 1.222222 3
## [4,] 4 1.333333 4
## [5,] 5 1.444444 5
## [6,] 6 1.555556 6
## [7,] 7 1.666667 7
## [8,] 8 1.777778 8
## [9,] 9 1.888889 9
## [10,] 10 2.000000 10
```
`rnorm()` first gets called with the parameters from the first row, meaning
`rnorm(a[1], b[1], n[1])`. The second time `rnorm()` gets called, you guessed it,
it is with the parameters on the second row of the array above,
`rnorm(a[2], b[2], n[2])`, etc.
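Note that `pmap()` matched the list elements to `rnorm()`’s arguments by position. A sketch of the
same call with named list elements, which makes the matching explicit:
```
# Names are matched to rnorm()'s arguments: a supplies the number of draws,
# b the mean, and the vector called n the standard deviation
pmap(list(n = a, mean = b, sd = n), rnorm)
```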
There are other functions in the `map()` family of functions, but we will discover them in the
exercises!
The `map()` family of functions does not have any more secrets for you. Let’s now take a look at
the `reduce()` family of functions.
### 8\.3\.2 Reducing with `purrr`
Reducing is another important concept in functional programming. It allows going from a list of
elements, to a single element, by somehow *combining* the elements into one. For instance, using
the base R `Reduce()` function, you can sum the elements of a list like so:
```
Reduce(`+`, seq(1:100))
```
```
## [1] 5050
```
using `purrr::reduce()`, this becomes:
```
reduce(seq(1:100), `+`)
```
```
## [1] 5050
```
If you don’t really get what is happening, don’t worry. Things should get clearer once I introduce
another version of `reduce()`, called `accumulate()`, which we will see below.
Sometimes, the direction from which we start to reduce is quite important. You can “start from the
end” of the list by using the `.dir` argument:
```
reduce(seq(1:100), `+`, .dir = "backward")
```
```
## [1] 5050
```
Of course, for commutative operations, direction does not matter. But it does matter for non\-commutative
operations:
```
reduce(seq(1:100), `-`)
```
```
## [1] -5048
```
```
reduce(seq(1:100), `-`, .dir = "backward")
```
```
## [1] -50
```
Let’s now take a look at `accumulate()`. `accumulate()` is very similar to `reduce()`, but keeps the
intermediary results. Which intermediary results? Let’s try and see what happens:
```
a <- seq(1, 10)
accumulate(a, `-`)
```
```
## [1] 1 -1 -4 -8 -13 -19 -26 -34 -43 -53
```
`accumulate()` illustrates pretty well what is happening; the first element, `1`, is simply the
first element of `seq(1, 10)`. The second element of the result however, is the difference between
`1` and `2`, `-1`. The next element in `a` is `3`. Thus the next result is `-1-3`, `-4`, and so
on until we run out of elements in `a`.
The below illustration shows the algorithm step\-by\-step:
```
(1-2-3-4-5-6-7-8-9-10)
((1)-2-3-4-5-6-7-8-9-10)
((1-2)-3-4-5-6-7-8-9-10)
((-1-3)-4-5-6-7-8-9-10)
((-4-4)-5-6-7-8-9-10)
((-8-5)-6-7-8-9-10)
((-13-6)-7-8-9-10)
((-19-7)-8-9-10)
((-26-8)-9-10)
((-34-9)-10)
(-43-10)
-53
```
`reduce()` only shows the final result of all these operations. `accumulate()` and `reduce()` also
have an `.init` argument, that makes it possible to start the reducing procedure from an initial
value that is different from the first element of the vector:
```
reduce(a, `+`, .init = 1000)
accumulate(a, `-`, .init = 1000, .dir = "backward")
```
```
## [1] 1055
```
```
## [1] 995 -994 996 -993 997 -992 998 -991 999 -990 1000
```
`reduce()` generalizes functions that only take two arguments. If you were to write a function that returns
the minimum between two numbers:
```
my_min <- function(a, b){
if(a < b){
return(a)
} else {
return(b)
}
}
```
You could use `reduce()` to get the minimum of a list of numbers:
```
numbers2 <- c(3, 1, -8, 9)
reduce(numbers2, my_min)
```
```
## [1] -8
```
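Since `reduce()` only needs a function of two arguments that can operate on the elements, the same
mechanics work on non\-numeric inputs too; a small sketch with strings:
```
# paste() combines two strings, so reduce() can collapse a whole vector
reduce(c("functional", "programming", "in", "R"), paste) # "functional programming in R"
```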
`map()` and `reduce()` are arguably the most useful higher\-order functions, and perhaps also the
most famous ones, true ambassadors of functional programming. You might have read about
[MapReduce](https://en.wikipedia.org/wiki/MapReduce), a programming model for processing big
data in parallel. The way MapReduce works is inspired by both these `map()` and `reduce()` functions,
which are always included in functional programming languages. This illustrates that the functional
programming paradigm is very well suited to parallel computing.
Something else that is very important to understand at this point: up until now, we only used these
functions on lists, or atomic vectors, of numbers. However, `map()` and `reduce()`, and other
higher\-order functions for that matter, do not care about the contents of the list. What these
functions do is take another function and make it do something to the elements of the list.
It does not matter if it’s a list of numbers, of characters, of data frames, even of models. All that
matters is that the function that will be applied to these elements can operate on them.
So if you have a list of fitted models, you can map `summary()` on this list to get summaries of
each model. Or if you have a list of data frames, you can map a function that performs several
cleaning steps. This will be explored in a future section, but it is important to keep this in mind.
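As a quick illustration of that idea, here is a sketch (the grouping variable is arbitrary) that
fits one linear model per group of `mtcars` and then maps `summary()` over the list of fits:
```
# One model per number of cylinders, then a summary of each fit
models <- mtcars %>%
  split(.$cyl) %>%
  map(~lm(mpg ~ wt, data = .))
map(models, summary)
```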
### 8\.3\.3 Error handling with `safely()` and `possibly()`
`safely()` and `possibly()` are very useful functions. Consider the following situation:
```
a <- list("a", 4, 5)
sqrt(a)
```
```
Error in sqrt(a) : non-numeric argument to mathematical function
```
Using `map()` or `Map()` will result in a similar error. `safely()` is a higher\-order function that
takes one function as an argument and executes it… *safely*, meaning the execution of the function
will not stop if there is an error. The error message gets captured alongside valid results.
```
a <- list("a", 4, 5)
safe_sqrt <- safely(sqrt)
map(a, safe_sqrt)
```
```
## [[1]]
## [[1]]$result
## NULL
##
## [[1]]$error
## <simpleError in .Primitive("sqrt")(x): non-numeric argument to mathematical function>
##
##
## [[2]]
## [[2]]$result
## [1] 2
##
## [[2]]$error
## NULL
##
##
## [[3]]
## [[3]]$result
## [1] 2.236068
##
## [[3]]$error
## NULL
```
`possibly()` works similarly, but also allows you to specify a return value in case of an error:
```
possible_sqrt <- possibly(sqrt, otherwise = NA_real_)
map(a, possible_sqrt)
```
```
## [[1]]
## [1] NA
##
## [[2]]
## [1] 2
##
## [[3]]
## [1] 2.236068
```
Of course, in this particular example, the same effect could be obtained way more easily:
```
sqrt(as.numeric(a))
```
```
## Warning: NAs introduced by coercion
```
```
## [1] NA 2.000000 2.236068
```
However, in some situations, this trick does not work as intended (or at all). `possibly()` and
`safely()` allow the programmer to model errors explicitly, and to then provide a consistent way
of dealing with them. For instance, consider the following example:
```
data(mtcars)
write.csv(mtcars, "my_data/mtcars.csv")
```
```
Error in file(file, ifelse(append, "a", "w")) :
cannot open the connection
In addition: Warning message:
In file(file, ifelse(append, "a", "w")) :
cannot open file 'my_data/mtcars.csv': No such file or directory
```
The folder `my_data/` does not exist, and as such this code produces an error. You might
want to catch this error, and create the directory for instance:
```
possibly_write.csv <- possibly(write.csv, otherwise = NULL)
if(is.null(possibly_write.csv(mtcars, "my_data/mtcars.csv"))) {
print("Creating folder...")
dir.create("my_data/")
print("Saving file...")
write.csv(mtcars, "my_data/mtcars.csv")
}
```
```
[1] "Creating folder..."
[1] "Saving file..."
Warning message:
In file(file, ifelse(append, "a", "w")) :
cannot open file 'my_data/mtcars.csv': No such file or directory
```
The warning message comes from the first time we try to write the `.csv`, inside the `if`
statement. Because this fails, we create the directory and then actually save the file.
In the exercises, you’ll discover `quietly()`, which also captures warnings and messages.
To conclude this section: remember function factories? Turns out that `safely()`, `possibly()` and `quietly()` are
function factories.
### 8\.3\.4 Partial applications with `partial()`
Consider the following simple function:
```
add <- function(a, b) a+b
```
It is possible to create a new function, where one of the parameters is fixed, for instance, where
`a = 10`:
```
add_to_10 <- partial(add, a = 10)
```
```
add_to_10(12)
```
```
## [1] 22
```
This is equivalent to the following:
```
add_to_10_2 <- function(b){
add(a = 10, b)
}
```
Using `partial()` is much less verbose, however, and allows you to define new functions very quickly:
```
head10 <- partial(head, n = 10)
head10(mtcars)
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
## Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
## Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
## Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
## Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
## Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
```
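Another common use (my example, not from the text above) is fixing an option you always set, such
as `na.rm = TRUE`:
```
# A mean that ignores missing values by default
mean_narm <- partial(mean, na.rm = TRUE)
mean_narm(c(1, 2, NA, 4)) # returns 2.333333
```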
### 8\.3\.5 Function composition using `compose`
Function composition is another handy tool, which makes chaining functions much more elegant:
```
compose(sqrt, log10, exp)(10)
```
```
## [1] 2.083973
```
You can read this expression as *`sqrt()` after `log10()` after `exp()`*, and it is equivalent to:
```
sqrt(log10(exp(10)))
```
```
## [1] 2.083973
```
It is also possible to reverse the order in which the functions get called using the `.dir =` option:
```
compose(sqrt, log10, exp, .dir = "forward")(10)
```
```
## [1] 1.648721
```
One could also use the `%>%` operator to achieve the same result:
```
10 %>%
sqrt %>%
log10 %>%
exp
```
```
## [1] 1.648721
```
but strictly speaking, this is not function composition.
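Since composed functions are just regular functions, they combine naturally with `partial()`; a
small sketch:
```
# Take the square root, then round the result to 2 digits
round2 <- partial(round, digits = 2)
sqrt_then_round <- compose(round2, sqrt)
sqrt_then_round(10) # returns 3.16
```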
### 8\.3\.6 «Transposing lists»
Another interesting function is `transpose()`. It is not an alternative to the function `t()` from
`base`, but it has a similar effect. `transpose()` works on lists. Let’s take a look at the example
from before:
```
safe_sqrt <- safely(sqrt, otherwise = NA_real_)
map(a, safe_sqrt)
```
```
## [[1]]
## [[1]]$result
## [1] NA
##
## [[1]]$error
## <simpleError in .Primitive("sqrt")(x): non-numeric argument to mathematical function>
##
##
## [[2]]
## [[2]]$result
## [1] 2
##
## [[2]]$error
## NULL
##
##
## [[3]]
## [[3]]$result
## [1] 2.236068
##
## [[3]]$error
## NULL
```
The output is a list in which each element is itself a list holding a result and an error message. One
might want to have all the results in a single list, and all the error messages in another list.
This is possible with `transpose()`:
```
purrr::transpose(map(a, safe_sqrt))
```
```
## $result
## $result[[1]]
## [1] NA
##
## $result[[2]]
## [1] 2
##
## $result[[3]]
## [1] 2.236068
##
##
## $error
## $error[[1]]
## <simpleError in .Primitive("sqrt")(x): non-numeric argument to mathematical function>
##
## $error[[2]]
## NULL
##
## $error[[3]]
## NULL
```
I explicitly call `purrr::transpose()` because there is also a `data.table::transpose()`, which
is not the same function. You have to be careful about that sort of thing, because it can cause
errors in your programs and debugging this type of error is a nightmare.
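A related trick worth knowing: when you pass a string to `map()` instead of a function, it acts as
an extractor for the element of that name, which is a lightweight alternative to `transpose()` when
you only need one of the two lists (a sketch):
```
safe_results <- map(a, safe_sqrt)
# A string acts as an extractor for the element of that name
map(safe_results, "result")
map(safe_results, "error")
```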
Now that we are familiar with functional programming, let’s try to apply some of its principles
to data manipulation.
8\.4 List\-based workflows for efficiency
-----------------------------------------
You can use your own functions in pipe workflows:
```
double_number <- function(x){
x+x
}
```
```
mtcars %>%
head() %>%
mutate(double_mpg = double_number(mpg))
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb double_mpg
## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 42.0
## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4 42.0
## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 45.6
## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 42.8
## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 37.4
## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 36.2
```
It is important to understand that your own functions, functions built into R, and functions that
come from packages are exactly the same thing. Every function is a first\-class object in R, no
matter where it comes from. The consequence of functions being first\-class objects is that
functions can take functions as arguments, functions can return functions (the function factories
from the previous chapter) and can be assigned to any variable:
```
plop <- sqrt
plop(4)
```
```
## [1] 2
```
```
bacon <- function(.f){
message("Bacon is tasty")
.f
}
bacon(sqrt) # `bacon` is a higher-order function: it takes a function and returns it (alongside an informative message)
```
```
## Bacon is tasty
```
```
## function (x) .Primitive("sqrt")
```
```
# To actually call it:
bacon(sqrt)(4)
```
```
## Bacon is tasty
```
```
## [1] 2
```
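A small sketch to drive the point home: since functions are first\-class objects, you can also store
them in a list and map over that list:
```
# A list of functions, each applied to the same input
funs <- list(sqrt = sqrt, log = log, double = double_number)
map(funs, ~.x(4))
```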
Now, let’s step back for a bit and think about what we learned up until now, and especially
the `map()` family of functions.
Let’s read the list of datasets from the previous chapter:
```
library(rio) # import_list() comes from the {rio} package
paths <- Sys.glob("datasets/unemployment/*.csv")
all_datasets <- import_list(paths)
str(all_datasets)
```
```
## List of 4
## $ unemp_2013:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 223407 17802 1703 844 1431 4094 2146 971 1218 3002 ...
## ..$ of which: Wage-earners : int [1:118] 203535 15993 1535 750 1315 3800 1874 858 1029 2664 ...
## ..$ of which: Non-wage-earners: int [1:118] 19872 1809 168 94 116 294 272 113 189 338 ...
## ..$ Unemployed : int [1:118] 19287 1071 114 25 74 261 98 45 66 207 ...
## ..$ Active population : int [1:118] 242694 18873 1817 869 1505 4355 2244 1016 1284 3209 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.95 5.67 6.27 2.88 4.92 ...
## ..$ Year : int [1:118] 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 228423 18166 1767 845 1505 4129 2172 1007 1268 3124 ...
## ..$ of which: Wage-earners : int [1:118] 208238 16366 1606 757 1390 3840 1897 887 1082 2782 ...
## ..$ of which: Non-wage-earners: int [1:118] 20185 1800 161 88 115 289 275 120 186 342 ...
## ..$ Unemployed : int [1:118] 19362 1066 122 19 66 287 91 38 61 202 ...
## ..$ Active population : int [1:118] 247785 19232 1889 864 1571 4416 2263 1045 1329 3326 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.81 5.54 6.46 2.2 4.2 ...
## ..$ Year : int [1:118] 2014 2014 2014 2014 2014 2014 2014 2014 2014 2014 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 233130 18310 1780 870 1470 4130 2170 1050 1300 3140 ...
## ..$ of which: Wage-earners : int [1:118] 212530 16430 1620 780 1350 3820 1910 920 1100 2770 ...
## ..$ of which: Non-wage-earners: int [1:118] 20600 1880 160 90 120 310 260 130 200 370 ...
## ..$ Unemployed : int [1:118] 18806 988 106 29 73 260 80 41 72 169 ...
## ..$ Active population : int [1:118] 251936 19298 1886 899 1543 4390 2250 1091 1372 3309 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.46 5.12 5.62 3.23 4.73 ...
## ..$ Year : int [1:118] 2015 2015 2015 2015 2015 2015 2015 2015 2015 2015 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 118 obs. of 8 variables:
## ..$ Commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ Total employed population : int [1:118] 236100 18380 1790 870 1470 4160 2160 1030 1330 3150 ...
## ..$ of which: Wage-earners : int [1:118] 215430 16500 1640 780 1350 3840 1900 900 1130 2780 ...
## ..$ of which: Non-wage-earners: int [1:118] 20670 1880 150 90 120 320 260 130 200 370 ...
## ..$ Unemployed : int [1:118] 18185 975 91 27 66 246 76 35 70 206 ...
## ..$ Active population : int [1:118] 254285 19355 1881 897 1536 4406 2236 1065 1400 3356 ...
## ..$ Unemployment rate (in %) : num [1:118] 7.15 5.04 4.84 3.01 4.3 ...
## ..$ Year : int [1:118] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
`all_datasets` is a list with 4 elements, each of them is a `data.frame`.
The first thing we are going to do is use a function to clean the names of the datasets. These
names are not very easy to work with; there are spaces, and it would be better if the names of the
columns were all lowercase. For this we are going to use the function `clean_names()` from the
`janitor` package. For a single dataset, I would write this:
```
library(janitor)
one_dataset <- one_dataset %>%
clean_names()
```
and I would get a dataset with column names in lowercase and spaces replaced by `_` (and other
corrections). How can I apply, or map, this function to each dataset in the list? To do this I need
to use `purrr::map()`, which we’ve seen in the previous section:
```
library(purrr)
all_datasets <- all_datasets %>%
map(clean_names)
all_datasets %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 223407 17802 1703 844 1431 4094 2146 971 1218 3002 ...
## ..$ of_which_wage_earners : int [1:118] 203535 15993 1535 750 1315 3800 1874 858 1029 2664 ...
## ..$ of_which_non_wage_earners : int [1:118] 19872 1809 168 94 116 294 272 113 189 338 ...
## ..$ unemployed : int [1:118] 19287 1071 114 25 74 261 98 45 66 207 ...
## ..$ active_population : int [1:118] 242694 18873 1817 869 1505 4355 2244 1016 1284 3209 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.95 5.67 6.27 2.88 4.92 ...
## ..$ year : int [1:118] 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 228423 18166 1767 845 1505 4129 2172 1007 1268 3124 ...
## ..$ of_which_wage_earners : int [1:118] 208238 16366 1606 757 1390 3840 1897 887 1082 2782 ...
## ..$ of_which_non_wage_earners : int [1:118] 20185 1800 161 88 115 289 275 120 186 342 ...
## ..$ unemployed : int [1:118] 19362 1066 122 19 66 287 91 38 61 202 ...
## ..$ active_population : int [1:118] 247785 19232 1889 864 1571 4416 2263 1045 1329 3326 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.81 5.54 6.46 2.2 4.2 ...
## ..$ year : int [1:118] 2014 2014 2014 2014 2014 2014 2014 2014 2014 2014 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 233130 18310 1780 870 1470 4130 2170 1050 1300 3140 ...
## ..$ of_which_wage_earners : int [1:118] 212530 16430 1620 780 1350 3820 1910 920 1100 2770 ...
## ..$ of_which_non_wage_earners : int [1:118] 20600 1880 160 90 120 310 260 130 200 370 ...
## ..$ unemployed : int [1:118] 18806 988 106 29 73 260 80 41 72 169 ...
## ..$ active_population : int [1:118] 251936 19298 1886 899 1543 4390 2250 1091 1372 3309 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.46 5.12 5.62 3.23 4.73 ...
## ..$ year : int [1:118] 2015 2015 2015 2015 2015 2015 2015 2015 2015 2015 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 118 obs. of 8 variables:
## ..$ commune : chr [1:118] "Grand-Duche de Luxembourg" "Canton Capellen" "Dippach" "Garnich" ...
## ..$ total_employed_population : int [1:118] 236100 18380 1790 870 1470 4160 2160 1030 1330 3150 ...
## ..$ of_which_wage_earners : int [1:118] 215430 16500 1640 780 1350 3840 1900 900 1130 2780 ...
## ..$ of_which_non_wage_earners : int [1:118] 20670 1880 150 90 120 320 260 130 200 370 ...
## ..$ unemployed : int [1:118] 18185 975 91 27 66 246 76 35 70 206 ...
## ..$ active_population : int [1:118] 254285 19355 1881 897 1536 4406 2236 1065 1400 3356 ...
## ..$ unemployment_rate_in_percent: num [1:118] 7.15 5.04 4.84 3.01 4.3 ...
## ..$ year : int [1:118] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
Remember that `map(list, function)` simply applies `function` to each element of `list`.
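For instance, here is a tiny self\-contained illustration:
```
library(purrr)
# sqrt() is applied to each element; the result is again a list
map(list(4, 9, 16), sqrt)
# returns list(2, 3, 4)
```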
So now, what if I want to know, for each dataset, which *communes* have an unemployment rate that is
less than, say, 3%? For a single dataset I would do something like this:
```
one_dataset %>%
filter(unemployment_rate_in_percent < 3)
```
but since we’re dealing with a list of datasets, we cannot simply use `filter()` on it: `filter()`
expects a single data frame, not a list of data frames. The way around this is to use `map()`.
```
all_datasets %>%
map(~filter(., unemployment_rate_in_percent < 3))
```
```
## $unemp_2013
## commune total_employed_population of_which_wage_earners
## 1 Garnich 844 750
## 2 Leudelange 1064 937
## 3 Bech 526 463
## of_which_non_wage_earners unemployed active_population
## 1 94 25 869
## 2 127 32 1096
## 3 63 16 542
## unemployment_rate_in_percent year
## 1 2.876870 2013
## 2 2.919708 2013
## 3 2.952030 2013
##
## $unemp_2014
## commune total_employed_population of_which_wage_earners
## 1 Garnich 845 757
## 2 Leudelange 1102 965
## 3 Bech 543 476
## 4 Flaxweiler 879 789
## of_which_non_wage_earners unemployed active_population
## 1 88 19 864
## 2 137 34 1136
## 3 67 15 558
## 4 90 27 906
## unemployment_rate_in_percent year
## 1 2.199074 2014
## 2 2.992958 2014
## 3 2.688172 2014
## 4 2.980132 2014
##
## $unemp_2015
## commune total_employed_population of_which_wage_earners
## 1 Bech 520 450
## 2 Bous 750 680
## of_which_non_wage_earners unemployed active_population
## 1 70 14 534
## 2 70 22 772
## unemployment_rate_in_percent year
## 1 2.621723 2015
## 2 2.849741 2015
##
## $unemp_2016
## commune total_employed_population of_which_wage_earners
## 1 Reckange-sur-Mess 980 850
## 2 Bech 520 450
## 3 Betzdorf 1500 1350
## 4 Flaxweiler 910 820
## of_which_non_wage_earners unemployed active_population
## 1 130 30 1010
## 2 70 11 531
## 3 150 45 1545
## 4 90 24 934
## unemployment_rate_in_percent year
## 1 2.970297 2016
## 2 2.071563 2016
## 3 2.912621 2016
## 4 2.569593 2016
```
`map()` needs a function to map to each element of the list. `all_datasets` is the list to which I
want to map the function. But what function? `filter()` is the function I need, so why doesn’t:
```
all_datasets %>%
map(filter(unemployment_rate_in_percent < 3))
```
work? This is what happens if we try it:
```
Error in filter(unemployment_rate_in_percent < 3) :
object 'unemployment_rate_in_percent' not found
```
This is because `filter()` needs both a dataset and a so\-called predicate (a predicate
is an expression that evaluates to `TRUE` or `FALSE`). You need to make explicit
which is the dataset and which is the predicate, because here `filter()` thinks that the
dataset is `unemployment_rate_in_percent`. The way to do this is to use an anonymous
function (discussed in Chapter 7\), which allows you to state explicitly what the
dataset is and what the predicate is. As we’ve seen, there are three ways to define
anonymous functions:
* Using a formula (only works within `{tidyverse}` functions):
```
all_datasets %>%
map(~filter(., unemployment_rate_in_percent < 3)) %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 3 obs. of 8 variables:
## ..$ commune : chr [1:3] "Garnich" "Leudelange" "Bech"
## ..$ total_employed_population : int [1:3] 844 1064 526
## ..$ of_which_wage_earners : int [1:3] 750 937 463
## ..$ of_which_non_wage_earners : int [1:3] 94 127 63
## ..$ unemployed : int [1:3] 25 32 16
## ..$ active_population : int [1:3] 869 1096 542
## ..$ unemployment_rate_in_percent: num [1:3] 2.88 2.92 2.95
## ..$ year : int [1:3] 2013 2013 2013
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Garnich" "Leudelange" "Bech" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 845 1102 543 879
## ..$ of_which_wage_earners : int [1:4] 757 965 476 789
## ..$ of_which_non_wage_earners : int [1:4] 88 137 67 90
## ..$ unemployed : int [1:4] 19 34 15 27
## ..$ active_population : int [1:4] 864 1136 558 906
## ..$ unemployment_rate_in_percent: num [1:4] 2.2 2.99 2.69 2.98
## ..$ year : int [1:4] 2014 2014 2014 2014
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 2 obs. of 8 variables:
## ..$ commune : chr [1:2] "Bech" "Bous"
## ..$ total_employed_population : int [1:2] 520 750
## ..$ of_which_wage_earners : int [1:2] 450 680
## ..$ of_which_non_wage_earners : int [1:2] 70 70
## ..$ unemployed : int [1:2] 14 22
## ..$ active_population : int [1:2] 534 772
## ..$ unemployment_rate_in_percent: num [1:2] 2.62 2.85
## ..$ year : int [1:2] 2015 2015
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Reckange-sur-Mess" "Bech" "Betzdorf" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 980 520 1500 910
## ..$ of_which_wage_earners : int [1:4] 850 450 1350 820
## ..$ of_which_non_wage_earners : int [1:4] 130 70 150 90
## ..$ unemployed : int [1:4] 30 11 45 24
## ..$ active_population : int [1:4] 1010 531 1545 934
## ..$ unemployment_rate_in_percent: num [1:4] 2.97 2.07 2.91 2.57
## ..$ year : int [1:4] 2016 2016 2016 2016
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
(notice the `.` in the formula, which makes explicit that the dataset is passed as the first
argument to `filter()`) or
* using an anonymous function (using the `function(x)` keyword):
```
all_datasets %>%
map(function(x)filter(x, unemployment_rate_in_percent < 3)) %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 3 obs. of 8 variables:
## ..$ commune : chr [1:3] "Garnich" "Leudelange" "Bech"
## ..$ total_employed_population : int [1:3] 844 1064 526
## ..$ of_which_wage_earners : int [1:3] 750 937 463
## ..$ of_which_non_wage_earners : int [1:3] 94 127 63
## ..$ unemployed : int [1:3] 25 32 16
## ..$ active_population : int [1:3] 869 1096 542
## ..$ unemployment_rate_in_percent: num [1:3] 2.88 2.92 2.95
## ..$ year : int [1:3] 2013 2013 2013
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Garnich" "Leudelange" "Bech" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 845 1102 543 879
## ..$ of_which_wage_earners : int [1:4] 757 965 476 789
## ..$ of_which_non_wage_earners : int [1:4] 88 137 67 90
## ..$ unemployed : int [1:4] 19 34 15 27
## ..$ active_population : int [1:4] 864 1136 558 906
## ..$ unemployment_rate_in_percent: num [1:4] 2.2 2.99 2.69 2.98
## ..$ year : int [1:4] 2014 2014 2014 2014
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 2 obs. of 8 variables:
## ..$ commune : chr [1:2] "Bech" "Bous"
## ..$ total_employed_population : int [1:2] 520 750
## ..$ of_which_wage_earners : int [1:2] 450 680
## ..$ of_which_non_wage_earners : int [1:2] 70 70
## ..$ unemployed : int [1:2] 14 22
## ..$ active_population : int [1:2] 534 772
## ..$ unemployment_rate_in_percent: num [1:2] 2.62 2.85
## ..$ year : int [1:2] 2015 2015
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Reckange-sur-Mess" "Bech" "Betzdorf" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 980 520 1500 910
## ..$ of_which_wage_earners : int [1:4] 850 450 1350 820
## ..$ of_which_non_wage_earners : int [1:4] 130 70 150 90
## ..$ unemployed : int [1:4] 30 11 45 24
## ..$ active_population : int [1:4] 1010 531 1545 934
## ..$ unemployment_rate_in_percent: num [1:4] 2.97 2.07 2.91 2.57
## ..$ year : int [1:4] 2016 2016 2016 2016
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
* or, since R 4\.1, using the shorthand `\(x)`:
```
all_datasets %>%
map(\(x)filter(x, unemployment_rate_in_percent < 3)) %>%
glimpse()
```
```
## List of 4
## $ unemp_2013:'data.frame': 3 obs. of 8 variables:
## ..$ commune : chr [1:3] "Garnich" "Leudelange" "Bech"
## ..$ total_employed_population : int [1:3] 844 1064 526
## ..$ of_which_wage_earners : int [1:3] 750 937 463
## ..$ of_which_non_wage_earners : int [1:3] 94 127 63
## ..$ unemployed : int [1:3] 25 32 16
## ..$ active_population : int [1:3] 869 1096 542
## ..$ unemployment_rate_in_percent: num [1:3] 2.88 2.92 2.95
## ..$ year : int [1:3] 2013 2013 2013
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2013.csv"
## $ unemp_2014:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Garnich" "Leudelange" "Bech" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 845 1102 543 879
## ..$ of_which_wage_earners : int [1:4] 757 965 476 789
## ..$ of_which_non_wage_earners : int [1:4] 88 137 67 90
## ..$ unemployed : int [1:4] 19 34 15 27
## ..$ active_population : int [1:4] 864 1136 558 906
## ..$ unemployment_rate_in_percent: num [1:4] 2.2 2.99 2.69 2.98
## ..$ year : int [1:4] 2014 2014 2014 2014
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2014.csv"
## $ unemp_2015:'data.frame': 2 obs. of 8 variables:
## ..$ commune : chr [1:2] "Bech" "Bous"
## ..$ total_employed_population : int [1:2] 520 750
## ..$ of_which_wage_earners : int [1:2] 450 680
## ..$ of_which_non_wage_earners : int [1:2] 70 70
## ..$ unemployed : int [1:2] 14 22
## ..$ active_population : int [1:2] 534 772
## ..$ unemployment_rate_in_percent: num [1:2] 2.62 2.85
## ..$ year : int [1:2] 2015 2015
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2015.csv"
## $ unemp_2016:'data.frame': 4 obs. of 8 variables:
## ..$ commune : chr [1:4] "Reckange-sur-Mess" "Bech" "Betzdorf" "Flaxweiler"
## ..$ total_employed_population : int [1:4] 980 520 1500 910
## ..$ of_which_wage_earners : int [1:4] 850 450 1350 820
## ..$ of_which_non_wage_earners : int [1:4] 130 70 150 90
## ..$ unemployed : int [1:4] 30 11 45 24
## ..$ active_population : int [1:4] 1010 531 1545 934
## ..$ unemployment_rate_in_percent: num [1:4] 2.97 2.07 2.91 2.57
## ..$ year : int [1:4] 2016 2016 2016 2016
## ..- attr(*, "filename")= chr "datasets/unemployment/unemp_2016.csv"
```
As you see, everything is starting to come together: lists, to hold complex objects, over which anonymous
functions are mapped using higher\-order functions. Let’s continue cleaning this dataset.
Before merging these datasets together, we need them to have a `year` column indicating the
year the data was measured in each data frame. It would also be helpful if we gave names to these
datasets, meaning converting the list to a named list. For this task, we can use `purrr::set_names()`:
```
all_datasets <- set_names(all_datasets, as.character(seq(2013, 2016)))
```
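As an aside, `set_names()` simply attaches names to the elements of a vector or list; a minimal
sketch, unrelated to our data:
```
# returns the named list list(a = 1, b = 2)
set_names(list(1, 2), c("a", "b"))
```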
Let’s take a look at the list now:
```
str(all_datasets)
```
As you can see, each `data.frame` object contained in the list has been renamed. You can thus
access them with the `$` operator, for example `` all_datasets$`2013` ``.
With `map()`, we now know how to apply a function to each dataset of a list. But maybe it would be
easier to merge all the datasets first, and then manipulate them? This can be the case sometimes,
but not always.
As long as you provide a function and a list of elements to `reduce()`, you will get a single
output. So how could `reduce()` help us with merging all the datasets in the list? `dplyr`
comes with a lot of functions to merge *two* datasets. Remember that I said before that `reduce()`
allows you to generalize a function of two arguments?
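As a quick illustration, here is `reduce()` collapsing a simple list with `+` (a minimal sketch,
nothing specific to our data):
```
library(purrr)
# reduce(list(a, b, c, d), f) computes f(f(f(a, b), c), d)
reduce(list(1, 2, 3, 4), `+`)
# returns 10
```
With that in mind, let’s try it with our list of datasets: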
```
unemp_lux <- reduce(all_datasets, full_join)
```
```
## Joining, by = c("commune", "total_employed_population", "of_which_wage_earners", "of_which_non_wage_earners",
## "unemployed", "active_population", "unemployment_rate_in_percent", "year")
## Joining, by = c("commune", "total_employed_population", "of_which_wage_earners", "of_which_non_wage_earners",
## "unemployed", "active_population", "unemployment_rate_in_percent", "year")
## Joining, by = c("commune", "total_employed_population", "of_which_wage_earners", "of_which_non_wage_earners",
## "unemployed", "active_population", "unemployment_rate_in_percent", "year")
```
```
glimpse(unemp_lux)
```
```
## Rows: 472
## Columns: 8
## $ commune <chr> "Grand-Duche de Luxembourg", "Canton Cape…
## $ total_employed_population <int> 223407, 17802, 1703, 844, 1431, 4094, 214…
## $ of_which_wage_earners <int> 203535, 15993, 1535, 750, 1315, 3800, 187…
## $ of_which_non_wage_earners <int> 19872, 1809, 168, 94, 116, 294, 272, 113,…
## $ unemployed <int> 19287, 1071, 114, 25, 74, 261, 98, 45, 66…
## $ active_population <int> 242694, 18873, 1817, 869, 1505, 4355, 224…
## $ unemployment_rate_in_percent <dbl> 7.947044, 5.674773, 6.274078, 2.876870, 4…
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013,…
```
`full_join()` is one of the `dplyr` functions that merge data. There are others that might be
useful depending on the kind of join operation you need; here, since all four data frames share
exactly the same columns, the successive full joins simply stack the rows. Let’s write this data
to disk as we’re going to keep using it for the next chapters:
```
export(unemp_lux, "datasets/unemp_lux.csv")
```
### 8\.4\.1 Functional programming and plotting
In this section, we are going to learn how to use the possibilities offered by the `purrr` package
and how it can work together with `ggplot2` to generate many plots. This is a more advanced topic,
but what comes next is also what makes R, and the functional programming paradigm, so powerful.
For example, suppose that instead of wanting a single plot with the unemployment rate of each
commune, you need one unemployment plot per commune:
```
unemp_lux_data %>%
filter(division == "Luxembourg") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Luxembourg", x = "Year", y = "Rate") +
geom_line()
```
and then you would write the same for “Esch\-sur\-Alzette” and also for “Wiltz”. If you only have to
make these 3 plots, copying and pasting the above lines is no big deal:
```
unemp_lux_data %>%
filter(division == "Esch-sur-Alzette") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Esch-sur-Alzette", x = "Year", y = "Rate") +
geom_line()
```
```
unemp_lux_data %>%
filter(division == "Wiltz") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Esch-sur-Alzette", x = "Year", y = "Rate") +
geom_line()
```
But copying and pasting is error\-prone. Can you spot the copy\-paste mistake I made? And what if you
have to create the above plots for all 108 Luxembourgish communes? That’s a lot of copying and pasting.
What if, once you are done, you have to change something, for example the theme? You
could use the search\-and\-replace function of RStudio, true, but search and replace can
also introduce bugs and typos. You can avoid all these issues by using `purrr::map()`. What do you
need to map over? The commune names. So let’s create a list of commune names:
```
communes <- list("Luxembourg", "Esch-sur-Alzette", "Wiltz")
```
Now we can create the graphs using `map()`, or `map2()` to be exact:
```
plots_tibble <- unemp_lux_data %>%
filter(division %in% communes) %>%
group_by(division) %>%
nest() %>%
mutate(plot = map2(.x = data, .y = division, ~ggplot(data = .x) +
theme_minimal() +
geom_line(aes(year, unemployment_rate_in_percent, group = 1)) +
labs(title = paste("Unemployment in", .y))))
```
Let’s study this line by line: the first line is easy, we simply use `filter()` to keep only the
communes we are interested in. Then we group by `division` and use `tidyr::nest()`. As a refresher,
let’s take a look at what this does:
```
unemp_lux_data %>%
filter(division %in% communes) %>%
group_by(division) %>%
nest()
```
```
## # A tibble: 3 × 2
## # Groups: division [3]
## division data
## <chr> <list>
## 1 Esch-sur-Alzette <tibble [15 × 7]>
## 2 Luxembourg <tibble [15 × 7]>
## 3 Wiltz <tibble [15 × 7]>
```
This creates a tibble with two columns, `division` and `data`, where for each individual (or
commune in this case) the `data` column holds another tibble with all the original variables. This
is very useful, because now we can pass these tibbles to `map2()` to generate the plots. But why
`map2()`, and what’s the difference with `map()`? `map2()` works the same way as `map()`, but maps
over two inputs:
```
numbers1 <- list(1, 2, 3, 4, 5)
numbers2 <- list(9, 8, 7, 6, 5)
map2(numbers1, numbers2, `*`)
```
```
## [[1]]
## [1] 9
##
## [[2]]
## [1] 16
##
## [[3]]
## [1] 21
##
## [[4]]
## [1] 24
##
## [[5]]
## [1] 25
```
In our example with the graphs, the two inputs are the data and the names of the communes. This is
used to create the title with `labs(title = paste("Unemployment in", .y))`, where `.y` is the
second input of `map2()`, the commune names contained in the variable `division`.
So what happened? We now have a tibble called `plots_tibble` that looks like this:
```
print(plots_tibble)
```
```
## # A tibble: 3 × 3
## # Groups: division [3]
## division data plot
## <chr> <list> <list>
## 1 Esch-sur-Alzette <tibble [15 × 7]> <gg>
## 2 Luxembourg <tibble [15 × 7]> <gg>
## 3 Wiltz <tibble [15 × 7]> <gg>
```
This tibble contains three columns: `division`, `data` and now a new one called `plot`, which we
created with the last line, `mutate(plot = ...)` (remember that `mutate()` adds columns to
tibbles). `plot` is a list\-column, with elements… being plots! Yes, you read that right: the
elements of the column `plot` are literally plots. This is what I meant by list\-columns.
Let’s see what is inside the `data` and the `plot` columns exactly:
```
plots_tibble %>%
pull(data)
```
```
## [[1]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 11.3 665 10.1 10.8 561 4.95
## 2 2002 11.7 677 10.3 11.0 696 5.96
## 3 2003 11.7 674 10.2 10.9 813 6.94
## 4 2004 12.2 659 10.6 11.3 899 7.38
## 5 2005 11.9 654 10.3 11.0 952 7.97
## 6 2006 12.2 657 10.5 11.2 1.07 8.71
## 7 2007 12.6 634 10.9 11.5 1.03 8.21
## 8 2008 12.9 638 11.0 11.6 1.28 9.92
## 9 2009 13.2 652 11.0 11.7 1.58 11.9
## 10 2010 13.6 638 11.2 11.8 1.73 12.8
## 11 2011 13.9 630 11.5 12.1 1.77 12.8
## 12 2012 14.3 684 11.8 12.5 1.83 12.8
## 13 2013 14.8 694 12.0 12.7 2.05 13.9
## 14 2014 15.2 703 12.5 13.2 2.00 13.2
## 15 2015 15.3 710 12.6 13.3 2.03 13.2
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
##
## [[2]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 34.4 2.89 30.4 33.2 1.14 3.32
## 2 2002 34.8 2.94 30.3 33.2 1.56 4.5
## 3 2003 35.2 3.03 30.1 33.2 2.04 5.78
## 4 2004 35.6 3.06 30.1 33.2 2.39 6.73
## 5 2005 35.6 3.13 29.8 33.0 2.64 7.42
## 6 2006 35.5 3.12 30.3 33.4 2.03 5.72
## 7 2007 36.1 3.25 31.1 34.4 1.76 4.87
## 8 2008 37.5 3.39 31.9 35.3 2.23 5.95
## 9 2009 37.9 3.49 31.6 35.1 2.85 7.51
## 10 2010 38.6 3.54 32.1 35.7 2.96 7.66
## 11 2011 40.3 3.66 33.6 37.2 3.11 7.72
## 12 2012 41.8 3.81 34.6 38.4 3.37 8.07
## 13 2013 43.4 3.98 35.5 39.5 3.86 8.89
## 14 2014 44.6 4.11 36.7 40.8 3.84 8.6
## 15 2015 45.2 4.14 37.5 41.6 3.57 7.9
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
##
## [[3]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 2.13 223 1.79 2.01 122 5.73
## 2 2002 2.14 220 1.78 2.00 134 6.27
## 3 2003 2.18 223 1.79 2.02 163 7.48
## 4 2004 2.24 227 1.85 2.08 156 6.97
## 5 2005 2.26 229 1.85 2.08 187 8.26
## 6 2006 2.20 206 1.82 2.02 181 8.22
## 7 2007 2.27 198 1.88 2.08 197 8.67
## 8 2008 2.30 200 1.90 2.10 201 8.75
## 9 2009 2.36 201 1.94 2.15 216 9.14
## 10 2010 2.42 195 1.97 2.17 256 10.6
## 11 2011 2.48 190 2.02 2.21 269 10.9
## 12 2012 2.59 188 2.10 2.29 301 11.6
## 13 2013 2.66 195 2.15 2.34 318 12.0
## 14 2014 2.69 185 2.19 2.38 315 11.7
## 15 2015 2.77 180 2.27 2.45 321 11.6
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
```
Each element of `data` is a tibble for a specific commune with columns `year`, `active_population`,
etc., the original columns. But obviously, there is no `division` column. So to plot the data, and
join all the dots together, we need to add `group = 1` in the call to `ggplot()` (whereas if you
plot multiple lines in the same graph, you need to write `group = division`).
But more interestingly, how can you actually see the plots? If you want to simply look at them, it
is enough to use `pull()`:
```
plots_tibble %>%
pull(plot)
```
```
## [[1]]
```
```
##
## [[2]]
```
```
##
## [[3]]
```
And if we want to save these plots, we can do so using `map2()`:
```
map2(paste0(plots_tibble$division, ".pdf"), plots_tibble$plot, ggsave)
```
```
Saving 7 x 5 in image
Saving 6.01 x 3.94 in image
Saving 6.01 x 3.94 in image
```
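Since saving plots to disk is a side effect and we don’t actually need `map2()`’s return value,
purrr’s `walk2()` is arguably the more idiomatic choice here; it calls the function in exactly the
same way but returns its input invisibly:
```
# same mapping as above, but without printing a list of ggsave() return values
walk2(paste0(plots_tibble$division, ".pdf"), plots_tibble$plot, ggsave)
```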
This was probably the most advanced topic we have studied yet; but you probably agree with me that
it is among the most useful ones. This section is a perfect illustration of the power of functional
programming; you can mix and match functions as long as you give them the correct arguments.
You can pass data to functions that use data, and then pass these functions to other functions that
use functions as arguments, such as `map()`.[7](#fn7) `map()` does not care whether the functions you pass to it produce tables,
graphs or even other functions. `map()` will simply map the function to a list of inputs, and as
long as these inputs are correct arguments to the function, `map()` will do its magic. If you
combine this with list\-columns, you can even use `map()` alongside `dplyr` functions and map your
function after first grouping, filtering, etc…
### 8\.4\.2 Modeling with functional programming
As written just above, `map()` simply applies a function to a list of inputs, and in the previous
section we mapped `ggplot()` to generate many plots at once. This approach can also be used to
map any modeling function, for instance `lm()`, to a list of datasets.
For instance, suppose that you wish to perform a Monte Carlo simulation. Suppose that you are
dealing with a binary choice problem; usually, you would use a logistic regression for this.
However, in certain disciplines, especially in the social sciences, the so\-called Linear Probability
Model is often used as well. The LPM is a simple linear regression, but unlike the standard setting
of a linear regression, the dependent variable, or target, is a binary variable, not a continuous
one. Before you yell “Wait, that’s illegal”, you should know that in practice LPMs do a good
job of estimating marginal effects, which is what social scientists and econometricians are often
interested in. Marginal effects are another way of interpreting models, giving how the outcome
(or the target) changes given a change in an independent variable (or a feature). For instance,
a marginal effect of 0\.10 for age would mean that the probability of success increases by 10
percentage points for each added year of age. We already discussed marginal effects in Chapter 6\.
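A short aside on the algebra involved (a sketch, assuming the logit link we use below): writing
\\(\\Lambda\\) for the logistic cdf and \\(\\lambda\\) for its density, the logit model assumes the
success probability is \\(\\Lambda(x'\\beta)\\), so the marginal effect of the \\(k\\)\-th feature is
\\(\\lambda(x'\\beta)\\beta\_k\\); this is the quantity that `dlogis()` will help us evaluate further
below.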
There has been a lot of discussion on logistic regression vs LPMs, and there are pros and cons
of using LPMs. Micro\-econometricians are still fond of LPMs, even though the pros of LPMs are
not really convincing. However, quoting Angrist and Pischke:
“While a nonlinear model may fit the CEF (population conditional expectation function) for LDVs
(limited dependent variables) more closely than a linear model, when it comes to marginal effects,
this probably matters little” (source: *Mostly Harmless Econometrics*)
so LPMs are still used for estimating marginal effects.
Let us check this assessment with one example. First, we simulate some data, then
run a logistic regression and compute the marginal effects, and then compare with an LPM:
```
set.seed(1234)
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- .5 + 2*x1 + 4*x2
p <- 1/(1 + exp(-z))
y <- rbinom(100, 1, p)
df <- tibble(y = y, x1 = x1, x2 = x2)
```
This data\-generating process draws observations from a binary choice model. Fitting the model using a
logistic regression allows us to recover the structural parameters:
```
logistic_regression <- glm(y ~ ., data = df, family = binomial(link = "logit"))
```
Let’s see a summary of the model fit:
```
summary(logistic_regression)
```
```
##
## Call:
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.91941 -0.44872 0.00038 0.42843 2.55426
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.0960 0.3293 0.292 0.770630
## x1 1.6625 0.4628 3.592 0.000328 ***
## x2 3.6582 0.8059 4.539 5.64e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 138.629 on 99 degrees of freedom
## Residual deviance: 60.576 on 97 degrees of freedom
## AIC: 66.576
##
## Number of Fisher Scoring iterations: 7
```
We do recover the parameters that generated the data, but what about the marginal effects? We can
get the marginal effects easily using the `{margins}` package:
```
library(margins)
margins(logistic_regression)
```
```
## Average marginal effects
```
```
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
```
```
## x1 x2
## 0.1598 0.3516
```
Or, even better, we can compute the *true* marginal effects, since we know the data
generating process:
```
meffects <- function(dataset, coefs){
  X <- dataset %>%
    select(-y) %>%
    as.matrix()
  # average, over the sample, of the logistic density at the linear index
  # times the coefficient; note the index below uses only the slope coefficients
  dydx_x1 <- mean(dlogis(X%*%c(coefs[2], coefs[3]))*coefs[2])
  dydx_x2 <- mean(dlogis(X%*%c(coefs[2], coefs[3]))*coefs[3])
  tribble(~term, ~true_effect,
          "x1", dydx_x1,
          "x2", dydx_x2)
}
(true_meffects <- meffects(df, c(0.5, 2, 4)))
```
```
## # A tibble: 2 × 2
## term true_effect
## <chr> <dbl>
## 1 x1 0.175
## 2 x2 0.350
```
Ok, so now what about using this infamous Linear Probability Model to estimate the marginal effects?
```
lpm <- lm(y ~ ., data = df)
summary(lpm)
```
```
##
## Call:
## lm(formula = y ~ ., data = df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.83953 -0.31588 -0.02885 0.28774 0.77407
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.51340 0.03587 14.314 < 2e-16 ***
## x1 0.16771 0.03545 4.732 7.58e-06 ***
## x2 0.31250 0.03449 9.060 1.43e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.3541 on 97 degrees of freedom
## Multiple R-squared: 0.5135, Adjusted R-squared: 0.5034
## F-statistic: 51.18 on 2 and 97 DF, p-value: 6.693e-16
```
It’s not too bad, but maybe it could have been better in other circumstances; perhaps with more
observations, or for a different set of structural parameters, the results of the LPM
would have been closer. The LPM estimates the marginal effect of `x1` to be
0\.1677134, versus 0\.1597956
for the logistic regression; for `x2`, the LPM estimate is 0\.3124966
versus 0\.351607\. The *true* marginal effects are
0\.1750963 and 0\.3501926 for `x1` and `x2` respectively.
Just as data scientists perform cross\-validation to assess the accuracy of a model, a Monte Carlo
study can be performed to assess how close the estimates of the marginal effects from an LPM are
to the marginal effects derived from a logistic regression. It will allow us to test with datasets
of different sizes, generated using different structural parameters.
First, let’s write a function that generates data. By default, the function below generates 10 datasets of size
100 (the code is inspired by this [StackExchange answer](https://stats.stackexchange.com/a/46525)):
```
generate_datasets <- function(coefs = c(.5, 2, 4), sample_size = 100, repeats = 10){
  # simulate a single dataset from the logit data-generating process
  generate_one_dataset <- function(coefs, sample_size){
    x1 <- rnorm(sample_size)
    x2 <- rnorm(sample_size)
    z <- coefs[1] + coefs[2]*x1 + coefs[3]*x2
    p <- 1/(1 + exp(-z))
    y <- rbinom(sample_size, 1, p)
    tibble(y = y, x1 = x1, x2 = x2)
  }
  # repeat the simulation `repeats` times and store everything in a one-row tibble
  simulations <- rerun(.n = repeats, generate_one_dataset(coefs, sample_size))
  tibble("coefs" = list(coefs), "sample_size" = sample_size, "repeats" = repeats, "simulations" = list(simulations))
}
```
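A quick note: in recent versions of `{purrr}`, `rerun()` is deprecated; an equivalent formulation
(a sketch, keeping the same behaviour) maps over a dummy index instead:
```
# same result as rerun(): call generate_one_dataset() `repeats` times
simulations <- map(seq_len(repeats), ~generate_one_dataset(coefs, sample_size))
```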
Let’s first generate one dataset:
```
one_dataset <- generate_datasets(repeats = 1)
```
Let’s take a look at `one_dataset`:
```
one_dataset
```
```
## # A tibble: 1 × 4
## coefs sample_size repeats simulations
## <list> <dbl> <dbl> <list>
## 1 <dbl [3]> 100 1 <list [1]>
```
As you can see, the tibble with the simulated data is inside a list\-column called `simulations`.
Let’s take a closer look:
```
str(one_dataset$simulations)
```
```
## List of 1
## $ :List of 1
## ..$ : tibble [100 × 3] (S3: tbl_df/tbl/data.frame)
## .. ..$ y : int [1:100] 0 1 1 1 0 1 1 0 0 1 ...
## .. ..$ x1: num [1:100] 0.437 1.06 0.452 0.663 -1.136 ...
## .. ..$ x2: num [1:100] -2.316 0.562 -0.784 -0.226 -1.587 ...
```
The structure is quite complex, and it’s important to understand this, because it will have an
impact on the next lines of code; it is a list containing a list containing a dataset! No worries
though, we can still map over the datasets directly, by using `modify_depth()` instead of `map()`.
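To see what `modify_depth()` does, here is a minimal sketch on a toy nested list (unrelated to our
simulations):
```
# a list containing a list containing vectors: the vectors sit at depth 2
nested <- list(list(1:3, 4:6))
# apply sum() to every element at depth 2, preserving the structure
modify_depth(nested, 2, sum)
# returns list(list(6, 15))
```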
Now, let’s fit an LPM and compare the estimated marginal effects with the *true* marginal
effects. In order to have some confidence in our results,
we will not simply run a linear regression on that single dataset, but will instead simulate hundreds,
then thousands, then tens of thousands of datasets, get the marginal effects and compare
them to the true ones (but here I won’t simulate more than 500 datasets).
Let’s first generate 10 datasets:
```
many_datasets <- generate_datasets()
```
Now comes the tricky part. I have this object, `many_datasets`, which looks like this:
```
many_datasets
```
```
## # A tibble: 1 × 4
## coefs sample_size repeats simulations
## <list> <dbl> <dbl> <list>
## 1 <dbl [3]> 100 10 <list [10]>
```
I would like to fit LPMs to the 10 datasets. For this, I will need to use all the power of functional
programming and the `{tidyverse}`. I will be adding columns to this data frame using `mutate()`
and mapping over the `simulations` list\-column using `modify_depth()`. The list of data frames is
at the second level (remember, it’s a list containing a list containing data frames).
I’ll start by fitting the LPMs, then using `broom::tidy()` I will get a nice data frame of the
estimated parameters. I will then only select what I need, and then bind the rows of all the
data frames. I will do the same for the *true* marginal effects.
I highly suggest that you run the following lines one after another. It is complicated to understand
what’s going on if you are not used to such workflows. However, I hope to convince you that once
it clicks, it’ll feel much more intuitive than doing all this inside a loop. Here’s the code:
```
results <- many_datasets %>%
mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>%
mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>%
mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>%
mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>%
mutate(lpm = map(lpm, bind_rows)) %>%
mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>%
mutate(true_effect = map(true_effect, bind_rows))
```
This is what `results` looks like:
```
results
```
```
## # A tibble: 1 × 6
## coefs sample_size repeats simulations lpm true_effect
## <list> <dbl> <dbl> <list> <list> <list>
## 1 <dbl [3]> 100 10 <list [10]> <tibble [20 × 2]> <tibble [20 × 2]>
```
Let’s take a closer look at the `lpm` and `true_effect` columns:
```
results$lpm
```
```
## [[1]]
## # A tibble: 20 × 2
## term estimate
## <chr> <dbl>
## 1 x1 0.228
## 2 x2 0.353
## 3 x1 0.180
## 4 x2 0.361
## 5 x1 0.165
## 6 x2 0.374
## 7 x1 0.182
## 8 x2 0.358
## 9 x1 0.125
## 10 x2 0.345
## 11 x1 0.171
## 12 x2 0.331
## 13 x1 0.122
## 14 x2 0.309
## 15 x1 0.129
## 16 x2 0.332
## 17 x1 0.102
## 18 x2 0.374
## 19 x1 0.176
## 20 x2 0.410
```
```
results$true_effect
```
```
## [[1]]
## # A tibble: 20 × 2
## term true_effect
## <chr> <dbl>
## 1 x1 0.183
## 2 x2 0.366
## 3 x1 0.166
## 4 x2 0.331
## 5 x1 0.174
## 6 x2 0.348
## 7 x1 0.169
## 8 x2 0.339
## 9 x1 0.167
## 10 x2 0.335
## 11 x1 0.173
## 12 x2 0.345
## 13 x1 0.157
## 14 x2 0.314
## 15 x1 0.170
## 16 x2 0.340
## 17 x1 0.182
## 18 x2 0.365
## 19 x1 0.161
## 20 x2 0.321
```
Let’s bind the columns, and compute the difference between the *true* and estimated marginal
effects:
```
simulation_results <- results %>%
mutate(difference = map2(.x = lpm, .y = true_effect, full_join)) %>%
mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>%
mutate(difference = map(difference, ~select(., term, difference))) %>%
pull(difference) %>%
.[[1]]
```
```
## Joining, by = "term"
```
Let’s take a look at the simulation results:
```
simulation_results %>%
group_by(term) %>%
summarise(mean = mean(difference),
sd = sd(difference))
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.0122 0.0368
## 2 x2 -0.0141 0.0311
```
Already with only 10 simulated datasets, the average difference between the estimated and the true
effects is close to zero. Let’s rerun the analysis for different numbers of repetitions and
different structural parameters. In order to make things easier, we can put all the code
into a nifty function:
```
monte_carlo <- function(coefs, sample_size, repeats){
many_datasets <- generate_datasets(coefs, sample_size, repeats)
results <- many_datasets %>%
mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>%
mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>%
mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>%
mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>%
mutate(lpm = map(lpm, bind_rows)) %>%
mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>%
mutate(true_effect = map(true_effect, bind_rows))
simulation_results <- results %>%
mutate(difference = map2(.x = lpm, .y = true_effect, full_join)) %>%
mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>%
mutate(difference = map(difference, ~select(., term, difference))) %>%
pull(difference) %>%
.[[1]]
simulation_results %>%
group_by(term) %>%
summarise(mean = mean(difference),
sd = sd(difference))
}
```
And now, let’s run the simulation for different parameters and numbers of repetitions:
```
monte_carlo(c(.5, 2, 4), 100, 10)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00826 0.0318
## 2 x2 -0.00732 0.0421
```
```
monte_carlo(c(.5, 2, 4), 100, 100)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.00360 0.0408
## 2 x2 0.00517 0.0459
```
```
monte_carlo(c(.5, 2, 4), 100, 500)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00152 0.0388
## 2 x2 -0.000701 0.0462
```
```
monte_carlo(c(pi, 6, 9), 100, 10)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00829 0.0421
## 2 x2 0.00178 0.0397
```
```
monte_carlo(c(pi, 6, 9), 100, 100)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.0107 0.0576
## 2 x2 0.00831 0.0772
```
```
monte_carlo(c(pi, 6, 9), 100, 500)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.00879 0.0518
## 2 x2 0.0113 0.0687
```
We see that, at least for this set of parameters, the LPM does a good job of estimating marginal
effects.
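As an aside, rather than calling `monte_carlo()` by hand for each combination, we could map it over
a parameter grid with `pmap()` (a sketch; `param_grid` is an invented name, and the columns must
match the function’s argument names):
```
# pmap() calls monte_carlo() once per row, matching columns to arguments by name
param_grid <- tibble(
  coefs = list(c(.5, 2, 4), c(pi, 6, 9)),
  sample_size = 100,
  repeats = 10
)
pmap(param_grid, monte_carlo)
```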
Now, this study might in itself not be very interesting to you, but I believe the general approach
is quite useful and flexible enough to be adapted to all kinds of use\-cases.
### 8\.4\.1 Functional programming and plotting
In this section, we are going to learn how to use the possibilities offered by the `purrr` package
and how it can work together with `ggplot2` to generate many plots. This is a more advanced topic,
but what comes next is also what makes R, and the functional programming paradigm so powerful.
For example, suppose that instead of wanting a single plot with the unemployment rate of each
commune, you need one unemployment plot, per commune:
```
unemp_lux_data %>%
filter(division == "Luxembourg") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Luxembourg", x = "Year", y = "Rate") +
geom_line()
```
and then you would write the same for “Esch\-sur\-Alzette” and also for “Wiltz”. If you only have to
make to make these 3 plots, copy and pasting the above lines is no big deal:
```
unemp_lux_data %>%
filter(division == "Esch-sur-Alzette") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Esch-sur-Alzette", x = "Year", y = "Rate") +
geom_line()
```
```
unemp_lux_data %>%
filter(division == "Wiltz") %>%
ggplot(aes(year, unemployment_rate_in_percent, group = division)) +
theme_minimal() +
labs(title = "Unemployment in Esch-sur-Alzette", x = "Year", y = "Rate") +
geom_line()
```
But copy and pasting is error prone. Can you spot the copy\-paste mistake I made? And what if you
have to create the above plots for all 108 Luxembourguish communes? That’s a lot of copy pasting.
What if, once you are done copy pasting, you have to change something, for example, the theme? You
could use the search and replace function of RStudio, true, but sometimes search and replace can
also introduce bugs and typos. You can avoid all these issues by using `purrr::map()`. What do you
need to map over? The commune names. So let’s create a vector of commune names:
```
communes <- list("Luxembourg", "Esch-sur-Alzette", "Wiltz")
```
Now we can create the graphs using `map()`, or `map2()` to be exact:
```
plots_tibble <- unemp_lux_data %>%
filter(division %in% communes) %>%
group_by(division) %>%
nest() %>%
mutate(plot = map2(.x = data, .y = division, ~ggplot(data = .x) +
theme_minimal() +
geom_line(aes(year, unemployment_rate_in_percent, group = 1)) +
labs(title = paste("Unemployment in", .y))))
```
Let’s study this line by line: the first line is easy, we simply use `filter()` to keep only the
communes we are interested in. Then we group by `division` and use `tidyr::nest()`. As a refresher,
let’s take a look at what this does:
```
unemp_lux_data %>%
filter(division %in% communes) %>%
group_by(division) %>%
nest()
```
```
## # A tibble: 3 × 2
## # Groups: division [3]
## division data
## <chr> <list>
## 1 Esch-sur-Alzette <tibble [15 × 7]>
## 2 Luxembourg <tibble [15 × 7]>
## 3 Wiltz <tibble [15 × 7]>
```
This creates a tibble with two columns, `division` and `data`, where each individual (or
commune in this case) is another tibble with all the original variables. This is very useful,
because now we can pass these tibbles to `map2()`, to generate the plots. But why `map2()` and
what’s the difference with `map()`? `map2()` works the same way as `map()`, but maps over two
inputs:
```
numbers1 <- list(1, 2, 3, 4, 5)
numbers2 <- list(9, 8, 7, 6, 5)
map2(numbers1, numbers2, `*`)
```
```
## [[1]]
## [1] 9
##
## [[2]]
## [1] 16
##
## [[3]]
## [1] 21
##
## [[4]]
## [1] 24
##
## [[5]]
## [1] 25
```
In our example with the graphs, the two inputs are the data, and the names of the communes. This is
useful to create the title with `labs(title = paste("Unemployment in", .y))))` where `.y` is the
second input of `map2()`, the commune names contained in variable `division`.
So what happened? We now have a tibble called `plots_tibble` that looks like this:
```
print(plots_tibble)
```
```
## # A tibble: 3 × 3
## # Groups: division [3]
## division data plot
## <chr> <list> <list>
## 1 Esch-sur-Alzette <tibble [15 × 7]> <gg>
## 2 Luxembourg <tibble [15 × 7]> <gg>
## 3 Wiltz <tibble [15 × 7]> <gg>
```
This tibble contains three columns, `division`, `data` and now a new one called `plot`, that we
created before using the last line `mutate(plot = ...)` (remember that `mutate()` adds columns to
tibbles). `plot` is a list\-column, with elements… being plots! Yes you read that right, the
elements of the column `plot` are literally plots. This is what I meant with list columns.
Let’s see what is inside the `data` and the `plot` columns exactly:
```
plots_tibble %>%
pull(data)
```
```
## [[1]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 11.3 665 10.1 10.8 561 4.95
## 2 2002 11.7 677 10.3 11.0 696 5.96
## 3 2003 11.7 674 10.2 10.9 813 6.94
## 4 2004 12.2 659 10.6 11.3 899 7.38
## 5 2005 11.9 654 10.3 11.0 952 7.97
## 6 2006 12.2 657 10.5 11.2 1.07 8.71
## 7 2007 12.6 634 10.9 11.5 1.03 8.21
## 8 2008 12.9 638 11.0 11.6 1.28 9.92
## 9 2009 13.2 652 11.0 11.7 1.58 11.9
## 10 2010 13.6 638 11.2 11.8 1.73 12.8
## 11 2011 13.9 630 11.5 12.1 1.77 12.8
## 12 2012 14.3 684 11.8 12.5 1.83 12.8
## 13 2013 14.8 694 12.0 12.7 2.05 13.9
## 14 2014 15.2 703 12.5 13.2 2.00 13.2
## 15 2015 15.3 710 12.6 13.3 2.03 13.2
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
##
## [[2]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 34.4 2.89 30.4 33.2 1.14 3.32
## 2 2002 34.8 2.94 30.3 33.2 1.56 4.5
## 3 2003 35.2 3.03 30.1 33.2 2.04 5.78
## 4 2004 35.6 3.06 30.1 33.2 2.39 6.73
## 5 2005 35.6 3.13 29.8 33.0 2.64 7.42
## 6 2006 35.5 3.12 30.3 33.4 2.03 5.72
## 7 2007 36.1 3.25 31.1 34.4 1.76 4.87
## 8 2008 37.5 3.39 31.9 35.3 2.23 5.95
## 9 2009 37.9 3.49 31.6 35.1 2.85 7.51
## 10 2010 38.6 3.54 32.1 35.7 2.96 7.66
## 11 2011 40.3 3.66 33.6 37.2 3.11 7.72
## 12 2012 41.8 3.81 34.6 38.4 3.37 8.07
## 13 2013 43.4 3.98 35.5 39.5 3.86 8.89
## 14 2014 44.6 4.11 36.7 40.8 3.84 8.6
## 15 2015 45.2 4.14 37.5 41.6 3.57 7.9
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
##
## [[3]]
## # A tibble: 15 × 7
## year active_population of_which_non_wage_e…¹ of_wh…² total…³ unemp…⁴ unemp…⁵
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2001 2.13 223 1.79 2.01 122 5.73
## 2 2002 2.14 220 1.78 2.00 134 6.27
## 3 2003 2.18 223 1.79 2.02 163 7.48
## 4 2004 2.24 227 1.85 2.08 156 6.97
## 5 2005 2.26 229 1.85 2.08 187 8.26
## 6 2006 2.20 206 1.82 2.02 181 8.22
## 7 2007 2.27 198 1.88 2.08 197 8.67
## 8 2008 2.30 200 1.90 2.10 201 8.75
## 9 2009 2.36 201 1.94 2.15 216 9.14
## 10 2010 2.42 195 1.97 2.17 256 10.6
## 11 2011 2.48 190 2.02 2.21 269 10.9
## 12 2012 2.59 188 2.10 2.29 301 11.6
## 13 2013 2.66 195 2.15 2.34 318 12.0
## 14 2014 2.69 185 2.19 2.38 315 11.7
## 15 2015 2.77 180 2.27 2.45 321 11.6
## # … with abbreviated variable names ¹of_which_non_wage_earners,
## # ²of_which_wage_earners, ³total_employed_population, ⁴unemployed,
## # ⁵unemployment_rate_in_percent
```
each element of data is a tibble for the specific country with columns `year`, `active_population`,
etc, the original columns. But obviously, there is no `division` column. So to plot the data, and
join all the dots together, we need to add `group = 1` in the call to `ggplot2()` (whereas if you
plot multiple lines in the same graph, you need to write `group = division`).
But more interestingly, how can you actually see the plots? If you want to simply look at them, it
is enough to use `pull()`:
```
plots_tibble %>%
pull(plot)
```
```
## [[1]]
```
```
##
## [[2]]
```
```
##
## [[3]]
```
And if we want to save these plots, we can do so using `map2()`:
```
map2(paste0(plots_tibble$division, ".pdf"), plots_tibble$plot, ggsave)
```
```
Saving 7 x 5 in image
Saving 6.01 x 3.94 in image
Saving 6.01 x 3.94 in image
```
This was probably the most advanced topic we have studied yet; but you probably agree with me that
it is among the most useful ones. This section is a perfect illustration of the power of functional
programming; you can mix and match functions as long as you give them the correct arguments.
You can pass data to functions that use data and then pass these functions to other functions that
use functions as arguments, such as `map()`.[7](#fn7) `map()` does not care if the functions you pass to it produces tables,
graphs or even another function. `map()` will simply map this function to a list of inputs, and as
long as these inputs are correct arguments to the function, `map()` will do its magic. If you
combine this with list\-columns, you can even use `map()` alongside `dplyr` functions and map your
function by first grouping, filtering, etc…
### 8\.4\.2 Modeling with functional programming
As written just above, `map()` simply applies a function to a list of inputs, and in the previous
section we mapped `ggplot()` to generate many plots at once. This approach can also be used to
map any modeling functions, for instance `lm()` to a list of datasets.
For instance, suppose that you wish to perform a Monte Carlo simulation. Suppose that you are
dealing with a binary choice problem; usually, you would use a logistic regression for this.
However, in certain disciplines, especially in the social sciences, the so\-called Linear Probability
Model is often used as well. The LPM is a simple linear regression, but unlike the standard setting
of a linear regression, the dependent variable, or target, is a binary variable, and not a continuous
variable. Before you yell “Wait, that’s illegal”, you should know that in practice LPMs do a good
job of estimating marginal effects, which is what social scientists and econometricians are often
interested in. Marginal effects are another way of interpreting models, giving how the outcome
(or the target) changes given a change in a independent variable (or a feature). For instance,
a marginal effect of 0\.10 for age would mean that probability of success would increase by 10% for
each added year of age. We already discussed marginal effects in Chapter 6\.
There has been a lot of discussion on logistic regression vs LPMs, and there are pros and cons
of using LPMs. Micro\-econometricians are still fond of LPMs, even though the pros of LPMs are
not really convincing. However, quoting Angrist and Pischke:
“While a nonlinear model may fit the CEF (population conditional expectation function) for LDVs
(limited dependent variables) more closely than a linear model, when it comes to marginal effects,
this probably matters little” (source: *Mostly Harmless Econometrics*)
so LPMs are still used for estimating marginal effects.
Let us check this assessment with one example. First, we simulate some data, then
run a logistic regression and compute the marginal effects, and then compare with a LPM:
```
set.seed(1234)
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- .5 + 2*x1 + 4*x2
p <- 1/(1 + exp(-z))
y <- rbinom(100, 1, p)
df <- tibble(y = y, x1 = x1, x2 = x2)
```
This data generating process generates data from a binary choice model. Fitting the model using a
logistic regression allows us to recover the structural parameters:
```
logistic_regression <- glm(y ~ ., data = df, family = binomial(link = "logit"))
```
Let’s see a summary of the model fit:
```
summary(logistic_regression)
```
```
##
## Call:
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.91941 -0.44872 0.00038 0.42843 2.55426
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.0960 0.3293 0.292 0.770630
## x1 1.6625 0.4628 3.592 0.000328 ***
## x2 3.6582 0.8059 4.539 5.64e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 138.629 on 99 degrees of freedom
## Residual deviance: 60.576 on 97 degrees of freedom
## AIC: 66.576
##
## Number of Fisher Scoring iterations: 7
```
We do recover the parameters that generated the data, but what about the marginal effects? We can
get the marginal effects easily using the `{margins}` package:
```
library(margins)
margins(logistic_regression)
```
```
## Average marginal effects
```
```
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
```
```
## x1 x2
## 0.1598 0.3516
```
Or, even better, we can compute the *true* marginal effects, since we know the data
generating process:
```
meffects <- function(dataset, coefs){
X <- dataset %>%
select(-y) %>%
as.matrix()
dydx_x1 <- mean(dlogis(X%*%c(coefs[2], coefs[3]))*coefs[2])
dydx_x2 <- mean(dlogis(X%*%c(coefs[2], coefs[3]))*coefs[3])
tribble(~term, ~true_effect,
"x1", dydx_x1,
"x2", dydx_x2)
}
(true_meffects <- meffects(df, c(0.5, 2, 4)))
```
```
## # A tibble: 2 × 2
## term true_effect
## <chr> <dbl>
## 1 x1 0.175
## 2 x2 0.350
```
Ok, so now what about using this infamous Linear Probability Model to estimate the marginal effects?
```
lpm <- lm(y ~ ., data = df)
summary(lpm)
```
```
##
## Call:
## lm(formula = y ~ ., data = df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.83953 -0.31588 -0.02885 0.28774 0.77407
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.51340 0.03587 14.314 < 2e-16 ***
## x1 0.16771 0.03545 4.732 7.58e-06 ***
## x2 0.31250 0.03449 9.060 1.43e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.3541 on 97 degrees of freedom
## Multiple R-squared: 0.5135, Adjusted R-squared: 0.5034
## F-statistic: 51.18 on 2 and 97 DF, p-value: 6.693e-16
```
It’s not too bad, but maybe it could have been better in other circumstances. Perhaps if we had more
observations, or perhaps for a different set of structural parameters the results of the LPM
would have been closer. The LPM estimates the marginal effect of `x1` to be
0\.1677134 vs 0\.1597956
for the logistic regression and for `x2`, the LPM estimation is 0\.3124966
vs 0\.351607\. The *true* marginal effects are
0\.1750963 and 0\.3501926 for `x1` and `x2` respectively.
Just as to assess the accuracy of a model data scientists perform cross\-validation, a Monte Carlo
study can be performed to asses how close the estimation of the marginal effects using a LPM is
to the marginal effects derived from a logistic regression. It will allow us to test with datasets
of different sizes, and generated using different structural parameters.
First, let’s write a function that generates data. The function below generates 10 datasets of size
100 (the code is inspired by this [StackExchange answer](https://stats.stackexchange.com/a/46525)):
```
generate_datasets <- function(coefs = c(.5, 2, 4), sample_size = 100, repeats = 10){
generate_one_dataset <- function(coefs, sample_size){
x1 <- rnorm(sample_size)
x2 <- rnorm(sample_size)
z <- coefs[1] + coefs[2]*x1 + coefs[3]*x2
p <- 1/(1 + exp(-z))
y <- rbinom(sample_size, 1, p)
df <- tibble(y = y, x1 = x1, x2 = x2)
}
simulations <- rerun(.n = repeats, generate_one_dataset(coefs, sample_size))
tibble("coefs" = list(coefs), "sample_size" = sample_size, "repeats" = repeats, "simulations" = list(simulations))
}
```
Let’s first generate one dataset:
```
one_dataset <- generate_datasets(repeats = 1)
```
Let’s take a look at `one_dataset`:
```
one_dataset
```
```
## # A tibble: 1 × 4
## coefs sample_size repeats simulations
## <list> <dbl> <dbl> <list>
## 1 <dbl [3]> 100 1 <list [1]>
```
As you can see, the tibble with the simulated data is inside a list\-column called `simulations`.
Let’s take a closer look:
```
str(one_dataset$simulations)
```
```
## List of 1
## $ :List of 1
## ..$ : tibble [100 × 3] (S3: tbl_df/tbl/data.frame)
## .. ..$ y : int [1:100] 0 1 1 1 0 1 1 0 0 1 ...
## .. ..$ x1: num [1:100] 0.437 1.06 0.452 0.663 -1.136 ...
## .. ..$ x2: num [1:100] -2.316 0.562 -0.784 -0.226 -1.587 ...
```
The structure is quite complex, and it’s important to understand this, because it will have an
impact on the next lines of code; it is a list, containing a list, containing a dataset! No worries
though, we can still map over the datasets directly, by using `modify_depth()` instead of `map()`.
Now, let’s fit a LPM and compare the estimation of the marginal effects with the *true* marginal
effects. In order to have some confidence in our results,
we will not simply run a linear regression on that single dataset, but will instead simulate hundreds,
then thousands and ten of thousands of data sets, get the marginal effects and compare
them to the true ones (but here I won’t simulate more than 500 datasets).
Let’s first generate 10 datasets:
```
many_datasets <- generate_datasets()
```
Now comes the tricky part. I have this object, `many_datasets` looking like this:
```
many_datasets
```
```
## # A tibble: 1 × 4
## coefs sample_size repeats simulations
## <list> <dbl> <dbl> <list>
## 1 <dbl [3]> 100 10 <list [10]>
```
I would like to fit LPMs to the 10 datasets. For this, I will need to use all the power of functional
programming and the `{tidyverse}`. I will be adding columns to this data frame using `mutate()`
and mapping over the `simulations` list\-column using `modify_depth()`. The list of data frames is
at the second level (remember, it’s a list containing a list containing data frames).
I’ll start by fitting the LPMs, then using `broom::tidy()` I will get a nice data frame of the
estimated parameters. I will then only select what I need, and then bind the rows of all the
data frames. I will do the same for the *true* marginal effects.
I highly suggest that you run the following lines, one after another. It is complicated to understand
what’s going on if you are not used to such workflows. However, I hope to convince you that once
it will click, it’ll be much more intuitive than doing all this inside a loop. Here’s the code:
```
results <- many_datasets %>%
mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>% # fit one LPM per dataset
mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>% # tidy each fit
mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>% # keep terms and estimates
mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>% # drop the intercept
mutate(lpm = map(lpm, bind_rows)) %>% # stack the results into one data frame
mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>%
mutate(true_effect = map(true_effect, bind_rows))
```
This is what `results` looks like:
```
results
```
```
## # A tibble: 1 × 6
## coefs sample_size repeats simulations lpm true_effect
## <list> <dbl> <dbl> <list> <list> <list>
## 1 <dbl [3]> 100 10 <list [10]> <tibble [20 × 2]> <tibble [20 × 2]>
```
Let’s take a closer look at the `lpm` and `true_effect` columns:
```
results$lpm
```
```
## [[1]]
## # A tibble: 20 × 2
## term estimate
## <chr> <dbl>
## 1 x1 0.228
## 2 x2 0.353
## 3 x1 0.180
## 4 x2 0.361
## 5 x1 0.165
## 6 x2 0.374
## 7 x1 0.182
## 8 x2 0.358
## 9 x1 0.125
## 10 x2 0.345
## 11 x1 0.171
## 12 x2 0.331
## 13 x1 0.122
## 14 x2 0.309
## 15 x1 0.129
## 16 x2 0.332
## 17 x1 0.102
## 18 x2 0.374
## 19 x1 0.176
## 20 x2 0.410
```
```
results$true_effect
```
```
## [[1]]
## # A tibble: 20 × 2
## term true_effect
## <chr> <dbl>
## 1 x1 0.183
## 2 x2 0.366
## 3 x1 0.166
## 4 x2 0.331
## 5 x1 0.174
## 6 x2 0.348
## 7 x1 0.169
## 8 x2 0.339
## 9 x1 0.167
## 10 x2 0.335
## 11 x1 0.173
## 12 x2 0.345
## 13 x1 0.157
## 14 x2 0.314
## 15 x1 0.170
## 16 x2 0.340
## 17 x1 0.182
## 18 x2 0.365
## 19 x1 0.161
## 20 x2 0.321
```
Let’s now join the estimated and *true* marginal effects, and compute the difference between
them:
```
simulation_results <- results %>%
mutate(difference = map2(.x = lpm, .y = true_effect, full_join)) %>%
mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>%
mutate(difference = map(difference, ~select(., term, difference))) %>%
pull(difference) %>%
.[[1]]
```
```
## Joining, by = "term"
```
Let’s take a look at the simulation results:
```
simulation_results %>%
group_by(term) %>%
summarise(mean = mean(difference),
sd = sd(difference))
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.0122 0.0368
## 2 x2 -0.0141 0.0311
```
Already with only 10 simulated datasets, the mean difference between the estimated and *true*
effects is close to zero. Let’s rerun the analysis, but for different parameters and numbers of
repetitions. In order to make things easier, we can put all the code into a nifty function:
```
monte_carlo <- function(coefs, sample_size, repeats){
many_datasets <- generate_datasets(coefs, sample_size, repeats)
results <- many_datasets %>%
mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>%
mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>%
mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>%
mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>%
mutate(lpm = map(lpm, bind_rows)) %>%
mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>%
mutate(true_effect = map(true_effect, bind_rows))
simulation_results <- results %>%
mutate(difference = map2(.x = lpm, .y = true_effect, full_join)) %>%
mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>%
mutate(difference = map(difference, ~select(., term, difference))) %>%
pull(difference) %>%
.[[1]]
simulation_results %>%
group_by(term) %>%
summarise(mean = mean(difference),
sd = sd(difference))
}
```
And now, let’s run the simulation for different parameters and sizes:
```
monte_carlo(c(.5, 2, 4), 100, 10)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00826 0.0318
## 2 x2 -0.00732 0.0421
```
```
monte_carlo(c(.5, 2, 4), 100, 100)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.00360 0.0408
## 2 x2 0.00517 0.0459
```
```
monte_carlo(c(.5, 2, 4), 100, 500)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00152 0.0388
## 2 x2 -0.000701 0.0462
```
```
monte_carlo(c(pi, 6, 9), 100, 10)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 -0.00829 0.0421
## 2 x2 0.00178 0.0397
```
```
monte_carlo(c(pi, 6, 9), 100, 100)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.0107 0.0576
## 2 x2 0.00831 0.0772
```
```
monte_carlo(c(pi, 6, 9), 100, 500)
```
```
## Joining, by = "term"
```
```
## # A tibble: 2 × 3
## term mean sd
## <chr> <dbl> <dbl>
## 1 x1 0.00879 0.0518
## 2 x2 0.0113 0.0687
```
We see that, at least for these sets of parameters, the LPM does a good job of estimating marginal
effects.
Now, this study might in itself not be very interesting to you, but I believe the general approach
is quite useful and flexible enough to be adapted to all kinds of use\-cases.
8\.5 Exercises
--------------
### Exercise 1
Suppose you have an Excel workbook that contains data on three sheets. Create a function that
reads entire workbooks, and that returns a list of tibbles, where each tibble is the data of one
sheet (download the example Excel workbook, `example_workbook.xlsx`, from the `assets` folder on
the book’s Github).
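A possible approach could look like the following (a hedged sketch, not the book’s official
solution; it assumes the `{readxl}` package is installed):
```
read_workbook <- function(path){
sheets <- readxl::excel_sheets(path) # list all the sheet names
purrr::map(purrr::set_names(sheets),
~readxl::read_excel(path, sheet = .x)) # one tibble per sheet
}
```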
### Exercise 2
Use one of the `map()` functions to combine two lists into one. Consider the following two lists:
```
mediterranean <- list("starters" = list("humous", "lasagna"), "dishes" = list("sardines", "olives"))
continental <- list("starters" = list("pea soup", "terrine"), "dishes" = list("frikadelle", "sauerkraut"))
```
The result we’d like to have would look like this:
```
$starters
$starters[[1]]
[1] "humous"
$starters[[2]]
[1] "olives"
$starters[[3]]
[1] "pea soup"
$starters[[4]]
[1] "terrine"
$dishes
$dishes[[1]]
[1] "sardines"
$dishes[[2]]
[1] "lasagna"
$dishes[[3]]
[1] "frikadelle"
$dishes[[4]]
[1] "sauerkraut"
```
Chapter 9 Package development
=============================
9\.1 Why you need to write your own package
-------------------------------------------
One of the reasons you might have tried R in the first place is the abundance of packages. As I’m
writing these lines (in November 2020\) 16523 packages are available on CRAN (in August 2019, there
were 14762, and in August 2016, when I first wrote the number of packages down for my first ebook,
it was 8922 packages).
This is a staggering amount of packages and to help you look for the right ones, you can check
out [CRAN Task Views](https://cran.r-project.org/).
You might wonder why the heck you should write your own packages. After all, with so many packages
you’re sure to find something that suits your needs, right? Well, it depends. Of course, you will
not need to write your own function to perform non\-linear regression, or to train a neural network.
But as time goes on, you will start writing your own functions, functions that fit your needs, and
that you use daily. It may be functions that prepare and shape data that you use at work for
analysis. Or maybe you want to deliver an analysis to a client, with data and source code, so
you decide to deliver a package that contains everything (something I’ve already done in the
past). Maybe you want to develop a Shiny application using the `{golem}` framework, which allows
you to build apps as packages.
Ok, but is it necessary to write a package? Why not just write functions inside some scripts and
then simply run or share these scripts (and in the case of Shiny, you don’t have to use `{golem}`)?
This seems like a valid solution at first. However, it quickly becomes tedious, especially if you
have multiple scripts scattered around your computer or inside different subfolders. You’ll also
have to write the documentation on separate files and these can easily get lost or become outdated.
Relying on scripts does not scale well; even if you are not sharing your code outside of your
computer (maybe you’re working on super secret projects at NASA), you always have to think about
future you. And in general, future you thinks that past you is an asshole, precisely because you put
zero effort into documenting, testing and making your code easy to use. Having everything inside a
package takes care of these headaches for you, and will make future you proud of past you. And if
you have to share your code, or deliver to a client, believe me, it will make things a thousand
times easier.
Code that is inside packages is very easy to document and test, especially if you’re using Rstudio.
It also makes it possible to use the wonderful `{covr}` package, which tells you which lines in
which functions are called by your tests. If some lines are missing, write tests that invoke them and
increase the coverage of your tests! Documenting and testing your code is very important; it gives
you assurance that the code you’re writing works, but most importantly, it gives *others* assurance
that what you wrote works. And I include future you in these *others* too.
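For instance, running a coverage report locally boils down to a single call (assuming `{covr}` is
installed and that you run it from the package’s root directory):
```
covr::package_coverage()
```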
In order to share this package with these *others* we are going to use Git. If you’re familiar with
Git, great, you’ll be able to skip some sections. If not, then buckle up, you’re in for a wild ride.
As I mentioned in the introduction, if you want to learn much more than I’ll show about packages
read Wickham ([2015](#ref-wickham2015)). I will only show you the basics, but it should be enough to get you productive.
9\.2 Starting easy: creating a package to share data
----------------------------------------------------
We will start a package from scratch, in order to share data with the world. For this, we are first
going to scrape a table off Wikipedia, prepare the data and then include it in a package. To make
distributing this package easy, we’re going to put it up on Github, so you’ll need a Github account.
Let’s start by creating a Github account.
### 9\.2\.1 Setting up a Github account
Setting up a Github account is very easy; just go over to <https://github.com/>
and simply sign up!
Then you will need to generate a ssh key on your computer. This is a way for you to securely
interact with your Github account, and push your code to the repository without having to always
type your password. I will assume you never created any ssh
keys before, because if you already did, you could skip these steps. I will also assume that you are
on a GNU\+Linux or macOS system; if you’re using windows, the instructions are very similar, but
you’ll first need to install Git available [here](https://git-scm.com/downloads). Git is available
by default on any GNU\+Linux system, and as far as I know also on macOS, but I might be wrong and
you might also need to install git on macOS (but then the instructions are the same whether
you’re using GNU\+Linux or macOS). If you have trouble installing git, read the following section
from the [Pro Git book](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
Then, open a terminal (or the git command line on Windows) and type the following:
```
ssh-keygen
```
This command will generate several files in the `.ssh` directory inside your `HOME` directory. Look
for the file that ends with the `.pub` extension, and copy its contents. You will need to paste
these contents on Github.
So now sign in to Github; once you are signed in, go to settings and then `SSH and GPG keys`:
In the screenshot above, you see my ssh key associated with my account; this will be empty for you.
Click on the top right, *New SSH key*:
Give your key a name, and paste the key you generated before. You’re done! You can now configure
git a bit more by telling it who you are. Open a terminal, adapt and type the following commands:
```
git config --global user.name "Harold Zurcher"
git config --global user.email harold.zurcher@madisonbus.com
```
You’re ready to go!
You can now push code to Github to share it with the world. Or, if you do not want
to share your package (for confidentiality reasons, for instance), you can still benefit from using
git, as it is possible to have an internal git server managed by your company’s IT team.
It is also possible to set up corporate, and thus private, git servers by buying the service
from Github, or other providers such as Gitlab.
### 9\.2\.2 Starting your package
To start writing a package, the easiest way is to load up Rstudio and start a new project, under the
*File* menu. If you’re starting from scratch, just choose the first option, *New Directory* and then
*R package*. Give a name to your package, for example `arcade` (you’ll see why in a bit) and you can
also choose to use git for version control. Now if you check the folder where you chose to save
your package, you will see a folder with the same name as your package, and inside this folder a
lot of new files and other folders. The most important folder for now is the `R` folder. This is
the folder that will hold your `.R` source code files. You can also see these files and folders
inside the *Files* panel from within Rstudio. Rstudio will also have `hello.R` opened, which is a
single demo source file inside the `R` folder. You can get rid of this file, or keep it and edit it.
I would advise you to keep it and even distribute it inside your package. You can save this file
in a special directory called `data-raw`. You don’t need to manually create this folder now, we will
do so in a bit. For now, just follow along.
Now, to start working on your package, the best is to use a package called `{usethis}`. `{usethis}`
is a package that makes writing packages very easy; it includes functions that create the required
subfolders and necessary template files, so that you do not need to constantly check how a given
file should be named or where it should be placed.
Let’s start by adding a readme file. This is easily achieved by using the following function from
`{usethis}`:
```
usethis::use_readme_md()
```
This creates a template README.md file in the root directory of your package. You can now edit this
file accordingly, and that’s it.
The next step could be setting up your package to work with `{roxygen2}`, which will help write
the documentation of your package:
```
usethis::use_roxygen_md()
```
The output tells you to run `devtools::document()`, we will do this later.
Since you have learned about the tidyverse by reading this book, I am willing to bet that you will
want to use the `%>%` operator inside the functions contained in your package. To do this without issues,
which will become apparent later, use the following command:
```
usethis::use_pipe()
```
This will make the `%>%` operator available internally to your package’s functions, but also to the
user that will load the package.
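Under the hood, `use_pipe()` generates a small file (typically `R/utils-pipe.R`) that looks roughly
like this (a sketch; the exact contents depend on your `{usethis}` version):
```
#' Pipe operator
#'
#' See \code{magrittr::\link[magrittr:pipe]{\%>\%}} for details.
#'
#' @name %>%
#' @rdname pipe
#' @keywords internal
#' @export
#' @importFrom magrittr %>%
#' @usage lhs \%>\% rhs
NULL
```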
We are almost done setting up the package. If you plan on distributing data with your package,
you might want to also share the code that prepared the data. For instance, if you receive the
data from your finance department, but this data needs some cleaning before being useful, you could
write a script to do so and then distribute this script also with the package, for reproducibility
purposes. These scripts, while not central to the package, could still be of interest to the users.
The directory to place them is called `data-raw`:
```
usethis::use_data_raw()
```
One final folder is `inst`. You can add files to this folder, and they will be available to the users
that install the package. Users can find the files in the folder where packages get installed. On
GNU\+Linux systems, that would be somewhere like: `/home/user/R/amd64-linux-gnu-library/3.6`. There,
you will find the installation folders of all the packages. If the package you make is called `{spam}`,
you will find the files you put inside the `inst` folder on the root of the installation folder of
`spam`. You can simply create the `inst` folder yourself, or use the following command:
```
usethis::use_directory("inst")
```
Finally, the last step is to give your package a license; this again is only useful if you plan on
distributing it to the world. If you are writing your own package for yourself, or for purposes
internal to your company, this is probably superfluous. I won’t discuss the particularities of
licenses, so let’s just say that for the sake of this example package we are writing, we are going
to use the MIT license:
```
usethis::use_mit_license()
```
This again creates the right file at the right spot. There are other interesting functions inside
the `{usethis}` package, and we will come back to it later.
9\.3 Including data inside the package
--------------------------------------
Many packages include data and we are going to learn how to do it. I’ll assume that we already
have a dataset on hand that we have to share. This is quite simple to do, first let’s simply
load the data:
```
arcade <- readr::read_csv("~/path/to/data/arcade.csv")
```
and then, once again, `{usethis}` comes to our rescue:
```
usethis::use_data(arcade, compress = "xz")
```
that’s it! Well almost. We still need to write a little script that will allow users of your
package to load the data. This script is simply called `data.R` and contains the following lines:
```
#' List of highest-grossing games
#'
#' Source: https://en.wikipedia.org/wiki/Arcade_game#List_of_highest-grossing_games
#'
#' @format A data frame with 6 variables: \code{game}, \code{release_year},
#' \code{hardware_units_sold}, \code{comment_hardware}, \code{estimated_gross_revenue},
#' \code{comment_revenue}
#' \describe{
#' \item{game}{The name of the game}
#' \item{release_year}{The year the game was released}
#' \item{hardware_units_sold}{The amount of hardware units sold}
#' \item{comment_hardware}{Comment accompanying the amount of hardware units sold}
#' \item{estimated_gross_revenue}{Estimated gross revenue in US$ with 2019 inflation}
#' \item{comment_revenue}{Comment accompanying the amount of hardware units sold}
#' }
"arcade"
```
Basically this is a description of the data, and the name with which the user will invoke the data. To
conclude this part, remember the `data-raw` folder? If you used a script to scrape/get the data
from somewhere, or if you had to write code to prepare the data to make it fit for sharing, this
is where you can put that script. I have written such a script, I will discuss it in the next
chapter, where I’ll show you how to scrape data from the internet. You can also save the file
where you wrote all your calls to `{usethis}` functions if you want.
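As an illustration, such a preparation script could look like the following (a purely hypothetical
sketch; the file name and cleaning steps are placeholders, not the actual script discussed in the
next chapter):
```
# data-raw/prepare_arcade.R (hypothetical file name)
library(dplyr)

arcade <- readr::read_csv("data-raw/arcade_raw.csv") %>% # hypothetical raw file
rename(release_year = year) %>% # illustrative cleaning step
filter(!is.na(estimated_gross_revenue)) # illustrative cleaning step

# Save the cleaned data into data/, ready to be shipped with the package
usethis::use_data(arcade, compress = "xz", overwrite = TRUE)
```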
9\.4 Adding functions to your package
-------------------------------------
Functions will be added inside the `R` folder of your package. In there, you will find the `hello.R` file. You can
edit this file if you kept it or you can create a new script. This script can hold one function, or
several functions.
Let’s start with the simplest case; one function inside one script.
### 9\.4\.1 One function inside one script
Create a new R script, or edit the `hello.R` file, and add in the following code:
```
#' Compute descriptive statistics for the numeric columns of a data frame.
#' @param df The data frame to summarise.
#' @param ... Optional. Columns in the data frame
#' @return A data frame with descriptive statistics. If you are only interested in certain columns
#' you can add these columns.
#' @import dplyr
#' @importFrom tidyr gather
#' @export
#' @examples
#' \dontrun{
#' describe(dataset)
#' describe(dataset, col1, col2)
#' }
describe_numeric <- function(df, ...){
if (nargs() > 1) df <- select(df, ...)
df %>%
select_if(is.numeric) %>%
gather(variable, value) %>%
group_by(variable) %>%
summarise_all(list(mean = ~mean(., na.rm = TRUE),
sd = ~sd(., na.rm = TRUE),
nobs = ~length(.),
min = ~min(., na.rm = TRUE),
max = ~max(., na.rm = TRUE),
q05 = ~quantile(., 0.05, na.rm = TRUE),
q25 = ~quantile(., 0.25, na.rm = TRUE),
mode = ~as.character(brotools::sample_mode(., na.rm = TRUE)),
median = ~quantile(., 0.5, na.rm = TRUE),
q75 = ~quantile(., 0.75, na.rm = TRUE),
q95 = ~quantile(., 0.95, na.rm = TRUE),
n_missing = ~sum(is.na(.)))) %>%
mutate(type = "Numeric")
}
```
Save the script under the name `describe.R`.
This function shows you pretty much everything you need to know when writing functions for packages. First,
there’s the comment lines, that start with `#'` and not with `#`. These lines will be converted
into the function’s documentation which you and your package’s users will be able to read in
Rstudio’s *Help* pane. Notice the keywords that start with `@`. These are quite important:
* `@param`: used to define the function’s parameters;
* `@return`: used to define the object returned by the function;
* `@import`: if the function needs functions from another package, in the present case `{dplyr}`;
then this is where you would define these. Separate several packages with a space;
* `@importFrom`: if the function only needs one function from a package, define it here. Read it as
*from tidyr import gather*, very similar to how it is done in Python;
* `@export`: makes the function available to the users. If you omit this, this function will not
be available to the users and only available internally to the other functions of the package. Not
making functions available to users can be useful if you need to write functions that are used by
other functions but are never used by anyone directly. It is still possible to access these internal,
private, functions by using `:::`, as in, `package:::private_function()`;
* `@examples`: lists examples in the documentation. The `\dontrun{}` tag is used for when you do
not want these examples to run when building the package.
As explained before, if the function depends on functions from other packages, then `@import` or
`@importFrom` must be used. But it is also possible to use the `package::function()` syntax like
I did on the following line:
```
mode = ~as.character(brotools::sample_mode(., na.rm = TRUE)),
```
This function uses the `sample_mode()` function from my `{brotools}` package. Since it is the only
function that I am using, I don’t import the whole package with `@import`. I could have done the
same for `gather()` from `{tidyr}` instead of using `@importFrom`, but I wanted to showcase
`@importFrom`, which can also be used to import several functions:
```
@importFrom package function_1 function_2 function_3
```
The way I’m doing this, however, is not optimal. If your package depends on many functions from
other packages that are not available on CRAN, but rather on Github, you might want to do that
in a cleaner way. The cleaner way is to add a “Remotes” field in the package’s DESCRIPTION file
(not the NAMESPACE, which is a different, very important file that gets generated automatically
by `devtools::document()`). I won’t cover this here, but you can read more about it [here](https://cran.r-project.org/web/packages/devtools/vignettes/dependencies.html).
What I will cover is how to declare dependencies to other CRAN packages. These dependencies also
get declared inside the ‘Description’ file, which we will cover in the next section.
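For reference, such a declaration looks roughly like this in the DESCRIPTION file (a sketch,
following the devtools vignette linked above):
```
Imports:
    brotools
Remotes:
    b-rodrigues/brotools
```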
Because I’m doing that in this hacky way, my `{brotools}` package should be installed:
```
devtools::install_github("b-rodrigues/brotools")
```
Again, I want to emphasize that this is not the best way of doing it. However, using the “Remotes”
field as described in the document I linked above is not complicated.
Now comes the function itself. The function is written in pretty much the same way as usual, but
there are some particularities. First of all, the second argument of the function is the `...`, which
were already covered in Chapter 7\. I want to give my users the option to specify any number of
columns and to summarise only those, instead of all of them, which is the default behaviour. But because
I cannot know beforehand how many columns the user wants to summarize, and also because I do not
want to limit the user to 2 or 3 columns, I use the `...`.
But what if the user wants to summarize all the columns? This is taken care of in this line:
```
if (nargs() > 1) df <- select(df, ...)
```
`nargs()` counts the number of arguments of the function. If the user calls the function like so:
```
describe_numeric(mtcars)
```
`nargs()` will return 1\. If, instead, the user calls the function with one or more columns:
```
describe_numeric(mtcars, hp, mpg)
```
then `nargs()` will return 3 (in this case). And thus, this piece of code will be executed:
```
df <- select(df, ...)
```
which selects the columns `hp` and `mpg` from the `mtcars` dataset. This reduced data set is then
the one that is being summarized.
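A quick way to convince yourself of how `nargs()` behaves (a small standalone illustration):
```
f <- function(...) nargs()
f(1) # returns 1
f(1, 2, 3) # returns 3
```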
### 9\.4\.2 Many functions inside a script
If you need to add more functions, you can add more in the same
script, or create one script per function. The advantage of writing more than one function per
script is that you can keep functions that are conceptually similar in the same place. For instance,
if you want to add a function called `describe_character()` to your package, adding it to the same
script where `describe_numeric()` is might be a good idea, so let’s do just that:
```
#' Compute descriptive statistics for the numeric columns of a data frame.
#' @param df The data frame to summarise.
#' @param ... Optional. Columns in the data frame
#' @return A data frame with descriptive statistics. If you are only interested in certain columns
#' you can add these columns.
#' @import dplyr
#' @importFrom tidyr pivot_longer
#' @export
#' @examples
#' \dontrun{
#' describe(dataset)
#' describe(dataset, col1, col2)
#' }
describe_numeric <- function(df, ...){
if (nargs() > 1) df <- select(df, ...)
df %>%
select(where(is.numeric)) %>%
pivot_longer(cols = everything(),
names_to = "variable", values_to = "value") %>%
group_by(variable) %>%
summarise(across(everything(),
tibble::lst(mean = ~mean(., na.rm = TRUE),
sd = ~sd(., na.rm = TRUE),
nobs = ~length(.),
min = ~min(., na.rm = TRUE),
max = ~max(., na.rm = TRUE),
q05 = ~quantile(., 0.05, na.rm = TRUE),
q25 = ~quantile(., 0.25, na.rm = TRUE),
mode = ~as.character(brotools::sample_mode(., na.rm = TRUE)),
median = ~quantile(., 0.5, na.rm = TRUE),
q75 = ~quantile(., 0.75, na.rm = TRUE),
q95 = ~quantile(., 0.95, na.rm = TRUE),
n_missing = ~sum(is.na(.))))) %>%
mutate(type = "Numeric")
}
#' Compute descriptive statistics for the character or factor columns of a data frame.
#' @param df The data frame to summarise.
#' @param type A string describing the type of the columns, e.g. "Character" or "Factor".
#' @return A data frame with a description of the character or factor columns.
#' @import dplyr
#' @importFrom tidyr pivot_longer
describe_character_or_factors <- function(df, type){
df %>%
pivot_longer(cols = everything(),
names_to = "variable", values_to = "value") %>%
group_by(variable) %>%
summarise(across(everything(),
list(mode = ~brotools::sample_mode(., na.rm = TRUE),
nobs = ~length(.),
n_missing = ~sum(is.na(.)),
n_unique = ~length(unique(.))))) %>%
mutate(type = type)
}
#' Compute descriptive statistics for the character columns of a data frame.
#' @param df The data frame to summarise.
#' @return A data frame with a description of the character columns.
#' @import dplyr
#' @export
#' @examples
#' \dontrun{
#' describe(dataset)
#' }
describe_character <- function(df){
df %>%
select(where(is.character)) %>%
describe_character_or_factors(type = "Character")
}
```
Let’s now continue on to the next section, where we will learn to document the package.
9\.5 Documenting your package
-----------------------------
There are several files that you must edit to fully document the package; for now, only the functions
are documented. The first of these files is the `DESCRIPTION` file.
### 9\.5\.1 Description
By default, the `DESCRIPTION` file, which you can find in the root of your package project, contains
the following lines:
```
Package: arcade
Type: Package
Title: What the Package Does (Title Case)
Version: 0.1.0
Author: Who wrote it
Maintainer: The package maintainer <yourself@somewhere.net>
Description: More about what it does (maybe more than one line)
Use four spaces when indenting paragraphs within the Description.
License: What license is it under?
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.0.2
```
Each section is quite self\-explanatory. This is how it could look once you’re done editing it:
```
Package: arcade
Type: Package
Title: List of highest-grossing Arcade Games
Version: 0.1.0
Authors@R: person("Harold", "Zurcher", email = "harold.zurcher@madisonbus.com", role = c("aut", "cre"))
Description: This package contains data about the highest-grossing arcade games from the 70's until
2010's. Also contains some functions to summarize data.
License: CC0
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.0.2
```
The author and maintainer information needs some further explanation; I have declared Harold Zurcher as
the author and creator through the `Authors@R` field, with the `role = c("aut", "cre")` bit. The
`"cre"` role designates the maintainer, so I removed the separate `Maintainer` line.
9\.6 Unit testing your package
------------------------------
9\.1 Why you need to write your own package
-------------------------------------------
One of the reasons you might have tried R in the first place is the abundance of packages. As I’m
writing these lines (in November 2020\) 16523 packages are available on CRAN (in August 2019, there
were 14762, and in August 2016, when I first wrote the number of packages down for my first ebook,
it was 8922 packages).
This is a staggering amount of packages and to help you look for the right ones, you can check
out [CRAN Task Views](https://cran.r-project.org/).
You might wonder why the heck should you write your own packages? After all, with so many packages
you’re sure to find something that suits your needs, right? Well, it depends. Of course, you will
not need to write you own function to perform non\-linear regression, or to train a neural network.
But as time will go, you will start writing your own functions, functions that fit your needs, and
that you use daily. It may be functions that prepare and shape data that you use at work for
analysis. Or maybe you want to deliver an analysis to a client, with data and source code, so
you decide to deliver a package that contains everything (something I’ve already done in the
past). Maybe you want to develop a Shiny applications using the `{golem}` framework, which allows
you to build apps as packages.
Ok, but is it necessary to write a package? Why not just write functions inside some scripts and
then simply run or share these scripts (and in the case of Shiny, you don’t have to use `{golem}`)?
This seems like a valid solution at first. However, it quickly becomes tedious, especially if you
have multiple scripts scattered around your computer or inside different subfolders. You’ll also
have to write the documentation on separate files and these can easily get lost or become outdated.
Relying on scripts does not scale well; even if you are not sharing your code outside of your
computer (maybe you’re working on super secret projects at NASA), you always have to think about
future you. And in general, future you thinks that past you is an asshole, exactly because you put
0 effort in documenting, testing and making your code easy to use. Having everything inside a
package takes care of these headaches for you, and will make future you proud of past you. And if
you have to share your code, or deliver to a client, believe me, it will make things a thousand
times easier.
Code that is inside packages is very easy to document and test, especially if you’re using Rstudio.
It also makes it possible to use the wonderful `{covr}` package, which tells you which lines in
which functions are called by your tests. If some lines are missing, write tests that invoke them and
increase the coverage of your tests! Documenting and testing your code is very important; it gives
you assurance that the code your writing works, but most importantly, it gives *others* assurance
that what you wrote works. And I include future you in these *others* too.
In order to share this package with these *others* we are going to use Git. If you’re familiar with
Git, great, you’ll be able to skip some sections. If not, then buckle up, you’re in for a wild ride.
As I mentioned in the introduction, if you want to learn much more than I’ll show about packages
read Wickham ([2015](#ref-wickham2015)). I will only show you the basics, but it should be enough to get you productive.
9\.2 Starting easy: creating a package to share data
----------------------------------------------------
We will start a package from scratch, in order to share data with the world. For this, we are first
going to scrape a table off Wikipedia, prepare the data and then include it in a package. To make
distributing this package easy, we’re going to put it up on Github, so you’ll need a Github account.
Let’s start by creating a Github account.
### 9\.2\.1 Setting up a Github account
Setting up a Github account is very easy; just go over to <https://github.com/>
and simply sign up!
Then you will need to generate a ssh key on your computer. This is a way for you to securely
interact with your Github account, and push your code to the repository without having to always
type your password. I will assume you never created any ssh
keys before, because if you already did, you could skip these steps. I will also assume that you are
on a GNU\+Linux or macOS system; if you’re using windows, the instructions are very similar, but
you’ll first need to install Git available [here](https://git-scm.com/downloads). Git is available
by default on any GNU\+Linux system, and as far as I know also on macOS, but I might be wrong and
you might also need to install git on macOS (but then the instructions are the same whether
you’re using GNU\+Linux or macOS). If you have trouble installing git, read the following section
from the [Pro Git book](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
Then, open a terminal (or the git command line on Windows) and type the following:
```
ssh-keygen
```
This command will generate several files in the `.ssh` directory inside your `HOME` directory. Look
for the file that ends with the `.pub` extension, and copy its contents. You will need to paste
these contents on Github.
So now sign in to Github; once you are signed in, go to settings and then `SSH and GPG keys`:
In the screenshot above, you see my ssh key associated with my account; this will be empty for you.
Click on the top right, *New SSH key*:
Give your key a name, and paste the key you generated before. You’re done! You can now configure
git a bit more by telling it who you are. Open a terminal, adapt and type the following commands:
```
git config --global user.name "Harold Zurcher"
git config --global user.email harold.zurcher@madisonbus.com
```
You’re ready to go
You can now push code to github to share it with the world. Or if you do not want
to share you package (for confidentiality reasons for instance), you can still benefit from using
git, as it possible to have an internal git server that could be managed by your company’s IT team.
There is also the possibility to set up corporate, and thus private git servers by buying the service
from github, or other providers such as gitlab.
### 9\.2\.2 Starting your package
To start writing a package, the easiest way is to load up Rstudio and start a new project, under the
*File* menu. If you’re starting from scratch, just choose the first option, *New Directory* and then
*R package*. Give a new to your package, for example `arcade` (you’ll see why in a bit) and you can
also choose to use git for version control. Now if you check the folder where you chose to save
your package, you will see a folder with the same name as your package, and inside this folder a
lot of new files and other folders. The most important folder for now is the `R` folder. This is
the folder that will hold your `.R` source code files. You can also see these files and folders
inside the *Files* panel from within Rstudio. Rstudio will also have `hello.R` opened, which is a
single demo source file inside the `R` folder. You can get rid of this file, or keep it and edit it.
I would advise you keep it and even distribute it inside your package. You can save this file
in a special directory called `data-raw`. You don’t need to manually create this folder now, we will
do so in a bit. For now, just follow along.
Now, to start working on your package, the best is to use a package called `{usethis}`. `{usethis}`
is a package that makes writing packages very easy; it includes functions that create the required
subfolders and necessary template files so that you do not need to constantly check how file so\-and\-so
should be placed or named.
Let’s start by adding a readme file. This is easily achieved by using the following function from
`{usethis}`:
```
usethis::use_readme_md()
```
This creates a template README.md file in the root directory of your package. You can now edit this
file accordingly, and that’s it.
The next step could be setting up your package to work with `{roxygen2}`, which will help write
the documentation of your package:
```
usethis::use_roxygen_md()
```
The output tells you to run `devtools::document()`, we will do this later.
Since you have learned about the tidyverse by reading this book, I am willing to bet that you will
want to use the `%>%` operator inside the functions contained in your package. To do this without issues,
which wil become apparent later, use the following command:
```
usethis::use_pipe()
```
This will make the `%>%` operator available internally to your package’s functions, but also to the
user that will load the package.
We are almost done setting up the package. If you plan on distributing data with your package,
you might want to also share the code that prepared the data. For instance, if you receive the
data from your finance department, but this data needs some cleaning before being useful, you could
write a script to do so and then distribute this script also with the package, for reproducibility
purposes. These scripts, while not central to the package, could still be of interest to the users.
The directory to place them is called `data-raw`:
```
usethis::use_data_raw()
```
One final folder is `inst`. You can add files to this folder, and they will be available to the users
that install the package. Users can find the files in the folder where packages get installed. On
GNU\+Linux systems, that would be somewhere like: `/home/user/R/amd64-linux-gnu-library/3.6`. There,
you will find the installation folders of all the packages. If the package you make is called `{spam}`,
you will find the files you put inside the `inst` folder on the root of the installation folder of
`spam`. You can simply create the `inst` folder yourself, or use the following command:
```
usethis::use_directory("inst")
```
Finally, the last step is to give your package a license; this again is only useful if you plan on
distributing it to the world. If you are writing your own package for yourself, or for purposes
internal to your company, this is probably superfluous. I won’t discuss the particularities of
licenses, so let’s just say that for the sake of this example package we are writing, we are going
to use the MIT license:
```
usethis::use_mit_license()
```
This again creates the right file at the right spot. There are other interesting functions inside
the `{usethis}` package, and we will come back to it later.
### 9\.2\.1 Setting up a Github account
Setting up a Github account is very easy; just go over to <https://github.com/>
and simply sign up!
Then you will need to generate a ssh key on your computer. This is a way for you to securely
interact with your Github account, and push your code to the repository without having to always
type your password. I will assume you never created any ssh
keys before, because if you already did, you could skip these steps. I will also assume that you are
on a GNU\+Linux or macOS system; if you’re using windows, the instructions are very similar, but
you’ll first need to install Git available [here](https://git-scm.com/downloads). Git is available
by default on any GNU\+Linux system, and as far as I know also on macOS, but I might be wrong and
you might also need to install git on macOS (but then the instructions are the same whether
you’re using GNU\+Linux or macOS). If you have trouble installing git, read the following section
from the [Pro Git book](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
Then, open a terminal (or the git command line on Windows) and type the following:
```
ssh-keygen
```
This command will generate several files in the `.ssh` directory inside your `HOME` directory. Look
for the file that ends with the `.pub` extension, and copy its contents. You will need to paste
these contents on Github.
So now sign in to Github; once you are signed in, go to settings and then `SSH and GPG keys`:
In the screenshot above, you see my ssh key associated with my account; this will be empty for you.
Click on the top right, *New SSH key*:
Give your key a name, and paste the key you generated before. You’re done! You can now configure
git a bit more by telling it who you are. Open a terminal, adapt and type the following commands:
```
git config --global user.name "Harold Zurcher"
git config --global user.email harold.zurcher@madisonbus.com
```
You’re ready to go
You can now push code to github to share it with the world. Or if you do not want
to share you package (for confidentiality reasons for instance), you can still benefit from using
git, as it possible to have an internal git server that could be managed by your company’s IT team.
There is also the possibility to set up corporate, and thus private git servers by buying the service
from github, or other providers such as gitlab.
### 9\.2\.2 Starting your package
To start writing a package, the easiest way is to load up Rstudio and start a new project, under the
*File* menu. If you’re starting from scratch, just choose the first option, *New Directory* and then
*R package*. Give a new to your package, for example `arcade` (you’ll see why in a bit) and you can
also choose to use git for version control. Now if you check the folder where you chose to save
your package, you will see a folder with the same name as your package, and inside this folder a
lot of new files and other folders. The most important folder for now is the `R` folder. This is
the folder that will hold your `.R` source code files. You can also see these files and folders
inside the *Files* panel from within Rstudio. Rstudio will also have `hello.R` opened, which is a
single demo source file inside the `R` folder. You can get rid of this file, or keep it and edit it.
I would advise you keep it and even distribute it inside your package. You can save this file
in a special directory called `data-raw`. You don’t need to manually create this folder now, we will
do so in a bit. For now, just follow along.
Now, to start working on your package, the best is to use a package called `{usethis}`. `{usethis}`
is a package that makes writing packages very easy; it includes functions that create the required
subfolders and necessary template files so that you do not need to constantly check how file so\-and\-so
should be placed or named.
Let’s start by adding a readme file. This is easily achieved by using the following function from
`{usethis}`:
```
usethis::use_readme_md()
```
This creates a template README.md file in the root directory of your package. You can now edit this
file accordingly, and that’s it.
The next step could be setting up your package to work with `{roxygen2}`, which will help write
the documentation of your package:
```
usethis::use_roxygen_md()
```
The output tells you to run `devtools::document()`, we will do this later.
Since you have learned about the tidyverse by reading this book, I am willing to bet that you will
want to use the `%>%` operator inside the functions contained in your package. To do this without issues,
which wil become apparent later, use the following command:
```
usethis::use_pipe()
```
This will make the `%>%` operator available internally to your package’s functions, but also to the
user that will load the package.
We are almost done setting up the package. If you plan on distributing data with your package,
you might want to also share the code that prepared the data. For instance, if you receive the
data from your finance department, but this data needs some cleaning before being useful, you could
write a script to do so and then distribute this script also with the package, for reproducibility
purposes. These scripts, while not central to the package, could still be of interest to the users.
The directory to place them is called `data-raw`:
```
usethis::use_data_raw()
```
One final folder is `inst`. You can add files to this folder, and they will be available to the users
that install the package. Users can find the files in the folder where packages get installed. On
GNU\+Linux systems, that would be somewhere like: `/home/user/R/amd64-linux-gnu-library/3.6`. There,
you will find the installation folders of all the packages. If the package you make is called `{spam}`,
you will find the files you put inside the `inst` folder on the root of the installation folder of
`spam`. You can simply create the `inst` folder yourself, or use the following command:
```
usethis::use_directory("inst")
```
Finally, the last step is to give your package a license; this again is only useful if you plan on
distributing it to the world. If you are writing your own package for yourself, or for purposes
internal to your company, this is probably superfluous. I won’t discuss the particularities of
licenses, so let’s just say that for the sake of this example package we are writing, we are going
to use the MIT license:
```
usethis::use_mit_license()
```
This again creates the right file at the right spot. There are other interesting functions inside
the `{usethis}` package, and we will come back to it later.
9\.3 Including data inside the package
--------------------------------------
Many packages include data and we are going to learn how to do it. I’ll assume that we already
have a dataset on hand that we have to share. This is quite simple to do, first let’s simply
load the data:
```
arcade <- readr::read_csv("~/path/to/data/arcade.csv")
```
and then use, once again, `{usethis}` comes to our rescue:
```
usethis::use_data(arcade, compress = "xz")
```
that’s it! Well almost. We still need to write a little script that will allow users of your
package to load the data. This script is simply called `data.R` and contains the following lines:
```
#' List of highest-grossing games
#'
#' Source: https://en.wikipedia.org/wiki/Arcade_game#List_of_highest-grossing_games
#'
#' @format A data frame with 6 variables: \code{game}, \code{release_year},
#' \code{hardware_units_sold}, \code{comment_hardware}, \code{estimated_gross_revenue},
#' \code{comment_revenue}
#' \describe{
#' \item{game}{The name of the game}
#' \item{release_year}{The year the game was released}
#' \item{hardware_units_sold}{The amount of hardware units sold}
#' \item{comment_hardware}{Comment accompanying the amount of hardware units sold}
#' \item{estimated_gross_revenue}{Estimated gross revenue in US$ with 2019 inflation}
#' \item{comment_revenue}{Comment accompanying the amount of hardware units sold}
#' }
"arcade"
```
Basically this is a description of the data, and the name with which the user will invoke the data. To
conclude this part, remember the `data-raw` folder? If you used a script to scrape/get the data
from somewhere, or if you had to write code to prepare the data to make it fit for sharing, this
is where you can put that script. I have written such a script, I will discuss it in the next
chapter, where I’ll show you how to scrape data from the internet. You can also save the file
where you wrote all your calls to `{usethis}` functions if you want.
9\.4 Adding functions to your package
-------------------------------------
Functions will be added inside the `R` package. In there, you will find the `hello.R` file. You can
edit this file if you kept it or you can create a new script. This script can hold one function, or
several functions.
Let’s start with the simplest case; one function inside one script.
### 9\.4\.1 One function inside one script
Create a new R script, or edit the `hello.R` file, and add in the following code:
```
#' Compute descriptive statistics for the numeric columns of a data frame.
#' @param df The data frame to summarise.
#' @param ... Optional. Columns in the data frame
#' @return A data frame with descriptive statistics. If you are only interested in certain columns
#' you can add these columns.
#' @import dplyr
#' @importFrom tidyr gather
#' @export
#' @examples
#' \dontrun{
#' describe(dataset)
#' describe(dataset, col1, col2)
#' }
describe_numeric <- function(df, ...){
if (nargs() > 1) df <- select(df, ...)
df %>%
select_if(is.numeric) %>%
gather(variable, value) %>%
group_by(variable) %>%
summarise_all(list(mean = ~mean(., na.rm = TRUE),
sd = ~sd(., na.rm = TRUE),
nobs = ~length(.),
min = ~min(., na.rm = TRUE),
max = ~max(., na.rm = TRUE),
q05 = ~quantile(., 0.05, na.rm = TRUE),
q25 = ~quantile(., 0.25, na.rm = TRUE),
mode = ~as.character(brotools::sample_mode(.), na.rm = TRUE),
median = ~quantile(., 0.5, na.rm = TRUE),
q75 = ~quantile(., 0.75, na.rm = TRUE),
q95 = ~quantile(., 0.95, na.rm = TRUE),
n_missing = ~sum(is.na(.)))) %>%
mutate(type = "Numeric")
}
```
Save the script under the name `describe.R`.
This function shows you pretty much you need to know when writing functions for packages. First,
there’s the comment lines, that start with `#'` and not with `#`. These lines will be converted
into the function’s documentation which you and your package’s users will be able to read in
Rstudio’s *Help* pane. Notice the keywords that start with `@`. These are quite important:
* `@param`: used to define the function’s parameters;
* `@return`: used to define the object returned by the function;
* `@import`: if the function needs functions from another package, in the present case `{dplyr}`;
then this is where you would define these. Separate several package with a space;
* `@importFrom`: if the function only needs one function from a package, define it here. Read it as
*from tidyr import gather*, very similar to how it is done in Python;
* `@export`: makes the function available to the users. If you omit this, this function will not
be available to the users and only available internally to the other functions of the package. Not
making functions available to users can be useful if you need to write functions that are used by
other functions but never be used by anyone directly. It is still possible to access these internal,
private, functions by using `:::`, as in, `package:::private_function()`;
* `@examples`: lists examples in the documentation. The `\dontrun{}` tag is used for when you do
not want these examples to run when building the package.
As explained before, if the function depends on function from other packages, then `@import` or
`@importFrom` must be used. But it is also possible to use the `package::function()` syntax like
I did on the following line:
```
mode = ~as.character(brotools::sample_mode(.), na.rm = TRUE),
```
This function uses the `sample_mode()` function from my `{brotools}` package. Since it is the only
function that I am using, I don’t import the whole package with `@import`. I could have done the
same for `gather()` from `{tidyr}` instead of using `@importFrom`, but I wanted to showcase
`@importFrom`, which can also be use to import several functions:
```
@importFrom package function_1 function_2 function_3
```
The way I’m doing this however is not optimal. If your package depends on many functions from
other packages that are not available on CRAN, but rather on Github, you might want to do that
in a cleaner way. The cleaner way is to add a “Remotes” field in the package’s NAMESPACE (this is
a very important file that gets generated automatically by `devtools::document()`) I won’t
cover this here, but you can read more about it [here](https://cran.r-project.org/web/packages/devtools/vignettes/dependencies.html).
What I will cover is how to declare dependencies to other CRAN packages. These dependencies also
get declared inside the ‘Description’ file, which we will cover in the next section.
Because I’m doing that in this hacky way, my `{brotools}` package should be installed:
```
devtools::install_github("b-rodrigues/brotools")
```
Again, I want to emphasize that this is not the best way of doing it. However, using the “REMOTES”
field as described in the document I linked above is not complicated.
Now comes the function itself. The function is written in pretty much the same way as usual, but
there are some particularities. First of all, the second argument of the function is the `...`, which
were already covered in Chapter 7\. I want to give the option to my users to specify any columns to
summarise only these columns, instead of all of them, which is the default behaviour. But because
I cannot know how many columns the user wants to summarize beforehand, and also because I do not
want to limit the user to 2 or 3 columns, I use the `...`.
But what if the user wants to summarize all the columns? This is taken care of in this line:
```
if (nargs() > 1) df <- select(df, ...)
```
`nargs()` counts the number of arguments of the function. If the user calls the function like so:
```
describe_numeric(mtcars)
```
`nargs()` will return 1\. If, instead, the user calls the function with one or more columns:
```
describe_numeric(mtcars, hp, mpg)
```
then `nargs()` will return 2 (in this case). And does, this piece of code will be executed:
```
df <- select(df, ...)
```
which selects the columns `hp` and `mpg` from the `mtcars` dataset. This reduced data set is then
the one that is being summarized.
### 9\.4\.2 Many functions inside a script
If you need to add more functions, you can add more in the same
script, or create one script per function. The advantage of writing more than one function per
script is that you can keep functions that are conceptually similar in the same place. For instance,
if you want to add a function called `describe_character()` to your package, adding it to the same
script where `describe_numeric()` is might be a good idea, so let’s do just that:
```
#' Compute descriptive statistics for the numeric columns of a data frame.
#' @param df The data frame to summarise.
#' @param ... Optional. Columns in the data frame
#' @return A data frame with descriptive statistics. If you are only interested in certain columns
#' you can add these columns.
#' @import dplyr
#' @importFrom tidyr gather
#' @export
#' @examples
#' \dontrun{
#' describe(dataset)
#' describe(dataset, col1, col2)
#' }
describe_numeric <- function(df, ...){
if (nargs() > 1) df <- select(df, ...)
df %>%
select(is.numeric) %>%
pivot_longer(cols = everything(),
names_to = "variable", values_to = "value") %>%
group_by(variable) %>%
summarise(across(everything(),
tibble::lst(mean = ~mean(., na.rm = TRUE),
sd = ~sd(., na.rm = TRUE),
nobs = ~length(.),
min = ~min(., na.rm = TRUE),
max = ~max(., na.rm = TRUE),
q05 = ~quantile(., 0.05, na.rm = TRUE),
q25 = ~quantile(., 0.25, na.rm = TRUE),
mode = ~as.character(brotools::sample_mode(.), na.rm = TRUE),
median = ~quantile(., 0.5, na.rm = TRUE),
q75 = ~quantile(., 0.75, na.rm = TRUE),
q95 = ~quantile(., 0.95, na.rm = TRUE),
n_missing = ~sum(is.na(.))))) %>%
mutate(type = "Numeric")
}
#' Compute descriptive statistics for the character or factor columns of a data frame.
#' @param df The data frame to summarise.
#' @return A data frame with a description of the character or factor columns.
#' @import dplyr
#' @importFrom tidyr gather
describe_character_or_factors <- function(df, type){
df %>%
pivot_longer(cols = everything(),
names_to = "variable", values_to = "value") %>%
group_by(variable) %>%
summarise(across(everything(),
tibble::lst(mode = ~brotools::sample_mode(., na.rm = TRUE),
nobs = ~length(.),
n_missing = ~sum(is.na(.)),
n_unique = ~length(unique(.))))) %>%
mutate(type = type)
}
#' Compute descriptive statistics for the character columns of a data frame.
#' @param df The data frame to summarise.
#' @return A data frame with a description of the character columns.
#' @import dplyr
#' @export
#' @examples
#' \dontrun{
#' describe_character(dataset)
#' }
describe_character <- function(df){
df %>%
select(where(is.character)) %>%
describe_character_or_factors(type = "Character")
}
```
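Before building the package, you can already test these functions interactively by sourcing the
script and calling them on any data frame. Here is a quick sketch, assuming the script was saved
as `R/describe.R` inside the package project and that `{brotools}` is installed:
```
library(dplyr)
library(tidyr)

source("R/describe.R")

describe_numeric(mtcars, hp, mpg)
describe_character(dplyr::starwars)
```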
Let’s now continue on to the next section, where we will learn to document the package.
9\.5 Documenting your package
-----------------------------
There are several files that you must edit to fully document the package; for now, only the functions
are documented. The first of these files is the `DESCRIPTION` file.
### 9\.5\.1 Description
By default, the `DESCRIPTION` file, which you can find in the root of your package project, contains
the following lines:
```
Package: arcade
Type: Package
Title: What the Package Does (Title Case)
Version: 0.1.0
Author: Who wrote it
Maintainer: The package maintainer <yourself@somewhere.net>
Description: More about what it does (maybe more than one line)
Use four spaces when indenting paragraphs within the Description.
License: What license is it under?
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.0.2
```
Each section is quite self\-explanatory. This is how it could look once you’re done editing it:
```
Package: arcade
Type: Package
Title: List of highest-grossing Arcade Games
Version: 0.1.0
Authors@R: person("Harold", "Zurcher", email = "harold.zurcher@madisonbus.com", role = c("aut", "cre"))
Description: This package contains data about the highest-grossing arcade games from the 70's until
2010's. Also contains some functions to summarize data.
License: CC0
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.0.2
```
The `Authors@R` field, which replaces both `Author` and `Maintainer`, needs some further
explanation; I have added Harold Zurcher as the author and creator, with the `role = c("aut", "cre")`
bit. Because `"cre"` designates the maintainer, I removed the `Maintainer` line.
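One thing the default file does not show is how dependencies are declared. Here is a sketch of what
the relevant fields could look like; the exact packages depend on what your functions use, and the
`Remotes` entry is only needed for the GitHub\-only `{brotools}` dependency mentioned earlier:
```
Imports:
    dplyr,
    tidyr,
    tibble
Remotes:
    b-rodrigues/brotools
```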
9\.6 Unit testing your package
------------------------------
Chapter 10 Further topics
=========================
This chapter is a collection of short sections that show some of the very nice things you can use
R for. These sections are based on past blog posts.
10\.1 Using Python from R with `{reticulate}`
---------------------------------------------
There is a lot of discussion online about the benefits of Python over R and vice versa. When it comes
to data science, they are for the most part interchangeable. I would say that R has an advantage
over Python when it comes to offering specialized packages for certain topics such as
econometrics, bioinformatics, actuarial sciences, etc… while Python seems to offer more possibilities
when it comes to integrating a machine learning model into an app.
However, if most of your work is data analysis/machine learning, both languages are practically
interchangeable. But it can happen that you need access to a very specific library with no R
equivalent. Well, in that case, no need to completely switch to Python, as you can call Python code
from R using the `{reticulate}` package.
`{reticulate}` allows you to seamlessly call Python functions from an R session. An easy way to use
`{reticulate}` is to start a new notebook, but you can also use `{reticulate}` and the included
functions interactively. However, I find that RStudio notebooks work very well for this particular
use\-case, because you can write R and Python chunks, and thus keep the lines of code of each
language clearly separated.
Let’s see how this works. First of all, you might need to specify the path to your Python executable.
In my case, because I’ve installed Python using Anaconda, I need to specify it:
```
# This is an R chunk
library(reticulate)
use_python("~/miniconda3/bin/python")
```
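Once the path is set, you can import any Python module and use it from R, with `$` playing the
role of Python’s `.`. A minimal sketch, assuming the `numpy` module is installed in that Python
environment:
```
# This is an R chunk
np <- import("numpy")

np$linspace(0, 1, num = 5L)
```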
10\.2 Generating PDF or Word reports with R
-------------------------------------------
10\.3 Scraping the internet
---------------------------
10\.4 Regular expressions
-------------------------
10\.5 Setting up a blog with `{blogdown}`
-----------------------------------------
1 Getting Started
=================
R is a program for statistical computing. It provides a rich set of built\-in
tools for cleaning, exploring, modeling, and visualizing data.
The main way you’ll interact with R is by writing code or *expressions* in the
R programming language. Most people use “R” as a blanket term to refer to both
the program and the programming language. Usually, the distinction doesn’t
matter, but in cases where it does, we’ll point it out and be specific.
By writing code, you create an unambiguous record of every step taken in an
analysis. This is one of the major advantages of R (and other programming
languages) over point\-and\-click software like Tableau and Microsoft Excel.
Code you write in R is *reproducible*: you can share it with someone else, and
if they run it with the same inputs, they’ll get the same results.
Another advantage of writing code is that it’s often *reusable*. This can mean
automating a repetitive task within a single analysis, recycling code from one
analysis into another, or *packaging* useful code for distribution to the
general public. At the time of writing, there were over 17,000 user\-contributed
packages available for R, spanning a broad range of disciplines.
R is one of many programming languages used in data science. Compared to other
programming languages, R’s particular strengths are its interactivity, built\-in
support for handling missing data, the ease with which you can produce
high\-quality data visualizations, and its broad base of user\-contributed
packages (due to both its age and growing popularity).
#### Learning Objectives
* Run code in the R console
* Call functions and create variables
* Check (in)equality of values
* Describe a file system, directory, and working directory
* Write paths to files or directories
* Get or set the R working directory
* Identify RDS, CSV, TSV files and functions for reading these
* Inspect the structure of a data frame
1\.1 Prerequisites
------------------
You can download R for free [here](https://cloud.r-project.org/), and can find
an install guide here.
In addition to R, you’ll need RStudio. RStudio is an *integrated development
environment* (IDE), which means it’s a comprehensive program for writing,
editing, searching, and running code. You can do all of these things without
RStudio, but RStudio makes the process easier. You can download RStudio Desktop
Open\-Source Edition for free
[here](https://www.rstudio.com/products/rstudio/download/), and can find an
install guide here.
1\.2 The R Interface
--------------------
The first time you open RStudio, you’ll see a window divided into several
panes, like this:
Don’t worry if the text in the panes isn’t exactly the same on your computer;
it depends on your operating system and versions of R and RStudio. The console
pane, on the left, is the main interface to R. If you type R code into the
console and press the `Enter` key on your keyboard, R will run your code and
return the result.
On the right are the environment pane and the plots pane. The environment pane
shows data in your R workspace. The plots pane shows any plots you make, and
also has tabs to browse your file system and to view R’s built\-in help files.
We’ll learn more about these gradually, but to get started we’ll focus on the
console pane.
Let’s start by using R to do some arithmetic. In the console, you’ll see that
the cursor is on a line that begins with `>`, called the *prompt*. You can make
R compute the sum \\(2 \+ 2\\) by typing the code `2 + 2` after the prompt and then
pressing the `Enter` key. Your code and the result from R should look like
this:
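```
> 2 + 2
[1] 4
```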
R always puts the result on a separate line (or lines) from your code. In this
case, the result begins with the tag `[1]`, which is a hint from R that the
result is a *vector* and that this line starts with the *element* at position 1\.
We’ll learn more about vectors in Section [2\.1](data-structures.html#vectors), and eventually learn
about other data types that are displayed differently. The result of the sum,
`4`, is displayed after the tag. In this reader, results from R will usually be
typeset in monospace and further prefixed with `##` to indicate that they
aren’t code.
If you enter an incomplete expression, R will change the prompt to `+`, then
wait for you to type the rest of the expression and press the `Enter` key.
Here’s what it looks like if you only enter `2 +`:
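```
> 2 +
+
```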
You can finish entering the expression, or you can cancel it by pressing the
`Esc` key (or `Ctrl-c` if you’re using R without RStudio). R can only tell an
expression is incomplete if it’s missing something, like the second operand in
`2 +`. So if you mean to enter `2 + 2` but accidentally enter `2`, which is a
complete expression by itself, don’t expect R to read your mind and wait for
more input!
Try out some other arithmetic in the R console. Besides `+` for addition, the
other arithmetic operators are:
* `-` for subtraction
* `*` for multiplication
* `/` for division
* `%%` for remainder division (modulo)
* `^` or `**` for exponentiation
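For example, remainder division returns what is left over after dividing the first operand by the
second:
```
7 %% 3
```
```
## [1] 1
```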
You can combine these and use parentheses to make more complicated expressions,
just as you would when writing a mathematical expression. When R computes a
result, it follows the standard order of operations: parentheses,
exponentiation, multiplication, division, addition, and finally subtraction.
For example, to estimate the area of a circle with radius 3, you can write:
```
3.14 * 3^2
```
```
## [1] 28.26
```
You can write R expressions with any number of spaces (including none) around
the operators and R will still compute the result. Nevertheless, putting spaces
in your code makes it easier for you and others to read, so it’s good to make
it a habit. Put spaces around most operators, after commas, and after keywords.
### 1\.2\.1 Variables
Since R is designed for mathematics and statistics, you might expect that it
provides a better approximation for \\(\\pi\\) than `3.14`. R and most other
programming languages allow you to create named values, or *variables*. R
provides a built\-in variable called `pi` for the value of \\(\\pi\\). You can
display a variable’s value by entering its name in the console:
```
pi
```
```
## [1] 3.141593
```
You can also use variables in expressions. For instance, here’s a more precise
expression for the area of a circle with radius 3:
```
pi * 3^2
```
```
## [1] 28.27433
```
You can define your own variables with the assignment operator `=` or `<-`. In
most circumstances these two operators are interchangeable. For clarity, it’s
best to choose one you like and use it consistently in all of your R code. In
this reader, we use `=` for assignment because this is the assignment operator
in most programming languages.
The main reason to use variables is to save results so that you can use them
on other expressions later. For example, to save the area of the circle in a
variable called `area`, we can write:
```
area = pi * 3^2
```
In R, variable names can contain any combination of letters, numbers, dots `.`,
and underscores `_`, but must always start with a letter or a dot. Spaces and
other symbols are not allowed in variable names.
Now we can use the `area` variable anywhere we want the computed area. Notice
that when you assign a result to a variable, R doesn’t automatically display
that result. If you want to see the result as well, you have to enter the
variable’s name as a separate expression:
```
area
```
```
## [1] 28.27433
```
Another reason to use variables is to make an expression more general. For
instance, you might want to compute the area of several circles with different
radii. Then the expression `pi * 3^2` is too specific. You can rewrite it as
`pi * r^2`, and then assign a value to the variable `r` just before you compute
the area. Here’s the code to compute and display the area of a circle with
radius 1 this way:
```
r = 1
area = pi * r^2
area
```
```
## [1] 3.141593
```
Now if you want to compute the area for a different radius, all you have to do
is change `r` and run the code again (R will not change `area` until you do
this). Writing code that’s general enough to reuse across multiple problems can
be a big time\-saver in the long run. Later on, we’ll see ways to make this code
even easier to reuse.
### 1\.2\.2 Strings
R treats anything inside single or double quotes as literal text rather than as
an expression to evaluate. In programming jargon, a piece of literal text is
called a *string*. You can use whichever kind of quotes you prefer, but the
quote at the beginning of the string must match the quote at the end.
```
'Hi'
```
```
## [1] "Hi"
```
```
"Hello!"
```
```
## [1] "Hello!"
```
Numbers and strings are not the same thing, so for example R considers `1`
different from `"1"`. As a result, you can’t use strings with most of R’s
arithmetic operators. For instance, this code causes an error:
```
"1" + 3
```
```
## Error in "1" + 3: non-numeric argument to binary operator
```
The error message notes that `+` is not defined for non\-numeric values.
### 1\.2\.3 Comparisons
Besides arithmetic, you can also use R to compare values. The comparison
operators are:
* `<` for “less than”
* `>` for “greater than”
* `<=` for “less than or equal to”
* `>=` for “greater than or equal to”
* `==` for “equal to”
* `!=` for “not equal to”
The “equal to” operator uses two equal signs so that R can distinguish it from
`=`, the assignment operator.
Let’s look at a few examples:
```
1.5 < 3
```
```
## [1] TRUE
```
```
"a" > "b"
```
```
## [1] FALSE
```
```
pi == 3.14
```
```
## [1] FALSE
```
```
"hi" == 'hi'
```
```
## [1] TRUE
```
When you make a comparison, R returns a *logical* value, `TRUE` or `FALSE`, to
indicate the result. Logical values are not the same as strings, so they are
not quoted.
Logical values are values, so you can use them in other computations. For
example:
```
TRUE
```
```
## [1] TRUE
```
```
TRUE == FALSE
```
```
## [1] FALSE
```
Section [2\.4\.5](data-structures.html#logic) describes more ways to use and combine logical values.
Beware that the equality operators don’t always return `FALSE` when you compare
two different types of data:
```
"1" == 1
```
```
## [1] TRUE
```
```
"TRUE" <= TRUE
```
```
## [1] TRUE
```
```
"FALSE" <= TRUE
```
```
## [1] TRUE
```
Section [2\.2\.2](data-structures.html#implicit-coercion) explains why this happens, and Appendix
[5\.1](appendix.html#more-about-comparisons) explains several other ways to compare
values.
### 1\.2\.4 Calling Functions
Most of R’s features are provided through *functions*, pieces of reusable code.
You can think of a function as a machine that takes some inputs and uses them
to produce some output. In programming jargon, the inputs to a function are
called *arguments*, the output is called the *return value*, and when we use a
function, we say we’re *calling* the function.
To call a function, write its name followed by parentheses. Put any arguments
to the function inside the parentheses. For example, in R, the sine function is
named `sin` (there are also `cos` and `tan`). So we can compute the sine of
\\(\\pi / 4\\) with this code:
```
sin(pi / 4)
```
```
## [1] 0.7071068
```
There are many functions that accept more than one argument. For instance, the
`sum` function accepts any number of arguments and adds them all together. When
you call a function with multiple arguments, separate the arguments with
commas. So another way to compute \\(2 \+ 2\\) in R is:
```
sum(2, 2)
```
```
## [1] 4
```
When you call a function, R assigns each argument to a *parameter*. Parameters
are special variables that represent the inputs to a function and only exist
while that function runs. For example, the `log` function, which computes a
logarithm, has parameters `x` and `base` for the operand and base of the
logarithm, respectively. The next section, Section [1\.3](getting-started.html#getting-help), explains
how to look up the parameters for a function.
By default, R assigns arguments to parameters based on their order. The first
argument is assigned to the function’s first parameter, the second to the
second, and so on. So we can compute the logarithm of 64, base 2, with this
code:
```
log(64, 2)
```
```
## [1] 6
```
The argument 64 is assigned to the parameter `x`, and the argument 2 is
assigned to the parameter `base`. You can also assign arguments to parameters
by name with `=` (not `<-`), overriding their positions. So some other ways we
could write the call above are:
```
log(64, base = 2)
```
```
## [1] 6
```
```
log(x = 64, base = 2)
```
```
## [1] 6
```
```
log(base = 2, x = 64)
```
```
## [1] 6
```
```
log(base = 2, 64)
```
```
## [1] 6
```
All of these are equivalent. When you write code, choose whatever seems the
clearest to you. Leaving parameter names out of calls saves typing, but
including some or all of them can make the code easier to understand.
Parameters are not regular variables, and only exist while their associated
function runs. You can’t set them before a call, nor can you access them after
a call. So this code causes an error:
```
x = 64
log(base = 2)
```
```
## Error in eval(expr, envir, enclos): argument "x" is missing, with no default
```
In the error message, R says that we forgot to assign an argument to the
parameter `x`. We can keep the variable `x` and correct the call by making `x`
an argument (for the parameter `x`):
```
log(x, base = 2)
```
```
## [1] 6
```
Or, written more explicitly:
```
log(x = x, base = 2)
```
```
## [1] 6
```
In summary, variables and parameters are distinct, even if they happen to have
the same name. The variable `x` is not the same thing as the parameter `x`.
1\.3 Getting Help
-----------------
Learning and using a language is hard, so it’s important to know how to get
help. The first place to look for help is R’s built\-in documentation. In the
console, you can access a specific help page by name with `?` followed by the
name of the page.
There are help pages for all of R’s built\-in functions, usually with the same
name as the function itself. So the code to open the help page for the `log`
function is:
```
?log
```
For functions, help pages usually include a brief description, a list of
parameters, a description of the return value, and some examples.
There are also help pages for other topics, such as built\-in mathematical
constants (such as `?pi`), data sets (such as `?iris`), and operators. To look
up the help page for an operator, put the operator’s name in single or double
quotes. For example, this code opens the help page for the arithmetic
operators:
```
?"+"
```
It’s always okay to put quotes around the name of the page when you use `?`,
but they’re only required if it contains non\-alphabetic characters. So `?sqrt`,
`?'sqrt'`, and `?"sqrt"` all open the documentation for `sqrt`, the square root
function.
Sometimes you might not know the name of the help page you want to look up. You
can do a general search of R’s help pages with `??` followed by a string of
search terms. For example, to get a list of all help pages related to linear
models:
```
??"linear model"
```
This search function doesn’t always work well, and it’s often more efficient to
use an online search engine. When you search for help with R online, include
“R” as a search term. Alternatively, you can use [RSeek](https://rseek.org/), which
restricts the search to a selection of R\-related websites.
### 1\.3\.1 When Something Goes Wrong
As a programmer, sooner or later you’ll run some code and get an error message
or result you didn’t expect. Don’t panic! Even experienced programmers make
mistakes regularly, so learning how to diagnose and fix problems is vital.
Try going through these steps:
1. If R returned a warning or error message, read it! If you’re not sure what
the message means, try searching for it online.
2. Check your code for typographical errors, including incorrect capitalization
and missing or extra commas, quotes, and parentheses.
3. Test your code one line at a time, starting from the beginning. After each
line that assigns a variable, check that the value of the variable is what
you expect. Try to determine the exact line where the problem originates
(which may differ from the line that emits an error!).
If none of these steps help, try asking online. [Stack Overflow](https://stackoverflow.com/) is a
popular question and answer website for programmers. Before posting, make sure
to read about [how to ask a good question](https://stackoverflow.com/help/how-to-ask).
1\.4 File Systems
-----------------
Most of the time, you won’t just write code directly into the R console.
Reproducibility and reusability are important benefits of R over
point\-and\-click software, and in order to realize these, you have to save your
code to your computer’s hard drive. Let’s start by reviewing how files on a
computer work. You’ll need to understand that before you can save your code,
and it will also be important later on for loading data sets.
Your computer’s *file system* is a collection of *files* (chunks of data) and
*directories* (or “folders”) that organize those files. For instance, the file
system on a computer shared by [Ada](https://en.wikipedia.org/wiki/Ada_Lovelace) and [Charles](https://en.wikipedia.org/wiki/Charles_Babbage), two pioneers of
computing, might look like this:
Don’t worry if your file system looks a bit different from the picture.
File systems have a tree\-like structure, with a top\-level directory called the
*root directory*. On Ada and Charles’ computer, the root is called `/`, which
is also what it’s called on all macOS and Linux computers. On Windows, the root
is usually called `C:/`, but sometimes other letters, like `D:/`, are also used
depending on the computer’s hardware.
A *path* is a list of directories that leads to a specific file or directory on
a file system (imagine giving directions to someone as they walk through the
file system). We use forward slashes `/` to separate the directories in a path,
rather than commas or spaces. The root directory includes a forward slash as
part of its name, and doesn’t need an extra one.
For example, suppose Ada wants to write a path to the file `cats.csv`. She can
write the path like this:
```
/Users/ada/cats.csv
```
You can read this path from left\-to\-right as, “Starting from the root
directory, go to the `Users` directory, then from there go to the `ada`
directory, and from there go to the file `cats.csv`.” Alternatively, you can
read the path from right\-to\-left as, “The file `cats.csv` inside of the `ada`
directory, which is inside of the `Users` directory, which is in the root
directory.”
As another example, suppose Charles wants a path to the `Programs` directory.
He can write:
```
/Programs/
```
The `/` at the end of this path is a reminder that `Programs` is a directory, not
a file. Charles could also write the path like this:
```
/Programs
```
This is still correct, but it’s not as obvious that `Programs` is a directory.
In other words, when a path leads to a directory, including a *trailing slash*
is optional, but makes the meaning of the path clearer. Paths that lead to
files never have a trailing slash.
On Windows computers, paths are usually written with backslashes `\` to
separate directories instead of forward slashes. Fortunately, R uses forward
slashes `/` for all paths, regardless of the operating system. So when you’re
working in R, use forward slashes and don’t worry about the operating system.
This is especially convenient when you want to share code with someone that
uses a different operating system than you.
### 1\.4\.1 Absolute \& Relative Paths
A path that starts from the root directory, like all of the ones we’ve seen so
far, is called an *absolute path*. The path is “absolute” because it
unambiguously describes where a file or directory is located. The downside is
that absolute paths usually don’t work well if you share your code.
For example, suppose Ada uses the path `/Users/ada/cats.csv` to load the
`cats.csv` file in her code. If she shares her code with another pioneer of
computing, say [Gladys](https://en.wikipedia.org/wiki/Gladys_West), who also has a copy of `cats.csv`, it might
not work. Even though Gladys has the file, she might not have it in a directory
called `ada`, and might not even have a directory called `ada` on her computer.
Because Ada used an absolute path, her code works on her own computer, but
isn’t portable to others.
On the other hand, a *relative path* is one that doesn’t start from the root
directory. The path is “relative” to an unspecified starting point, which
usually depends on the context.
For instance, suppose Ada’s code is saved in the file `analysis.R` (more about
`.R` files in Section [1\.4\.2](getting-started.html#r-scripts)), which is in the same directory as
`cats.csv` on her computer. Then instead of an absolute path, she can use a
relative path in her code:
```
cats.csv
```
The context is the location of `analysis.R`, the file that contains the code.
In other words, the starting point on Ada’s computer is the `ada` directory. On
other computers, the starting point will be different, depending on where the
code is stored.
Now suppose Ada sends her corrected code in `analysis.R` to Gladys, and tells
Gladys to put it in the same directory as `cats.csv`. Since the path `cats.csv`
is relative, the code will still work on Gladys’ computer, as long as the two
files are in the same directory. The name of that directory and its location in
the file system don’t matter, and don’t have to be the same as on Ada’s
computer. Gladys can put the files in a directory `/Users/gladys/from_ada/` and
the path (and code) will still work.
Relative paths can include directories. For example, suppose that Charles wants
to write a relative path from the `Users` directory to a cool selfie he took.
Then he can write:
```
charles/cool_hair_selfie.jpg
```
You can read this path as, “Starting from wherever you are, go to the `charles`
directory, and from there go to the `cool_hair_selfie.jpg` file.” In other
words, the relative path depends on the context of the code or program that
uses it.
When you use paths in R code, they should almost always be relative paths. This
ensures that the code is portable to other computers, which is an important
aspect of reproducibility. Another benefit is that relative paths tend to be
shorter, making your code easier to read (and write).
When you write paths, there are three shortcuts you can use. These are most
useful in relative paths, but also work in absolute paths:
* `.` means the current directory.
* `..` means the directory above the current directory.
* `~` means the *home directory*. Each user has their own home directory, whose
location depends on the operating system and their username. Home directories
are typically found inside `C:/Users/` on Windows, `/Users/` on macOS, and
`/home/` on Linux.
As an example, suppose Ada wants to write a (relative) path from the `ada`
directory to Charles’ cool selfie. Using these shortcuts, she can write:
```
../charles/cool_hair_selfie.jpg
```
Read this as, “Starting from wherever you are, go up one directory, then go to
the `charles` directory, and then go to the `cool_hair_selfie.jpg` file.” Since
`/Users/ada` is Ada’s home directory, she could also write the path as:
```
~/../charles/cool_hair_selfie.jpg
```
This path has the same effect, but the meaning is slightly different. You can
read it as “Starting from your home directory, go up one directory, then go to
the `charles` directory, and then go to the `cool_hair_selfie.jpg` file.”
The `..` and `~` shortcuts are frequently used and worth remembering. The `.`
shortcut is included here in case you see it in someone else’s code. Since it
means the current directory, a path like `./cats.csv` is identical to
`cats.csv`, and the latter is preferable for being simpler. There are a few
specific situations where `.` is necessary, but they fall outside the scope of
this text.
### 1\.4\.2 R Scripts
Now that you know how file systems and paths work, you’re ready to learn how to
save your R code to a file. R code is usually saved into an *R script*
(extension `.R`) or an *R Markdown file* (extension `.Rmd`). R scripts are
slightly simpler, so let’s focus on those.
In RStudio, you can create a new R script with this menu option:
```
File -> New File -> R Script
```
This will open a new pane in RStudio, like this:
The new pane is the scripts pane, which displays all of the R scripts you’re
editing. Each script appears in a separate tab. In the screenshot, only one
script, the new script, is open.
Editing a script is similar to editing any other text document. You can write,
delete, copy, cut, and paste text. You can also save the file to your
computer’s file system. When you do, pay attention to where you save the file,
as you might need it later.
The contents of an R script should be R code. Anything else you want to write
in the script (notes, documentation, etc.) should be in a *comment*. In R,
comments begin with `#` and extend to the end of the line:
```
# This is a comment.
```
R will ignore comments when you run your code.
When you start a new project, it’s a good idea to create a specific directory
for all of the project’s files. If you’re using R, you should also create one
or more R scripts in that directory. As you work, write your code directly into
a script. Arrange your code in the order of the steps to solve the problem,
even if you write some parts before others. Comment out or delete any lines of
code that you try but ultimately decide you don’t need. Make sure to save the
file periodically so that you don’t lose your work. Following these guidelines
will help you stay organized and make it easier to share your code with others
later.
While editing, you can run the current line in the R console by pressing
`Ctrl`\+`Enter` on Windows and Linux, or `Cmd`\+`Enter` on macOS. This way you
can test and correct your code as you write it.
If you want, you can instead run (or *source*) the entire R script, by calling
the `source` function with the path to the script as an argument. This is also
what the “Source on Save” check box refers to in RStudio. The code runs in
order, only stopping if an error occurs.
For instance, if you save the script as `my_cool_script.R`, then you can run
`source("my_cool_script.R")` in the console to run the entire script (pay
attention to the path—it may be different on your computer).
R Markdown files are an alternative format for storing R code. They provide a
richer set of formatting options, and are usually a better choice than R
scripts if you’re writing a report that contains code. You can learn more
about R Markdown files [here](https://rmarkdown.rstudio.com/).
### 1\.4\.3 The Working Directory
Section [1\.4\.1](getting-started.html#absolute-relative-paths) explained that relative paths have a
starting point that depends on the context where the path is used. We can make
that idea more concrete for R. The *working directory* is the starting point R
uses for relative paths. Think of the working directory as the directory R is
currently “at” or watching.
The function `getwd` returns the absolute path for the current working
directory, as a string. It doesn’t require any arguments:
```
getwd()
```
```
## [1] "/home/nick/workshop/datalab/workshops/r_basics"
```
On your computer, the output from `getwd` will likely be different. This is a
very useful function for getting your bearings when you write relative paths.
If you write a relative path and it doesn’t work as expected, the first thing
to do is check the working directory.
The related `setwd` function changes the working directory. It takes one
argument: a path to the new working directory. Here’s an example:
```
setwd("..")
# Now check the working directory.
getwd()
```
Generally, you should avoid using calls to `setwd` in your R scripts and R
Markdown files. Calling `setwd` makes your code more difficult to understand,
and can always be avoided by using appropriate relative paths. If you call
`setwd` with an absolute path, it also makes your code less portable to other
computers. It’s fine to use `setwd` interactively (in the R console), but avoid
making your saved code dependent on it.
Another function that’s useful for dealing with the working directory and file
system is `list.files`. The `list.files` function returns the names of all of
the files and directories inside of a directory. It accepts a path to a
directory as an argument, or assumes the working directory if you don’t pass a
path. For instance:
```
# List files and directories in /home/.
list.files("/home/")
```
```
## [1] "lost+found" "nick"
```
```
# List files and directories in the working directory.
list.files()
```
```
## [1] "_bookdown_files" "_bookdown.yml"
## [3] "_main.rds" "01_getting-started.Rmd"
## [5] "02_data-structures.Rmd" "03_exploring-data_files"
## [7] "03_exploring-data.Rmd" "04_automating-tasks.Rmd"
## [9] "05_appendix.Rmd" "97_where-to-learn-more.Rmd"
## [11] "98_acknowledgements.Rmd" "99_assessment.Rmd"
## [13] "assessment" "data"
## [15] "docs" "graphviz"
## [17] "img" "index.md"
## [19] "index.Rmd" "knit.R"
## [21] "LICENSE" "makefile"
## [23] "notes" "R"
## [25] "raw" "README.md"
## [27] "rendere73e1982629f.rds" "renv"
## [29] "renv.lock"
```
As usual, since you have a different computer, you’re likely to see different
output if you run this code. If you call `list.files` with an invalid path or
an empty directory, the output is `character(0)`:
```
list.files("/this/path/is/fake/")
```
```
## character(0)
```
Later on, we’ll learn about what `character(0)` means more generally.
1\.5 Reading Files
------------------
Analyzing data sets is one of the most common things to do in R. The first step
is to get R to read your data. Data sets come in a variety of file formats, and
you need to identify the format in order to tell R how to read the data.
Most of the time, you can guess the format of a file by looking at its
*extension*, the characters (usually three) after the last dot `.` in the
filename. For example, the extension `.jpg` or `.jpeg` indicates a [JPEG image
file](https://en.wikipedia.org/wiki/JPEG). Some operating systems hide extensions by default, but you can find
instructions to change this setting online by searching for “show file
extensions” and your operating system’s name. The extension is just part of the
file’s name, so it should be taken as a hint about the file’s format rather
than a guarantee.
R has built\-in functions for reading a variety of formats. The R community also
provides *packages*, shareable and reusable pieces of code, to read even more
formats. You’ll learn more about packages later, in Section [3\.2](exploring-data.html#packages).
For now, let’s focus on data sets that can be read with R’s built\-in functions.
Here are several formats that are frequently used to distribute data, along
with the name of a built\-in function or contributed package that can read the
format:
| Name | Extension | Function or Package | Tabular? | Text? |
| --- | --- | --- | --- | --- |
| Comma\-separated Values | `.csv` | `read.csv` | Yes | Yes |
| Tab\-separated Values | `.tsv` | `read.delim` | Yes | Yes |
| Fixed\-width File | `.fwf` | `read.fwf` | Yes | Yes |
| Microsoft Excel | `.xlsx` | **readxl** package | Yes | No |
| Microsoft Excel 1993\-2007 | `.xls` | **readxl** package | Yes | No |
| [Apache Arrow](https://arrow.apache.org/) | `.feather` | **arrow** package | Yes | No |
| R Data | `.rds` | `readRDS` | Sometimes | No |
| R Data | `.rda` | `load` | Sometimes | No |
| Plaintext | `.txt` | `readLines` | Sometimes | Yes |
| Extensible Markup Language | `.xml` | **xml2** package | No | Yes |
| JavaScript Object Notation | `.json` | **jsonlite** package | No | Yes |
A *tabular* data set is one that’s structured as a table, with rows and columns.
We’ll focus on tabular data sets for most of this reader, since they’re easier
to get started with. Here’s an example of a tabular data set:
| Fruit | Quantity | Price |
| --- | --- | --- |
| apple | 32 | 1\.49 |
| banana | 541 | 0\.79 |
| pear | 10 | 1\.99 |
A *text* file is one that contains human\-readable lines of text. You can check
this by opening the file with a text editor such as Microsoft Notepad or macOS
TextEdit. Many file formats use text in order to make the format easier to work
with.
For instance, a *comma\-separated values* (CSV) file records tabular data
using one line per row, with commas separating columns. If you store the table
above in a CSV file and open the file in a text editor, here’s what you’ll see:
```
Fruit,Quantity,Price
apple,32,1.49
banana,541,0.79
pear,10,1.99
```
A *binary* file is one that’s not human\-readable. You can’t just read off the
data if you open a binary file in a text editor, but they have a number of
other advantages. Compared to text files, binary files are often faster to read
and take up less storage space (bytes).
As an example, R’s built\-in binary format is called *RDS* (which may stand for
“R data serialized”). RDS files are extremely useful for backing up work, since
they can store any kind of R object, even ones that are not tabular. You can
learn more about how to create an RDS file on the `?saveRDS` help page, and how
to read one on the `?readRDS` help page.
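For example, here’s a minimal sketch of saving an object to an RDS file and reading it back (the
file name is arbitrary):
```
saveRDS(pi, "pi.rds")
result = readRDS("pi.rds")
result
```
```
## [1] 3.141593
```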
### 1\.5\.1 Hello, Data!
Let’s read our first data set! Over the next few sections, we’re going to
explore data from the U.S. Bureau of Labor Statistics about median employee
earnings. The data set was prepared as part of the Tidy Tuesday R community
project. You can find more details about the data set [here](https://github.com/rfordatascience/tidytuesday/tree/master/data/2021/2021-02-23), and you can
download the data set [here](https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-02-23/earn.csv) (you may need to choose `File -> Save As...` in your browser’s menu).
The data set is a file called `earn.csv`, which suggests it’s a CSV file. In
this case, the extension is correct, so we can read the file into R with the
built\-in `read.csv` function. The first argument is the path to where you saved
the file, which may be different on your computer. The `read.csv` function
returns the data set, but R won’t keep the data in memory unless we assign the
returned result to a variable:
```
earn = read.csv("data/earn.csv")
```
The variable name `earn` here is arbitrary; you can choose something different
if you want. However, in general, it’s a good habit to choose variable names
that describe the contents of the variable somehow.
If you tried running the line of code above and got an error message, pay
attention to what the error message says, and remember the strategies to get
help in Section [1\.3](getting-started.html#getting-help). The most common mistake when reading a
file is incorrectly specifying the path, so first check that you got the path
right.
If you ran the line of code and there was no error message, congratulations,
you’ve read your first data set into R!
1\.6 Data Frames
----------------
Now that we’ve loaded the data, let’s take a look at it. When you’re working
with a new data set, it’s usually not a good idea to print it out directly (by
typing `earn`, in this case) until you know how big it is. Big data sets can
take a long time to print, and the output can be difficult to read.
Instead, you can use the `head` function to print only the beginning, or
*head*, of a data set. Let’s take a peek:
```
head(earn)
```
```
## sex race ethnic_origin age year quarter n_persons
## 1 Both Sexes All Races All Origins 16 years and over 2010 1 96821000
## 2 Both Sexes All Races All Origins 16 years and over 2010 2 99798000
## 3 Both Sexes All Races All Origins 16 years and over 2010 3 101385000
## 4 Both Sexes All Races All Origins 16 years and over 2010 4 100120000
## 5 Both Sexes All Races All Origins 16 years and over 2011 1 98329000
## 6 Both Sexes All Races All Origins 16 years and over 2011 2 100593000
## median_weekly_earn
## 1 754
## 2 740
## 3 740
## 4 752
## 5 755
## 6 753
```
This data set is tabular—as you might have already guessed, since it came
from a CSV file. In R, it’s represented by a *data frame*, a table with rows
and columns. R uses data frames to represent most (but not all) kinds of
tabular data. The `read.csv` function, which we used to read this data, always
returns a data frame.
For a data frame, the `head` function only prints the first six rows. If there
are lots of columns or the columns are wide, as is the case here, R wraps the
output across lines.
When you first read an object into R, you might not know whether it’s a data
frame. One way to check is visually, by printing it, as we just did. A better
way to check is with the `class` function, which returns information about what
an object is. For a data frame, the result will always contain `data.frame`:
```
class(earn)
```
```
## [1] "data.frame"
```
We’ll learn more about classes in Section [2\.2](data-structures.html#data-types-classes), but for
now you can use this function to identify data frames.
By counting the columns in the output from `head(earn)`, we can see that this
data set has eight columns. A more convenient way to check the number of
columns in a data set is with the `ncol` function:
```
ncol(earn)
```
```
## [1] 8
```
The similarly\-named `nrow` function returns the number of rows:
```
nrow(earn)
```
```
## [1] 4224
```
Alternatively, you can get both numbers at the same time with the `dim` (short
for “dimensions”) function.
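For the earnings data, `dim` returns both dimensions at once:
```
dim(earn)
```
```
## [1] 4224    8
```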
Since the columns have names, we might also want to get just these. You can do
that with the `names` or `colnames` functions. Both return the same result:
```
names(earn)
```
```
## [1] "sex" "race" "ethnic_origin"
## [4] "age" "year" "quarter"
## [7] "n_persons" "median_weekly_earn"
```
```
colnames(earn)
```
```
## [1] "sex" "race" "ethnic_origin"
## [4] "age" "year" "quarter"
## [7] "n_persons" "median_weekly_earn"
```
If the rows have names, you can get those with the `rownames` function. For
this particular data set, the rows don’t have names.
### 1\.6\.1 Summarizing Data
An efficient way to get a sense of what’s actually in a data set is to have R
compute summary information. This works especially well for data frames, but
also applies to other data. R provides two different functions to get
summaries: `str` and `summary`.
The `str` function returns a *structural summary* of an object. This kind of
summary tells us about the structure of the data—the number of rows, the
number and names of columns, what kind of data is in each column, and some
sample values. Here’s the structural summary for the earnings data:
```
str(earn)
```
```
## 'data.frame': 4224 obs. of 8 variables:
## $ sex : chr "Both Sexes" "Both Sexes" "Both Sexes" "Both Sexes" ...
## $ race : chr "All Races" "All Races" "All Races" "All Races" ...
## $ ethnic_origin : chr "All Origins" "All Origins" "All Origins" "All Origins" ...
## $ age : chr "16 years and over" "16 years and over" "16 years and over" "16 years and over" ...
## $ year : int 2010 2010 2010 2010 2011 2011 2011 2011 2012 2012 ...
## $ quarter : int 1 2 3 4 1 2 3 4 1 2 ...
## $ n_persons : int 96821000 99798000 101385000 100120000 98329000 100593000 101447000 101458000 100830000 102769000 ...
## $ median_weekly_earn: int 754 740 740 752 755 753 753 764 769 771 ...
```
This summary lists information about each column, and includes most of what we
found earlier by using several different functions separately. The summary uses
`chr` to indicate columns of text (“characters”) and `int` to indicate columns
of integers.
In contrast to `str`, the `summary` function returns a *statistical summary* of
an object. This summary includes summary statistics for each column, choosing
appropriate statistics based on the kind of data in the column. For numbers,
this is generally the mean, median, and quantiles. For categories, this is the
frequencies. Other kinds of statistics are shown for other kinds of data.
Here’s the statistical summary for the earnings data:
```
summary(earn)
```
```
## sex race ethnic_origin age
## Length:4224 Length:4224 Length:4224 Length:4224
## Class :character Class :character Class :character Class :character
## Mode :character Mode :character Mode :character Mode :character
##
##
##
## year quarter n_persons median_weekly_earn
## Min. :2010 Min. :1.00 Min. : 103000 Min. : 318.0
## 1st Qu.:2012 1st Qu.:1.75 1st Qu.: 2614000 1st Qu.: 605.0
## Median :2015 Median :2.50 Median : 7441000 Median : 755.0
## Mean :2015 Mean :2.50 Mean : 16268338 Mean : 762.2
## 3rd Qu.:2018 3rd Qu.:3.25 3rd Qu.: 17555250 3rd Qu.: 911.0
## Max. :2020 Max. :4.00 Max. :118358000 Max. :1709.0
```
### 1\.6\.2 Selecting Columns
You can select an individual column from a data frame by name with `$`, the
dollar sign operator. The syntax is:
```
VARIABLE$COLUMN_NAME
```
For instance, for the earnings data, `earn$age` selects
the `age` column, and `earn$n_persons` selects the `n_persons` column. So one
way to compute the mean of the `n_persons` column is:
```
mean(earn$n_persons)
```
```
## [1] 16268338
```
Similarly, to compute the range of the `year` column:
```
range(earn$year)
```
```
## [1] 2010 2020
```
You can also use the dollar sign operator to assign values to columns. For
instance, to assign `0` to the entire `quarter` column:
```
earn$quarter = 0
```
Be careful when you do this, as there is no undo. Fortunately, we haven’t
applied any transformations to the earnings data yet, so we can reset the
`earn` variable back to what it was by reloading the data set:
```
earn = read.csv("data/earn.csv")
```
In Section [2\.4](data-structures.html#indexing), we’ll learn how to select rows and individual
elements from a data frame, as well as other ways to select columns.
1\.7 Exercises
--------------
### 1\.7\.1 Exercise
In a string, an *escape sequence* or *escape code* consists of a backslash
followed by one or more characters. Escape sequences make it possible to:
* Write quotes or backslashes within a string
* Write characters that don’t appear on your keyboard (for example, characters
in a foreign language)
For example, the escape sequence `\n` corresponds to the newline character.
There’s a complete list of escape sequences for R in the `?Quotes` help file.
Other programming languages also use escape sequences, and many of them are the
same as in R.
1. Assign a string that contains a newline to the variable `newline`. Then make
R display the value of the variable by entering `newline` at the R prompt.
2. The `message` function prints output to the R console, so it’s one way you
can make your R code report information as it runs. Use the `message`
function to print `newline`.
3. How does the output from part 1 compare to the output from part 2? Why do
you think they differ?
### 1\.7\.2 Exercise
1. Choose a directory on your computer that you’re familiar with, such as one
you created. Determine the path to the directory, then use `list.files` to
display its contents. Do the files displayed match what you see in your
system’s file browser?
2. What does the `all.files` parameter of `list.files` do? Give an example.
### 1\.7\.3 Exercise
The `read.table` function is another function for reading tabular data. Take a
look at the help file for `read.table`. Recall that `read.csv` reads tabular
data where the values are separated by commas, and `read.delim` reads tabular
data where the values are separated by tabs.
1. What value\-separator does `read.table` expect by default?
2. Is it possible to use `read.table` to read a CSV? Explain. If your answer is
yes, show how to use `read.table` to load the employee earnings data from
Section [1\.5\.1](getting-started.html#hello-data).
#### Learning Objectives
* Run code in the R console
* Call functions and create variables
* Check (in)equality of values
* Describe a file system, directory, and working directory
* Write paths to files or directories
* Get or set the R working directory
* Identify RDS, CSV, TSV files and functions for reading these
* Inspect the structure of a data frame
1\.1 Prerequisites
------------------
You can download R for free [here](https://cloud.r-project.org/), and can find
an install guide here.
In addition to R, you’ll need RStudio. RStudio is an *integrated development
environment* (IDE), which means it’s a comprehensive program for writing,
editing, searching, and running code. You can do all of these things without
RStudio, but RStudio makes the process easier. You can download RStudio Desktop
Open\-Source Edition for free
[here](https://www.rstudio.com/products/rstudio/download/), and can find an
install guide here.
1\.2 The R Interface
--------------------
The first time you open RStudio, you’ll see a window divided into several
panes.
Don’t worry if the text in the panes isn’t exactly the same on your computer;
it depends on your operating system and versions of R and RStudio. The console
pane, on the left, is the main interface to R. If you type R code into the
console and press the `Enter` key on your keyboard, R will run your code and
return the result.
On the right are the environment pane and the plots pane. The environment pane
shows data in your R workspace. The plots pane shows any plots you make, and
also has tabs to browse your file system and to view R’s built\-in help files.
We’ll learn more about these gradually, but to get started we’ll focus on the
console pane.
Let’s start by using R to do some arithmetic. In the console, you’ll see that
the cursor is on a line that begins with `>`, called the *prompt*. You can make
R compute the sum \\(2 \+ 2\\) by typing the code `2 + 2` after the prompt and then
pressing the `Enter` key. Your code and the result from R should look like
this:
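```
> 2 + 2
[1] 4
```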
R always puts the result on a separate line (or lines) from your code. In this
case, the result begins with the tag `[1]`, which is a hint from R that the
result is a *vector* and that this line starts with the *element* at position 1\.
We’ll learn more about vectors in Section [2\.1](data-structures.html#vectors), and eventually learn
about other data types that are displayed differently. The result of the sum,
`4`, is displayed after the tag. In this reader, results from R will usually be
typeset in monospace and further prefixed with `##` to indicate that they
aren’t code.
If you enter an incomplete expression, R will change the prompt to `+`, then
wait for you to type the rest of the expression and press the `Enter` key.
Here’s what it looks like if you only enter `2 +`:
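```
> 2 +
+
```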
You can finish entering the expression, or you can cancel it by pressing the
`Esc` key (or `Ctrl-c` if you’re using R without RStudio). R can only tell an
expression is incomplete if it’s missing something, like the second operand in
`2 +`. So if you mean to enter `2 + 2` but accidentally enter `2`, which is a
complete expression by itself, don’t expect R to read your mind and wait for
more input!
Try out some other arithmetic in the R console. Besides `+` for addition, the
other arithmetic operators are:
* `-` for subtraction
* `*` for multiplication
* `/` for division
* `%%` for remainder division (modulo)
* `^` or `**` for exponentiation
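For example, here’s how to compute a remainder and a power:
```
10 %% 3
```
```
## [1] 1
```
```
2^4
```
```
## [1] 16
```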
You can combine these and use parentheses to make more complicated expressions,
just as you would when writing a mathematical expression. When R computes a
result, it follows the standard order of operations: parentheses first, then
exponentiation, then multiplication and division, and finally addition and
subtraction.
For example, to estimate the area of a circle with radius 3, you can write:
```
3.14 * 3^2
```
```
## [1] 28.26
```
You can write R expressions with any number of spaces (including none) around
the operators and R will still compute the result. Nevertheless, putting spaces
in your code makes it easier for you and others to read, so it’s good to make
it a habit. Put spaces around most operators, after commas, and after keywords.
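For example, these two expressions compute the same result, but the first is
much easier to read:
```
3.14 * (1 + 2)^2
3.14*(1+2)^2
```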
### 1\.2\.1 Variables
Since R is designed for mathematics and statistics, you might expect that it
provides a better approximation for \\(\\pi\\) than `3.14`. R and most other
programming languages allow you to create named values, or *variables*. R
provides a built\-in variable called `pi` for the value of \\(\\pi\\). You can
display a variable’s value by entering its name in the console:
```
pi
```
```
## [1] 3.141593
```
You can also use variables in expressions. For instance, here’s a more precise
expression for the area of a circle with radius 3:
```
pi * 3^2
```
```
## [1] 28.27433
```
You can define your own variables with the assignment operator `=` or `<-`. In
most circumstances these two operators are interchangeable. For clarity, it’s
best to choose one you like and use it consistently in all of your R code. In
this reader, we use `=` for assignment because this is the assignment operator
in most programming languages.
The main reason to use variables is to save results so that you can use them
in other expressions later. For example, to save the area of the circle in a
variable called `area`, we can write:
```
area = pi * 3^2
```
In R, variable names can contain any combination of letters, numbers, dots `.`,
and underscores `_`, but must always start with a letter or a dot. Spaces and
other symbols are not allowed in variable names.
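For instance, these assignments both use valid names:
```
circle_area = pi * 3^2
area.of.circle = pi * 3^2
```
On the other hand, names like `2area` or `circle area` break these rules, so R
reports an error if you try to assign to them.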
Now we can use the `area` variable anywhere we want the computed area. Notice
that when you assign a result to a variable, R doesn’t automatically display
that result. If you want to see the result as well, you have to enter the
variable’s name as a separate expression:
```
area
```
```
## [1] 28.27433
```
Another reason to use variables is to make an expression more general. For
instance, you might want to compute the area of several circles with different
radii. Then the expression `pi * 3^2` is too specific. You can rewrite it as
`pi * r^2`, and then assign a value to the variable `r` just before you compute
the area. Here’s the code to compute and display the area of a circle with
radius 1 this way:
```
r = 1
area = pi * r^2
area
```
```
## [1] 3.141593
```
Now if you want to compute the area for a different radius, all you have to do
is change `r` and run the code again (R will not change `area` until you do
this). Writing code that’s general enough to reuse across multiple problems can
be a big time\-saver in the long run. Later on, we’ll see ways to make this code
even easier to reuse.
### 1\.2\.2 Strings
R treats anything inside single or double quotes as literal text rather than as
an expression to evaluate. In programming jargon, a piece of literal text is
called a *string*. You can use whichever kind of quotes you prefer, but the
quote at the beginning of the string must match the quote at the end.
```
'Hi'
```
```
## [1] "Hi"
```
```
"Hello!"
```
```
## [1] "Hello!"
```
Numbers and strings are not the same thing, so for example R considers `1`
different from `"1"`. As a result, you can’t use strings with most of R’s
arithmetic operators. For instance, this code causes an error:
```
"1" + 3
```
```
## Error in "1" + 3: non-numeric argument to binary operator
```
The error message notes that `+` is not defined for non\-numeric values.
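If you have a string of digits that you want to treat as a number, you can
convert it first with the built\-in `as.numeric` function:
```
as.numeric("1") + 3
```
```
## [1] 4
```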
### 1\.2\.3 Comparisons
Besides arithmetic, you can also use R to compare values. The comparison
operators are:
* `<` for “less than”
* `>` for “greater than”
* `<=` for “less than or equal to”
* `>=` for “greater than or equal to”
* `==` for “equal to”
* `!=` for “not equal to”
The “equal to” operator uses two equal signs so that R can distinguish it from
`=`, the assignment operator.
Let’s look at a few examples:
```
1.5 < 3
```
```
## [1] TRUE
```
```
"a" > "b"
```
```
## [1] FALSE
```
```
pi == 3.14
```
```
## [1] FALSE
```
```
"hi" == 'hi'
```
```
## [1] TRUE
```
When you make a comparison, R returns a *logical* value, `TRUE` or `FALSE`, to
indicate the result. Logical values are not the same as strings, so they are
not quoted.
Logical values are values, so you can use them in other computations. For
example:
```
TRUE
```
```
## [1] TRUE
```
```
TRUE == FALSE
```
```
## [1] FALSE
```
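In arithmetic, R treats `TRUE` as the number 1 and `FALSE` as the number 0, so
you can even add logical values together:
```
TRUE + TRUE + FALSE
```
```
## [1] 2
```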
Section [2\.4\.5](data-structures.html#logic) describes more ways to use and combine logical values.
Beware that the equality operators don’t always return `FALSE` when you compare
two different types of data:
```
"1" == 1
```
```
## [1] TRUE
```
```
"TRUE" <= TRUE
```
```
## [1] TRUE
```
```
"FALSE" <= TRUE
```
```
## [1] TRUE
```
Section [2\.2\.2](data-structures.html#implicit-coercion) explains why this happens, and Appendix
[5\.1](appendix.html#more-about-comparisons) explains several other ways to compare
values.
### 1\.2\.4 Calling Functions
Most of R’s features are provided through *functions*, pieces of reusable code.
You can think of a function as a machine that takes some inputs and uses them
to produce some output. In programming jargon, the inputs to a function are
called *arguments*, the output is called the *return value*, and when we use a
function, we say we’re *calling* the function.
To call a function, write its name followed by parentheses. Put any arguments
to the function inside the parentheses. For example, in R, the sine function is
named `sin` (there are also `cos` and `tan`). So we can compute the sine of
\\(\\pi / 4\\) with this code:
```
sin(pi / 4)
```
```
## [1] 0.7071068
```
There are many functions that accept more than one argument. For instance, the
`sum` function accepts any number of arguments and adds them all together. When
you call a function with multiple arguments, separate the arguments with
commas. So another way to compute \\(2 \+ 2\\) in R is:
```
sum(2, 2)
```
```
## [1] 4
```
When you call a function, R assigns each argument to a *parameter*. Parameters
are special variables that represent the inputs to a function and only exist
while that function runs. For example, the `log` function, which computes a
logarithm, has parameters `x` and `base` for the operand and base of the
logarithm, respectively. The next section, Section [1\.3](getting-started.html#getting-help), explains
how to look up the parameters for a function.
By default, R assigns arguments to parameters based on their order. The first
argument is assigned to the function’s first parameter, the second to the
second, and so on. So we can compute the logarithm of 64, base 2, with this
code:
```
log(64, 2)
```
```
## [1] 6
```
The argument 64 is assigned to the parameter `x`, and the argument 2 is
assigned to the parameter `base`. You can also assign arguments to parameters
by name with `=` (not `<-`), overriding their positions. So some other ways we
could write the call above are:
```
log(64, base = 2)
```
```
## [1] 6
```
```
log(x = 64, base = 2)
```
```
## [1] 6
```
```
log(base = 2, x = 64)
```
```
## [1] 6
```
```
log(base = 2, 64)
```
```
## [1] 6
```
All of these are equivalent. When you write code, choose whatever seems the
clearest to you. Leaving parameter names out of calls saves typing, but
including some or all of them can make the code easier to understand.
Parameters are not regular variables, and only exist while their associated
function runs. You can’t set them before a call, nor can you access them after
a call. So this code causes an error:
```
x = 64
log(base = 2)
```
```
## Error in eval(expr, envir, enclos): argument "x" is missing, with no default
```
In the error message, R says that we forgot to assign an argument to the
parameter `x`. We can keep the variable `x` and correct the call by making `x`
an argument (for the parameter `x`):
```
log(x, base = 2)
```
```
## [1] 6
```
Or, written more explicitly:
```
log(x = x, base = 2)
```
```
## [1] 6
```
In summary, variables and parameters are distinct, even if they happen to have
the same name. The variable `x` is not the same thing as the parameter `x`.
1\.3 Getting Help
-----------------
Learning and using a language is hard, so it’s important to know how to get
help. The first place to look for help is R’s built\-in documentation. In the
console, you can access a specific help page by name with `?` followed by the
name of the page.
There are help pages for all of R’s built\-in functions, usually with the same
name as the function itself. So the code to open the help page for the `log`
function is:
```
?log
```
For functions, help pages usually include a brief description, a list of
parameters, a description of the return value, and some examples.
There are also help pages for other topics, such as built\-in mathematical
constants (such as `?pi`), data sets (such as `?iris`), and operators. To look
up the help page for an operator, put the operator’s name in single or double
quotes. For example, this code opens the help page for the arithmetic
operators:
```
?"+"
```
It’s always okay to put quotes around the name of the page when you use `?`,
but they’re only required if it contains non\-alphabetic characters. So `?sqrt`,
`?'sqrt'`, and `?"sqrt"` all open the documentation for `sqrt`, the square root
function.
Sometimes you might not know the name of the help page you want to look up. You
can do a general search of R’s help pages with `??` followed by a string of
search terms. For example, to get a list of all help pages related to linear
models:
```
??"linear model"
```
This search function doesn’t always work well, and it’s often more efficient to
use an online search engine. When you search for help with R online, include
“R” as a search term. Alternatively, you can use [RSeek](https://rseek.org/), which
restricts the search to a selection of R\-related websites.
### 1\.3\.1 When Something Goes Wrong
As a programmer, sooner or later you’ll run some code and get an error message
or result you didn’t expect. Don’t panic! Even experienced programmers make
mistakes regularly, so learning how to diagnose and fix problems is vital.
Try going through these steps:
1. If R returned a warning or error message, read it! If you’re not sure what
the message means, try searching for it online.
2. Check your code for typographical errors, including incorrect capitalization
and missing or extra commas, quotes, and parentheses.
3. Test your code one line at a time, starting from the beginning. After each
line that assigns a variable, check that the value of the variable is what
you expect. Try to determine the exact line where the problem originates
(which may differ from the line that emits an error!).
If none of these steps help, try asking online. [Stack Overflow](https://stackoverflow.com/) is a
popular question and answer website for programmers. Before posting, make sure
to read about [how to ask a good question](https://stackoverflow.com/help/how-to-ask).
1\.4 File Systems
-----------------
Most of the time, you won’t just write code directly into the R console.
Reproducibility and reusability are important benefits of R over
point\-and\-click software, and in order to realize these, you have to save your
code to your computer’s hard drive. Let’s start by reviewing how files on a
computer work. You’ll need to understand that before you can save your code,
and it will also be important later on for loading data sets.
Your computer’s *file system* is a collection of *files* (chunks of data) and
*directories* (or “folders”) that organize those files. For instance, the file
system on a computer shared by [Ada](https://en.wikipedia.org/wiki/Ada_Lovelace) and [Charles](https://en.wikipedia.org/wiki/Charles_Babbage), two pioneers of
computing, might look like this:
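```
/
├── Programs/
└── Users/
    ├── ada/
    │   ├── analysis.R
    │   └── cats.csv
    └── charles/
        └── cool_hair_selfie.jpg
```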
Don’t worry if your file system looks a bit different from this sketch.
File systems have a tree\-like structure, with a top\-level directory called the
*root directory*. On Ada and Charles’ computer, the root is called `/`, which
is also what it’s called on all macOS and Linux computers. On Windows, the root
is usually called `C:/`, but sometimes other letters, like `D:/`, are also used
depending on the computer’s hardware.
A *path* is a list of directories that leads to a specific file or directory on
a file system (imagine giving directions to someone as they walk through the
file system). We use forward slashes `/` to separate the directories in a path,
rather than commas or spaces. The root directory includes a forward slash as
part of its name, and doesn’t need an extra one.
For example, suppose Ada wants to write a path to the file `cats.csv`. She can
write the path like this:
```
/Users/ada/cats.csv
```
You can read this path from left\-to\-right as, “Starting from the root
directory, go to the `Users` directory, then from there go to the `ada`
directory, and from there go to the file `cats.csv`.” Alternatively, you can
read the path from right\-to\-left as, “The file `cats.csv` inside of the `ada`
directory, which is inside of the `Users` directory, which is in the root
directory.”
As another example, suppose Charles wants a path to the `Programs` directory.
He can write:
```
/Programs/
```
The `/` at the end of this path is a reminder that `Programs` is a directory, not
a file. Charles could also write the path like this:
```
/Programs
```
This is still correct, but it’s not as obvious that `Programs` is a directory.
In other words, when a path leads to a directory, including a *trailing slash*
is optional, but makes the meaning of the path clearer. Paths that lead to
files never have a trailing slash.
On Windows computers, paths are usually written with backslashes `\` to
separate directories instead of forward slashes. Fortunately, R uses forward
slashes `/` for all paths, regardless of the operating system. So when you’re
working in R, use forward slashes and don’t worry about the operating system.
This is especially convenient when you want to share code with someone that
uses a different operating system than you.
### 1\.4\.1 Absolute \& Relative Paths
A path that starts from the root directory, like all of the ones we’ve seen so
far, is called an *absolute path*. The path is “absolute” because it
unambiguously describes where a file or directory is located. The downside is
that absolute paths usually don’t work well if you share your code.
For example, suppose Ada uses the path `/Programs/ada/cats.csv` to load the
`cats.csv` file in her code. If she shares her code with another pioneer of
computing, say [Gladys](https://en.wikipedia.org/wiki/Gladys_West), who also has a copy of `cats.csv`, it might
not work. Even though Gladys has the file, she might not have it in a directory
called `ada`, and might not even have a directory called `ada` on her computer.
Because Ada used an absolute path, her code works on her own computer, but
isn’t portable to others.
On the other hand, a *relative path* is one that doesn’t start from the root
directory. The path is “relative” to an unspecified starting point, which
usually depends on the context.
For instance, suppose Ada’s code is saved in the file `analysis.R` (more about
`.R` files in Section [1\.4\.2](getting-started.html#r-scripts)), which is in the same directory as
`cats.csv` on her computer. Then instead of an absolute path, she can use a
relative path in her code:
```
cats.csv
```
The context is the location of `analysis.R`, the file that contains the code.
In other words, the starting point on Ada’s computer is the `ada` directory. On
other computers, the starting point will be different, depending on where the
code is stored.
Now suppose Ada sends her corrected code in `analysis.R` to Gladys, and tells
Gladys to put it in the same directory as `cats.csv`. Since the path `cats.csv`
is relative, the code will still work on Gladys’ computer, as long as the two
files are in the same directory. The name of that directory and its location in
the file system don’t matter, and don’t have to be the same as on Ada’s
computer. Gladys can put the files in a directory `/Users/gladys/from_ada/` and
the path (and code) will still work.
Relative paths can include directories. For example, suppose that Charles wants
to write a relative path from the `Users` directory to a cool selfie he took.
Then he can write:
```
charles/cool_hair_selfie.jpg
```
You can read this path as, “Starting from wherever you are, go to the `charles`
directory, and from there go to the `cool_hair_selfie.jpg` file.” In other
words, the relative path depends on the context of the code or program that
uses it.
When you use paths in R code, they should almost always be relative paths. This
ensures that the code is portable to other computers, which is an important
aspect of reproducibility. Another benefit is that relative paths tend to be
shorter, making your code easier to read (and write).
When you write paths, there are three shortcuts you can use. These are most
useful in relative paths, but also work in absolute paths:
* `.` means the current directory.
* `..` means the directory above the current directory.
* `~` means the *home directory*. Each user has their own home directory, whose
location depends on the operating system and their username. Home directories
are typically found inside `C:/Users/` on Windows, `/Users/` on macOS, and
`/home/` on Linux.
As an example, suppose Ada wants to write a (relative) path from the `ada`
directory to Charles’ cool selfie. Using these shortcuts, she can write:
```
../charles/cool_hair_selfie.jpg
```
Read this as, “Starting from wherever you are, go up one directory, then go to
the `charles` directory, and then go to the `cool_hair_selfie.jpg` file.” Since
`/Users/ada` is Ada’s home directory, she could also write the path as:
```
~/../charles/cool_hair_selfie.jpg
```
This path has the same effect, but the meaning is slightly different. You can
read it as “Starting from your home directory, go up one directory, then go to
the `charles` directory, and then go to the `cool_hair_selfie.jpg` file.”
The `..` and `~` shortcuts are frequently used and worth remembering. The `.`
shortcut is included here in case you see it in someone else’s code. Since it
means the current directory, a path like `./cats.csv` is identical to
`cats.csv`, and the latter is preferable for being simpler. There are a few
specific situations where `.` is necessary, but they fall outside the scope of
this text.
### 1\.4\.2 R Scripts
Now that you know how file systems and paths work, you’re ready to learn how to
save your R code to a file. R code is usually saved into an *R script*
(extension `.R`) or an *R Markdown file* (extension `.Rmd`). R scripts are
slightly simpler, so let’s focus on those.
In RStudio, you can create a new R script with this menu option:
```
File -> New File -> R Script
```
This will open a new pane in RStudio: the scripts pane, which displays all of
the R scripts you’re editing. Each script appears in a separate tab. When you
create your first script, it will be the only tab open.
Editing a script is similar to editing any other text document. You can write,
delete, copy, cut, and paste text. You can also save the file to your
computer’s file system. When you do, pay attention to where you save the file,
as you might need it later.
The contents of an R script should be R code. Anything else you want to write
in the script (notes, documentation, etc.) should be in a *comment*. In R,
comments begin with `#` and extend to the end of the line:
```
# This is a comment.
```
R will ignore comments when you run your code.
When you start a new project, it’s a good idea to create a specific directory
for all of the project’s files. If you’re using R, you should also create one
or more R scripts in that directory. As you work, write your code directly into
a script. Arrange your code in the order of the steps to solve the problem,
even if you write some parts before others. Comment out or delete any lines of
code that you try but ultimately decide you don’t need. Make sure to save the
file periodically so that you don’t lose your work. Following these guidelines
will help you stay organized and make it easier to share your code with others
later.
While editing, you can run the current line in the R console by pressing
`Ctrl`\+`Enter` on Windows and Linux, or `Cmd`\+`Enter` on macOS. This way you
can test and correct your code as you write it.
If you want, you can instead run (or *source*) the entire R script, by calling
the `source` function with the path to the script as an argument. This is also
what the “Source on Save” check box refers to in RStudio. The code runs in
order, only stopping if an error occurs.
For instance, if you save the script as `my_cool_script.R`, then you can run
`source("my_cool_script.R")` in the console to run the entire script (pay
attention to the path—it may be different on your computer).
R Markdown files are an alternative format for storing R code. They provide a
richer set of formatting options, and are usually a better choice than R
scripts if you’re writing a report that contains code. You can learn more
about R Markdown files [here](https://rmarkdown.rstudio.com/).
### 1\.4\.3 The Working Directory
Section [1\.4\.1](getting-started.html#absolute-relative-paths) explained that relative paths have a
starting point that depends on the context where the path is used. We can make
that idea more concrete for R. The *working directory* is the starting point R
uses for relative paths. Think of the working directory as the directory R is
currently “at” or watching.
The function `getwd` returns the absolute path for the current working
directory, as a string. It doesn’t require any arguments:
```
getwd()
```
```
## [1] "/home/nick/workshop/datalab/workshops/r_basics"
```
On your computer, the output from `getwd` will likely be different. This is a
very useful function for getting your bearings when you write relative paths.
If you write a relative path and it doesn’t work as expected, the first thing
to do is check the working directory.
The related `setwd` function changes the working directory. It takes one
argument: a path to the new working directory. Here’s an example:
```
setwd("..")
# Now check the working directory.
getwd()
```
Generally, you should avoid using calls to `setwd` in your R scripts and R
Markdown files. Calling `setwd` makes your code more difficult to understand,
and can always be avoided by using appropriate relative paths. If you call
`setwd` with an absolute path, it also makes your code less portable to other
computers. It’s fine to use `setwd` interactively (in the R console), but avoid
making your saved code dependent on it.
Another function that’s useful for dealing with the working directory and file
system is `list.files`. The `list.files` function returns the names of all of
the files and directories inside of a directory. It accepts a path to a
directory as an argument, or assumes the working directory if you don’t pass a
path. For instance:
```
# List files and directories in /home/.
list.files("/home/")
```
```
## [1] "lost+found" "nick"
```
```
# List files and directories in the working directory.
list.files()
```
```
## [1] "_bookdown_files" "_bookdown.yml"
## [3] "_main.rds" "01_getting-started.Rmd"
## [5] "02_data-structures.Rmd" "03_exploring-data_files"
## [7] "03_exploring-data.Rmd" "04_automating-tasks.Rmd"
## [9] "05_appendix.Rmd" "97_where-to-learn-more.Rmd"
## [11] "98_acknowledgements.Rmd" "99_assessment.Rmd"
## [13] "assessment" "data"
## [15] "docs" "graphviz"
## [17] "img" "index.md"
## [19] "index.Rmd" "knit.R"
## [21] "LICENSE" "makefile"
## [23] "notes" "R"
## [25] "raw" "README.md"
## [27] "rendere73e1982629f.rds" "renv"
## [29] "renv.lock"
```
As usual, since you have a different computer, you’re likely to see different
output if you run this code. If you call `list.files` with an invalid path or
an empty directory, the output is `character(0)`:
```
list.files("/this/path/is/fake/")
```
```
## character(0)
```
Later on, we’ll learn about what `character(0)` means more generally.
1\.5 Reading Files
------------------
Analyzing data sets is one of the most common things to do in R. The first step
is to get R to read your data. Data sets come in a variety of file formats, and
you need to identify the format in order to tell R how to read the data.
Most of the time, you can guess the format of a file by looking at its
*extension*, the characters (usually three) after the last dot `.` in the
filename. For example, the extension `.jpg` or `.jpeg` indicates a [JPEG image
file](https://en.wikipedia.org/wiki/JPEG). Some operating systems hide extensions by default, but you can find
instructions to change this setting online by searching for “show file
extensions” and your operating system’s name. The extension is just part of the
file’s name, so it should be taken as a hint about the file’s format rather
than a guarantee.
R has built\-in functions for reading a variety of formats. The R community also
provides *packages*, shareable and reusable pieces of code, to read even more
formats. You’ll learn more about packages later, in Section [3\.2](exploring-data.html#packages).
For now, let’s focus on data sets that can be read with R’s built\-in functions.
Here are several formats that are frequently used to distribute data, along
with the name of a built\-in function or contributed package that can read the
format:
| Name | Extension | Function or Package | Tabular? | Text? |
| --- | --- | --- | --- | --- |
| Comma\-separated Values | `.csv` | `read.csv` | Yes | Yes |
| Tab\-separated Values | `.tsv` | `read.delim` | Yes | Yes |
| Fixed\-width File | `.fwf` | `read.fwf` | Yes | Yes |
| Microsoft Excel | `.xlsx` | **readxl** package | Yes | No |
| Microsoft Excel 1993\-2007 | `.xls` | **readxl** package | Yes | No |
| [Apache Arrow](https://arrow.apache.org/) | `.feather` | **arrow** package | Yes | No |
| R Data | `.rds` | `readRDS` | Sometimes | No |
| R Data | `.rda` | `load` | Sometimes | No |
| Plaintext | `.txt` | `readLines` | Sometimes | Yes |
| Extensible Markup Language | `.xml` | **xml2** package | No | Yes |
| JavaScript Object Notation | `.json` | **jsonlite** package | No | Yes |
A *tabular* data set is one that’s structured as a table, with rows and columns.
We’ll focus on tabular data sets for most of this reader, since they’re easier
to get started with. Here’s an example of a tabular data set:
| Fruit | Quantity | Price |
| --- | --- | --- |
| apple | 32 | 1\.49 |
| banana | 541 | 0\.79 |
| pear | 10 | 1\.99 |
A *text* file is one that contains human\-readable lines of text. You can check
this by opening the file with a text editor such as Microsoft Notepad or macOS
TextEdit. Many file formats use text in order to make the format easier to work
with.
For instance, a *comma\-separated values* (CSV) file records tabular data
using one line per row, with commas separating columns. If you store the table
above in a CSV file and open the file in a text editor, here’s what you’ll see:
```
Fruit,Quantity,Price
apple,32,1.49
banana,541,0.79
pear,10,1.99
```
A *binary* file is one that’s not human\-readable. You can’t just read off the
data if you open a binary file in a text editor, but they have a number of
other advantages. Compared to text files, binary files are often faster to read
and take up less storage space (that is, fewer bytes).
As an example, R’s built\-in binary format is called *RDS* (which may stand for
“R data serialized”). RDS files are extremely useful for backing up work, since
they can store any kind of R object, even ones that are not tabular. You can
learn more about how to create an RDS file on the `?saveRDS` help page, and how
to read one on the `?readRDS` help page.
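For example, here’s a minimal sketch of writing an object to an RDS file and
then reading it back (the file name `answer.rds` is just an example):
```
answer = 42
saveRDS(answer, "answer.rds")
readRDS("answer.rds")
```
```
## [1] 42
```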
### 1\.5\.1 Hello, Data!
Let’s read our first data set! Over the next few sections, we’re going to
explore data from the U.S. Bureau of Labor Statistics about median employee
earnings. The data set was prepared as part of the Tidy Tuesday R community
project. You can find more details about the data set [here](https://github.com/rfordatascience/tidytuesday/tree/master/data/2021/2021-02-23), and you can
download the data set [here](https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-02-23/earn.csv) (you may need to choose `File -> Save As...` in your browser’s menu).
The data set is a file called `earn.csv`, which suggests it’s a CSV file. In
this case, the extension is correct, so we can read the file into R with the
built\-in `read.csv` function. The first argument is the path to where you saved
the file, which may be different on your computer. The `read.csv` function
returns the data set, but R won’t keep the data in memory unless we assign the
returned result to a variable:
```
earn = read.csv("data/earn.csv")
```
The variable name `earn` here is arbitrary; you can choose something different
if you want. However, in general, it’s a good habit to choose variable names
that describe the contents of the variable somehow.
If you tried running the line of code above and got an error message, pay
attention to what the error message says, and remember the strategies to get
help in Section [1\.3](getting-started.html#getting-help). The most common mistake when reading a
file is incorrectly specifying the path, so first check that you got the path
right.
If you ran the line of code and there was no error message, congratulations,
you’ve read your first data set into R!
1\.6 Data Frames
----------------
Now that we’ve loaded the data, let’s take a look at it. When you’re working
with a new data set, it’s usually not a good idea to print it out directly (by
typing `earn`, in this case) until you know how big it is. Big data sets can
take a long time to print, and the output can be difficult to read.
Instead, you can use the `head` function to print only the beginning, or
*head*, of a data set. Let’s take a peek:
```
head(earn)
```
```
## sex race ethnic_origin age year quarter n_persons
## 1 Both Sexes All Races All Origins 16 years and over 2010 1 96821000
## 2 Both Sexes All Races All Origins 16 years and over 2010 2 99798000
## 3 Both Sexes All Races All Origins 16 years and over 2010 3 101385000
## 4 Both Sexes All Races All Origins 16 years and over 2010 4 100120000
## 5 Both Sexes All Races All Origins 16 years and over 2011 1 98329000
## 6 Both Sexes All Races All Origins 16 years and over 2011 2 100593000
## median_weekly_earn
## 1 754
## 2 740
## 3 740
## 4 752
## 5 755
## 6 753
```
This data set is tabular—as you might have already guessed, since it came
from a CSV file. In R, it’s represented by a *data frame*, a table with rows
and columns. R uses data frames to represent most (but not all) kinds of
tabular data. The `read.csv` function, which we used to read this data, always
returns a data frame.
For a data frame, the `head` function only prints the first six rows. If there
are lots of columns or the columns are wide, as is the case here, R wraps the
output across lines.
When you first read an object into R, you might not know whether it’s a data
frame. One way to check is visually, by printing it, as we just did. A better
way to check is with the `class` function, which returns information about what
an object is. For a data frame, the result will always contain `data.frame`:
```
class(earn)
```
```
## [1] "data.frame"
```
We’ll learn more about classes in Section [2\.2](data-structures.html#data-types-classes), but for
now you can use this function to identify data frames.
By counting the columns in the output from `head(earn)`, we can see that this
data set has eight columns. A more convenient way to check the number of
columns in a data set is with the `ncol` function:
```
ncol(earn)
```
```
## [1] 8
```
The similarly\-named `nrow` function returns the number of rows:
```
nrow(earn)
```
```
## [1] 4224
```
Alternatively, you can get both numbers at the same time with the `dim` (short
for “dimensions”) function.
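For instance, for the earnings data, `dim` returns the number of rows followed
by the number of columns:
```
dim(earn)
```
```
## [1] 4224    8
```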
Since the columns have names, we might also want to get just these. You can do
that with the `names` or `colnames` functions. Both return the same result:
```
names(earn)
```
```
## [1] "sex" "race" "ethnic_origin"
## [4] "age" "year" "quarter"
## [7] "n_persons" "median_weekly_earn"
```
```
colnames(earn)
```
```
## [1] "sex" "race" "ethnic_origin"
## [4] "age" "year" "quarter"
## [7] "n_persons" "median_weekly_earn"
```
If the rows have names, you can get those with the `rownames` function. For
this particular data set, the rows don’t have names.
### 1\.6\.1 Summarizing Data
An efficient way to get a sense of what’s actually in a data set is to have R
compute summary information. This works especially well for data frames, but
also applies to other data. R provides two different functions to get
summaries: `str` and `summary`.
The `str` function returns a *structural summary* of an object. This kind of
summary tells us about the structure of the data—the number of rows, the
number and names of columns, what kind of data is in each column, and some
sample values. Here’s the structural summary for the earnings data:
```
str(earn)
```
```
## 'data.frame': 4224 obs. of 8 variables:
## $ sex : chr "Both Sexes" "Both Sexes" "Both Sexes" "Both Sexes" ...
## $ race : chr "All Races" "All Races" "All Races" "All Races" ...
## $ ethnic_origin : chr "All Origins" "All Origins" "All Origins" "All Origins" ...
## $ age : chr "16 years and over" "16 years and over" "16 years and over" "16 years and over" ...
## $ year : int 2010 2010 2010 2010 2011 2011 2011 2011 2012 2012 ...
## $ quarter : int 1 2 3 4 1 2 3 4 1 2 ...
## $ n_persons : int 96821000 99798000 101385000 100120000 98329000 100593000 101447000 101458000 100830000 102769000 ...
## $ median_weekly_earn: int 754 740 740 752 755 753 753 764 769 771 ...
```
This summary lists information about each column, and includes most of what we
found earlier by using several different functions separately. The summary uses
`chr` to indicate columns of text (“characters”) and `int` to indicate columns
of integers.
In contrast to `str`, the `summary` function returns a *statistical summary* of
an object. This summary includes summary statistics for each column, choosing
appropriate statistics based on the kind of data in the column. For numbers,
this is generally the mean, median, and quantiles. For categories, this is the
frequencies. Other kinds of statistics are shown for other kinds of data.
Here’s the statistical summary for the earnings data:
```
summary(earn)
```
```
## sex race ethnic_origin age
## Length:4224 Length:4224 Length:4224 Length:4224
## Class :character Class :character Class :character Class :character
## Mode :character Mode :character Mode :character Mode :character
##
##
##
## year quarter n_persons median_weekly_earn
## Min. :2010 Min. :1.00 Min. : 103000 Min. : 318.0
## 1st Qu.:2012 1st Qu.:1.75 1st Qu.: 2614000 1st Qu.: 605.0
## Median :2015 Median :2.50 Median : 7441000 Median : 755.0
## Mean :2015 Mean :2.50 Mean : 16268338 Mean : 762.2
## 3rd Qu.:2018 3rd Qu.:3.25 3rd Qu.: 17555250 3rd Qu.: 911.0
## Max. :2020 Max. :4.00 Max. :118358000 Max. :1709.0
```
### 1\.6\.2 Selecting Columns
You can select an individual column from a data frame by name with `$`, the
dollar sign operator. The syntax is:
```
VARIABLE$COLUMN_NAME
```
For instance, for the earnings data, `earn$age` selects
the `age` column, and `earn$n_persons` selects the `n_persons` column. So one
way to compute the mean of the `n_persons` column is:
```
mean(earn$n_persons)
```
```
## [1] 16268338
```
Similarly, to compute the range of the `year` column:
```
range(earn$year)
```
```
## [1] 2010 2020
```
You can also use the dollar sign operator to assign values to columns. For
instance, to assign `0` to the entire `quarter` column:
```
earn$quarter = 0
```
Be careful when you do this, as there is no undo. Fortunately, we haven’t
applied any transformations to the earnings data yet, so we can reset the
`earn` variable back to what it was by reloading the data set:
```
earn = read.csv("data/earn.csv")
```
In Section [2\.4](data-structures.html#indexing), we’ll learn how to select rows and individual
elements from a data frame, as well as other ways to select columns.
1\.7 Exercises
--------------
### 1\.7\.1 Exercise
In a string, an *escape sequence* or *escape code* consists of a backslash
followed by one or more characters. Escape sequences make it possible to:
* Write quotes or backslashes within a string
* Write characters that don’t appear on your keyboard (for example, characters
in a foreign language)
For example, the escape sequence `\n` corresponds to the newline character.
There’s a complete list of escape sequences for R in the `?Quotes` help file.
Other programming languages also use escape sequences, and many of them are the
same as in R.
1. Assign a string that contains a newline to the variable `newline`. Then make
R display the value of the variable by entering `newline` at the R prompt.
2. The `message` function prints output to the R console, so it’s one way you
can make your R code report information as it runs. Use the `message`
function to print `newline`.
3. How does the output from part 1 compare to the output from part 2? Why do
you think they differ?
### 1\.7\.2 Exercise
1. Choose a directory on your computer that you’re familiar with, such as one
you created. Determine the path to the directory, then use `list.files` to
display its contents. Do the files displayed match what you see in your
system’s file browser?
2. What does the `all.files` parameter of `list.files` do? Give an example.
### 1\.7\.3 Exercise
The `read.table` function is another function for reading tabular data. Take a
look at the help file for `read.table`. Recall that `read.csv` reads tabular
data where the values are separated by commas, and `read.delim` reads tabular
data where the values are separated by tabs.
1. What value\-separator does `read.table` expect by default?
2. Is it possible to use `read.table` to read a CSV? Explain. If your answer is
yes, show how to use `read.table` to load the employee earnings data from
Section [1\.5\.1](getting-started.html#hello-data).
2 Data Structures
=================
The previous chapter introduced R and gave you enough background to do some
simple computations on data sets. This chapter focuses on the foundational
knowledge and skills you’ll need in order to use R effectively in the long
term. Specifically, it begins with a deep dive into R’s various data structures
and data types, then explains a variety of ways to get and set their elements.
#### Learning Objectives
* Create vectors, including sequences
* Identify whether a function is vectorized or not
* Check the type and class of an object
* Coerce an object to a different type
* Describe matrices and lists
* Describe and differentiate `NA`, `NaN`, `Inf`, `NULL`
* Identify, create, and relevel factors
* Index vectors with empty, integer, string, and logical arguments
* Negate or combine conditions with logic operators
2\.1 Vectors
------------
A *vector* is a collection of values. Vectors are the fundamental unit of data
in R, and you’ve already used them in the previous sections.
For instance, each column in a data frame is a vector. So the `quarter` column
in the earnings data from Section [1\.6](getting-started.html#data-frames) is a vector. Take a look
at it now. You can use `head` to avoid printing too much. Set the second
argument to `10` so that exactly 10 values are printed:
```
head(earn$quarter, 10)
```
```
## [1] 1 2 3 4 1 2 3 4 1 2
```
Like all vectors, this vector is *ordered*, which just means the values, or
*elements*, have specific positions. The value of the 1st element is `1`, the
2nd is `2`, the 5th is also `1`, and so on.
Notice that the elements of this vector are all integers. This isn’t just a
quirk of the earnings data set. In R, the elements of a vector must all be the
same type of data (we say the elements are *homogeneous*). A vector can contain
integers, decimal numbers, strings, or several other types of data, but not a
mix of these all at once.
The other columns in the earnings data frame are also vectors. For instance,
the `age` column is a vector of strings:
```
head(earn$age)
```
```
## [1] "16 years and over" "16 years and over" "16 years and over"
## [4] "16 years and over" "16 years and over" "16 years and over"
```
Vectors can contain any number of elements, including 0 or 1 element. Unlike
mathematics, R does not distinguish between vectors and *scalars* (solitary
values). So as far as R is concerned, a solitary value, like `3`, is a vector
with 1 element.
You can check the length of a vector (and other objects) with the `length`
function:
```
length(3)
```
```
## [1] 1
```
```
length("hello")
```
```
## [1] 1
```
```
length(earn$age)
```
```
## [1] 4224
```
Since the last of these is a column from the data frame `earn`, the length is
the same as the number of rows in `earn`.
### 2\.1\.1 Creating Vectors
Sometimes you’ll want to create your own vectors. You can do this by
concatenating several vectors together with the `c` function. It accepts any
number of vector arguments, and combines them into a single vector:
```
c(1, 2, 19, -3)
```
```
## [1] 1 2 19 -3
```
```
c("hi", "hello")
```
```
## [1] "hi" "hello"
```
```
c(1, 2, c(3, 4))
```
```
## [1] 1 2 3 4
```
If the arguments you pass to the `c` function have different data types, R will
attempt to convert them to a common data type that preserves the information:
```
c(1, "cool", 2.3)
```
```
## [1] "1" "cool" "2.3"
```
Section [2\.2\.2](data-structures.html#implicit-coercion) explains the rules for this conversion in more
detail.
The colon operator `:` creates vectors that contain sequences of integers. This
is useful for creating “toy” data to test things on, and later we’ll see that
it’s also important in several other contexts. Here are a few different
sequences:
```
1:3
```
```
## [1] 1 2 3
```
```
-3:5
```
```
## [1] -3 -2 -1 0 1 2 3 4 5
```
```
10:1
```
```
## [1] 10 9 8 7 6 5 4 3 2 1
```
Beware that both endpoints are included in the sequence, even in sequences like
`1:0`, and that the difference between elements is always `-1` or `1`. If you
want more control over the generated sequence, use the `seq` function instead.
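For example, `seq` lets you choose the step size (a quick sketch):
```
# Count from 1 to 10 by steps of 2:
seq(1, 10, by = 2)
```
```
## [1] 1 3 5 7 9
```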
### 2\.1\.2 Indexing Vectors
You can access individual elements of a vector with the *indexing operator* `[`
(also called the *square bracket operator*). The syntax is:
```
VECTOR[INDEXES]
```
Here `INDEXES` is a vector of positions of elements you want to get or set.
For example, let’s make a vector with 5 elements and get the 2nd element:
```
x = c(4, 8, 3, 2, 1)
x[2]
```
```
## [1] 8
```
Now let’s get the 3rd and 1st element:
```
x[c(3, 1)]
```
```
## [1] 3 4
```
You can use the indexing operator together with the assignment operator to
assign elements of a vector:
```
x[3] = 0
x
```
```
## [1] 4 8 0 2 1
```
Indexing is among the most frequently used operations in R, so take some time
to try it out with a few different vectors and indexes. We’ll revisit indexing in
Section [2\.4](data-structures.html#indexing) to learn a lot more about it.
### 2\.1\.3 Vectorization
Let’s look at what happens if we call a mathematical function, like `sin`, on a
vector:
```
x = c(1, 3, 0, pi)
sin(x)
```
```
## [1] 8.414710e-01 1.411200e-01 0.000000e+00 1.224647e-16
```
This gives us the same result as if we had called the function separately on
each element. That is, the result is the same as:
```
c(sin(1), sin(3), sin(0), sin(pi))
```
```
## [1] 8.414710e-01 1.411200e-01 0.000000e+00 1.224647e-16
```
Of course, the first version is much easier to type.
Functions that take a vector argument and get applied element\-by\-element like
this are said to be *vectorized*. Most functions in R are vectorized,
especially math functions. Some examples include `sin`, `cos`, `tan`, `log`,
`exp`, and `sqrt`.
Functions that are not vectorized tend to be ones that combine or aggregate
values in some way. For instance, the `sum`, `mean`, `median`, `length`, and
`class` functions are not vectorized.
A function can be vectorized across multiple arguments. This is easiest to
understand in terms of the arithmetic operators. Let’s see what happens if we
add two vectors together:
```
x = c(1, 2, 3, 4)
y = c(-1, 7, 10, -10)
x + y
```
```
## [1] 0 9 13 -6
```
The elements are paired up and added according to their positions. The other
arithmetic operators are also vectorized:
```
x - y
```
```
## [1] 2 -5 -7 14
```
```
x * y
```
```
## [1] -1 14 30 -40
```
```
x / y
```
```
## [1] -1.0000000 0.2857143 0.3000000 -0.4000000
```
### 2\.1\.4 Recycling
When a function is vectorized across multiple arguments, what happens if the
vectors have different lengths? Whenever you think of a question like this as
you’re learning R, the best way to find out is to create some toy data and test
it yourself. Let’s try that now:
```
x = c(1, 2, 3, 4)
y = c(-1, 1)
x + y
```
```
## [1] 0 3 2 5
```
The elements of the shorter vector are *recycled* to match the length of the
longer vector. That is, after the second element, the elements of `y` are
repeated to make a vector with the same length as `x` (because `x` is longer),
and then vectorized addition is carried out as usual.
Here’s what that looks like written down:
```
1 2 3 4
+ -1 1 -1 1
-----------
0 3 2 5
```
If the length of the longer vector is not a multiple of the length of the
shorter vector, R issues a warning, but still returns the result. The warning
is meant as a reminder, because unintended recycling is a common source of
bugs:
```
x = c(1, 2, 3, 4, 5)
y = c(-1, 1)
x + y
```
```
## Warning in x + y: longer object length is not a multiple of shorter object
## length
```
```
## [1] 0 3 2 5 4
```
Recycling might seem strange at first, but it’s convenient if you want to use a
specific value (or pattern of values) with a vector. For instance, suppose you
want to multiply all the elements of a vector by `2`. Recycling makes this
easy:
```
2 * c(1, 2, 3)
```
```
## [1] 2 4 6
```
When you use recycling, most of the time one of the arguments will be a scalar
like this.
2\.2 Data Types \& Classes
--------------------------
Data can be categorized into different *types* based on sets of shared
characteristics. For instance, statisticians tend to think about whether data
are numeric or categorical:
* numeric
+ continuous (real or complex numbers)
+ discrete (integers)
* categorical
+ nominal (categories with no ordering)
+ ordinal (categories with some ordering)
Of course, other types of data, like graphs (networks) and natural language
(books, speech, and so on), are also possible. Categorizing data this way is
useful for reasoning about which methods to apply to which data.
In R, data objects are categorized in two different ways:
1. The *class* of an R object describes what the object does, or the role that
it plays. Sometimes objects can do more than one thing, so objects can have
more than one class. The `class` function, which debuted in Section
[1\.6](getting-started.html#data-frames), returns the classes of its argument.
2. The *type* of an R object describes what the object is. Technically, the
type corresponds to how the object is stored in your computer’s memory. Each
object has exactly one type. The `typeof` function returns the type of its
argument.
Of the two, classes tend to be more important than types. If you aren’t sure
what an object is, checking its classes should be the first thing you do.
The built\-in classes you’ll use all the time correspond to vectors and lists
(which we’ll learn more about in Section [2\.2\.1](data-structures.html#lists)):
| Class | Example | Description |
| --- | --- | --- |
| logical | `TRUE`, `FALSE` | Logical (or Boolean) values |
| integer | `-1L`, `1L`, `2L` | Integer numbers |
| numeric | `-2.1`, `7`, `34.2` | Real numbers |
| complex | `3-2i`, `-8+0i` | Complex numbers |
| character | `"hi"`, `"YAY"` | Text strings |
| list | `list(TRUE, 1, "hi")` | Ordered collection of heterogeneous elements |
R doesn’t distinguish between scalars and vectors, so the class of a vector is
the same as the class of its elements:
```
class("hi")
```
```
## [1] "character"
```
```
class(c("hello", "hi"))
```
```
## [1] "character"
```
In addition, for most vectors, the class and the type are the same:
```
x = c(TRUE, FALSE)
class(x)
```
```
## [1] "logical"
```
```
typeof(x)
```
```
## [1] "logical"
```
The exception to this rule is numeric vectors, which have type `double` for
historical reasons:
```
class(pi)
```
```
## [1] "numeric"
```
```
typeof(pi)
```
```
## [1] "double"
```
```
typeof(3)
```
```
## [1] "double"
```
The word “double” here stands for [*double\-precision floating point
number*](https://en.wikipedia.org/wiki/Double-precision_floating-point_format), a standard way to represent real numbers on computers.
By default, R assumes any numbers you enter in code are numeric, even if
they’re integer\-valued.
The class `integer` also represents integer numbers, but it’s not used as often
as `numeric`. A few functions, such as the sequence operator `:` and the
`length` function, return integers. You can also force R to create an integer
by adding the suffix `L` to a number, but there are no major drawbacks to using
the `double` default:
```
class(1:3)
```
```
## [1] "integer"
```
```
class(3)
```
```
## [1] "numeric"
```
```
class(3L)
```
```
## [1] "integer"
```
Besides the classes for vectors and lists, there are several built\-in classes
that represent more sophisticated data structures:
| Class | Description |
| --- | --- |
| function | Functions |
| factor | Categorical values |
| matrix | Two\-dimensional ordered collection of homogeneous elements |
| array | Multi\-dimensional ordered collection of homogeneous elements |
| data.frame | Data frames |
For these, the class is usually different from the type. We’ll learn more about
most of these later on.
### 2\.2\.1 Lists
A *list* is an ordered data structure where the elements can have different
types (they are *heterogeneous*). This differs from a vector, where the
elements all have to have the same type, as we saw in Section [2\.1](data-structures.html#vectors).
The tradeoff is that most vectorized functions do not work with lists.
You can make an ordinary list with the `list` function:
```
x = list(1, c("hi", "bye"))
class(x)
```
```
## [1] "list"
```
```
typeof(x)
```
```
## [1] "list"
```
For ordinary lists, the type and the class are both `list`. In Section
[2\.4](data-structures.html#indexing), we’ll learn how to get and set list elements, and in later
sections we’ll learn more about when and why to use lists.
You’ve already seen one list, the earnings data frame:
```
class(earn)
```
```
## [1] "data.frame"
```
```
typeof(earn)
```
```
## [1] "list"
```
Under the hood, data frames are lists, and each column is a list element.
Because the class is `data.frame` rather than `list`, R treats data frames
differently from ordinary lists. This difference is apparent in how data frames
are printed compared to ordinary lists.
### 2\.2\.2 Implicit Coercion
R’s types fall into a natural hierarchy of expressiveness, from least to most
expressive: logical, integer, double, complex, character.
Each type on the right is more expressive than the ones to its left. That is,
with the convention that `FALSE` is `0` and `TRUE` is `1`, we can represent any
logical value as an integer. In turn, we can represent any integer as a double,
and any double as a complex number. By writing the number out, we can also
represent any complex number as a string.
The point is that no information is lost as we follow the arrows from left to
right along the types in the hierarchy. In fact, R will automatically and
silently convert from types on the left to types on the right as needed. This
is called *implicit coercion*.
As an example, consider what happens if we add a logical value to a number:
```
TRUE + 2
```
```
## [1] 3
```
R automatically converts the `TRUE` to the numeric value `1`, and then carries
out the arithmetic as usual.
We’ve already seen implicit coercion at work once before, when we learned the
`c` function. Since the elements of a vector all have to have the same type, if
you pass several different types to `c`, then R tries to use implicit coercion
to make them the same:
```
x = c(TRUE, "hi", 1, 1+3i)
class(x)
```
```
## [1] "character"
```
```
x
```
```
## [1] "TRUE" "hi" "1" "1+3i"
```
Implicit coercion is strictly one\-way; it never occurs in the other direction.
If you want to coerce a type on the right to one on the left, you can do it
explicitly with one of the `as.TYPE` functions. For instance, the `as.numeric`
(or `as.double`) function coerces to numeric:
```
as.numeric("3.1")
```
```
## [1] 3.1
```
There are a few types that fall outside of the hierarchy entirely, like
functions. Implicit coercion doesn’t apply to these. If you try to use these
types where it doesn’t make sense to, R generally returns an error:
```
sin + 3
```
```
## Error in sin + 3: non-numeric argument to binary operator
```
If you try to use these types as elements of a vector, you get back a list
instead:
```
x = c(1, 2, sum)
class(x)
```
```
## [1] "list"
```
Understanding how implicit coercion works will help you avoid bugs, and can
also be a time\-saver. For example, we can use implicit coercion to succinctly
count how many elements of a vector satisfy some condition:
```
x = c(1, 3, -1, 10, -2, 3, 8, 2)
condition = x < 4
sum(condition) # or sum(x < 4)
```
```
## [1] 6
```
If you still don’t quite understand how the code above works, try inspecting
each variable. In general, inspecting each step or variable is a good strategy
for understanding why a piece of code works (or doesn’t work!). Here the
implicit coercion happens in the third line.
### 2\.2\.3 Matrices \& Arrays
A *matrix* is the two\-dimensional analogue of a vector. The elements, which are
arranged into rows and columns, are ordered and homogeneous.
You can create a matrix from a vector with the `matrix` function. By default,
the columns are filled first:
```
# A matrix with 2 rows and 3 columns:
matrix(1:6, 2, 3)
```
```
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
The class of a matrix is always `matrix`, and the type matches the type of the
elements:
```
x = matrix(c("a", "b", NA, "c"), 2, 2)
x
```
```
## [,1] [,2]
## [1,] "a" NA
## [2,] "b" "c"
```
```
class(x)
```
```
## [1] "matrix" "array"
```
```
typeof(x)
```
```
## [1] "character"
```
You can use the matrix multiplication operator `%*%` to multiply two matrices
with compatible dimensions.
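For example, multiplying a matrix by an identity matrix (created with the
`diag` function) leaves the matrix unchanged:
```
A = matrix(1:6, 2, 3)
A %*% diag(3)
```
```
##      [,1] [,2] [,3]
## [1,]    1    3    5
## [2,]    2    4    6
```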
An *array* is a further generalization of matrices to higher dimensions. You
can create an array from a vector with the `array` function. The
characteristics of arrays are almost identical to matrices, but the class of an
array is always `array`.
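For instance, here's a 2\-by\-2\-by\-2 array, printed as two 2\-by\-2 slices:
```
array(1:8, c(2, 2, 2))
```
```
## , , 1
##
##      [,1] [,2]
## [1,]    1    3
## [2,]    2    4
##
## , , 2
##
##      [,1] [,2]
## [1,]    5    7
## [2,]    6    8
```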
### 2\.2\.4 Factors
A feature is *categorical* if it measures a qualitative category. For example,
the genres `rock`, `blues`, `alternative`, `folk`, `pop` are categories.
R uses the class `factor` to represent categorical data. Visualizations and
statistical models sometimes treat factors differently than other data types,
so it’s important to make sure you have the right data type. If you’re ever
unsure, remember that you can check the class of an object with the `class`
function.
When you load a data set, R usually can’t tell which features are categorical.
That means identifying and converting the categorical features is up to you.
For beginners, it can be difficult to understand whether a feature is
categorical or not. The key is to think about whether you want to use the
feature to divide the data into groups.
For example, if we want to know how many songs are in the `rock` genre, we
first need to divide the songs by genre, and then count the number of songs in
each group (or at least the `rock` group).
As a second example, months recorded as numbers can be categorical or not,
depending on how you want to use them. You might want to treat them as
categorical (for example, to compute max rainfall in each month) or you might
want to treat them as numbers (for example, to compute the number of months
between two events).
The bottom line is that you have to think about what you’ll be doing in the
analysis. In some cases, you might treat a feature as categorical only for part
of the analysis.
Let’s think about which features are categorical in earnings data set. To
refresh our memory of what’s in the data set, we can look at the structural
summary:
```
str(earn)
```
```
## 'data.frame': 4224 obs. of 8 variables:
## $ sex : chr "Both Sexes" "Both Sexes" "Both Sexes" "Both Sexes" ...
## $ race : chr "All Races" "All Races" "All Races" "All Races" ...
## $ ethnic_origin : chr "All Origins" "All Origins" "All Origins" "All Origins" ...
## $ age : chr "16 years and over" "16 years and over" "16 years and over" "16 years and over" ...
## $ year : int 2010 2010 2010 2010 2011 2011 2011 2011 2012 2012 ...
## $ quarter : int 1 2 3 4 1 2 3 4 1 2 ...
## $ n_persons : int 96821000 99798000 101385000 100120000 98329000 100593000 101447000 101458000 100830000 102769000 ...
## $ median_weekly_earn: int 754 740 740 752 755 753 753 764 769 771 ...
```
The columns `n_persons` and `median_weekly_earn` are quantitative rather than
categorical, since they measure quantities of people and dollars, respectively.
The `sex`, `race`, `ethnic_origin`, and `age` columns are all categorical,
since they are all qualitative measurements. We can see this better if we use
the `table` function to compute frequencies for the values in the columns:
```
table(earn$sex)
```
```
##
## Both Sexes Men Women
## 1408 1408 1408
```
```
table(earn$race)
```
```
##
## All Races Asian Black or African American
## 2244 660 660
## White
## 660
```
```
table(earn$ethnic_origin)
```
```
##
## All Origins Hispanic or Latino
## 3564 660
```
```
table(earn$age)
```
```
##
## 16 to 19 years 16 to 24 years 16 years and over 20 to 24 years
## 132 660 660 132
## 25 to 34 years 25 to 54 years 25 years and over 35 to 44 years
## 132 660 660 132
## 45 to 54 years 55 to 64 years 55 years and over 65 years and over
## 132 132 660 132
```
Each column has only a few unique values, repeated many times. These are ideal
for grouping the data. If age had been recorded as a number, rather than a
range, it would probably be better to treat it as quantitative, since there
would be far more unique values. Columns with many unique values don’t make
good categorical features, because each group will only have a few elements!
That leaves us with the `year` and `quarter` columns. It’s easy to imagine
grouping the data by year or quarter, but these are also clearly numbers. These
columns can be treated as quantitative or categorical data, depending on how we
want to use them to analyze the data.
Let’s convert the `age` column to a factor. To do this, use the `factor`
function:
```
age = factor(earn$age)
head(age)
```
```
## [1] 16 years and over 16 years and over 16 years and over 16 years and over
## [5] 16 years and over 16 years and over
## 12 Levels: 16 to 19 years 16 to 24 years 16 years and over ... 65 years and over
```
Notice that factors are printed differently than strings.
The categories of a factor are called *levels*. You can list the levels with
the `levels` function:
```
levels(age)
```
```
## [1] "16 to 19 years" "16 to 24 years" "16 years and over"
## [4] "20 to 24 years" "25 to 34 years" "25 to 54 years"
## [7] "25 years and over" "35 to 44 years" "45 to 54 years"
## [10] "55 to 64 years" "55 years and over" "65 years and over"
```
Factors remember all possible levels even if you take a subset:
```
age[1:3]
```
```
## [1] 16 years and over 16 years and over 16 years and over
## 12 Levels: 16 to 19 years 16 to 24 years 16 years and over ... 65 years and over
```
This is another way factors are different from strings. Factors “remember” all
possible levels even if they aren’t present. This ensures that if you plot a
factor, the missing levels will still be represented on the plot.
You can make a factor forget levels that aren’t present with the `droplevels`
function:
```
droplevels(age[1:3])
```
```
## [1] 16 years and over 16 years and over 16 years and over
## Levels: 16 years and over
```
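The levels of a new factor are sorted alphabetically by default. If you want a
different order (for instance, for plotting), you can relevel the factor by
passing the `levels` parameter to the `factor` function. Here's a small sketch
with a toy vector:
```
colors = factor(c("red", "green", "red", "blue"))
levels(colors)
```
```
## [1] "blue"  "green" "red"
```
```
# Reorder the levels so "red" comes first:
colors = factor(colors, levels = c("red", "green", "blue"))
levels(colors)
```
```
## [1] "red"   "green" "blue"
```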
2\.3 Special Values
-------------------
R has four *special* values to represent missing or invalid data.
### 2\.3\.1 Missing Values
The value `NA`, called the *missing value*, represents missing entries in a
data set. It’s implied that the entries are missing due to how the data was
collected, although there are exceptions. As an example, imagine the data came
from a survey, and respondents chose not to answer some questions. In the data
set, their answers for those questions can be recorded as `NA`.
The missing value is a chameleon: it can be a logical, integer, numeric,
complex, or character value. By default, the missing value is logical, and the
other types occur through coercion ([2\.2\.2](data-structures.html#implicit-coercion)):
```
class(NA)
```
```
## [1] "logical"
```
```
class(c(1, NA))
```
```
## [1] "numeric"
```
```
class(c("hi", NA, NA))
```
```
## [1] "character"
```
The missing value is also contagious: it represents an unknown quantity, so
using it as an argument to a function usually produces another missing value.
The idea is that if the inputs to a computation are unknown, generally so is
the output:
```
NA - 3
```
```
## [1] NA
```
```
mean(c(1, 2, NA))
```
```
## [1] NA
```
As a consequence, testing whether an object is equal to the missing value with
`==` doesn’t return a meaningful result:
```
5 == NA
```
```
## [1] NA
```
```
NA == NA
```
```
## [1] NA
```
You can use the `is.na` function instead:
```
is.na(5)
```
```
## [1] FALSE
```
```
is.na(NA)
```
```
## [1] TRUE
```
```
is.na(c(1, NA, 3))
```
```
## [1] FALSE TRUE FALSE
```
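Many functions that aggregate values, such as `mean` and `sum`, have an
`na.rm` parameter you can set to `TRUE` to ignore missing values:
```
mean(c(1, 2, NA), na.rm = TRUE)
```
```
## [1] 1.5
```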
Missing values are a feature that sets R apart from most other programming
languages.
### 2\.3\.2 Infinity
The value `Inf` represents infinity, and can be numeric or complex. You’re most
likely to encounter it as the result of certain computations:
```
13 / 0
```
```
## [1] Inf
```
```
class(Inf)
```
```
## [1] "numeric"
```
You can use the `is.infinite` function to test whether a value is infinite:
```
is.infinite(3)
```
```
## [1] FALSE
```
```
is.infinite(c(-Inf, 0, Inf))
```
```
## [1] TRUE FALSE TRUE
```
### 2\.3\.3 Not a Number
The value `NaN`, called *not a number*, represents a quantity that’s undefined
mathematically. For instance, dividing 0 by 0 is undefined:
```
0 / 0
```
```
## [1] NaN
```
```
class(NaN)
```
```
## [1] "numeric"
```
Like `Inf`, `NaN` can be numeric or complex.
You can use the `is.nan` function to test whether a value is `NaN`:
```
is.nan(c(10.1, log(-1), 3))
```
```
## Warning in log(-1): NaNs produced
```
```
## [1] FALSE TRUE FALSE
```
### 2\.3\.4 Null
The value `NULL` represents a quantity that’s undefined in R. Most of the time,
`NULL` indicates the absence of a result. For instance, vectors don’t have
dimensions, so the `dim` function returns `NULL` for vectors:
```
dim(c(1, 2))
```
```
## NULL
```
```
class(NULL)
```
```
## [1] "NULL"
```
```
typeof(NULL)
```
```
## [1] "NULL"
```
Unlike the other special values, `NULL` has its own unique type and class.
You can use the `is.null` function to test whether a value is `NULL`:
```
is.null("null")
```
```
## [1] FALSE
```
```
is.null(NULL)
```
```
## [1] TRUE
```
2\.4 Indexing
-------------
The way to get and set elements of a data structure is by *indexing*. Sometimes
this is also called *subsetting* or (element) *extraction*. Indexing is a
fundamental operation in R, key to reasoning about how to solve problems with
the language.
We first saw indexing in Section [1\.6](getting-started.html#data-frames), where we used `$`, the
dollar sign operator, to get and set data frame columns. We saw indexing again
in Section [2\.1\.2](data-structures.html#indexing-vectors), where we used `[`, the indexing or square
bracket operator, to get and set elements of vectors.
The indexing operator `[` is R’s primary operator for indexing. It works in
four different ways, depending on the type of the index you use. These four
ways to select elements are:
1. All elements, with no index
2. By position, with a numeric index
3. By name, with a character index
4. By condition, with a logical index
Let’s examine each in more detail. We’ll use this vector as an example, to keep
things concise:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
x
```
```
## a b c d e
## 10 20 30 40 50
```
Even though we’re using a vector here, the indexing operator works with almost
all data structures, including factors, lists, matrices, and data frames. We’ll
look at unique behavior for some of these later on.
### 2\.4\.1 All Elements
The first way to use `[` to select elements is to leave the index blank. This
selects all elements:
```
x[]
```
```
## a b c d e
## 10 20 30 40 50
```
This way of indexing is rarely used for getting elements, since it’s the same
as entering the variable name without the indexing operator. Instead, its main
use is for setting elements. Suppose we want to set all the elements of `x` to
`5`. You might try writing this:
```
x = 5
x
```
```
## [1] 5
```
Rather than setting each element to `5`, this sets `x` to the scalar `5`, which
is not what we want. Let’s reset the vector and try again, this time using the
indexing operator:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
x[] = 5
x
```
```
## a b c d e
## 5 5 5 5 5
```
As you can see, now all the elements are `5`. So the indexing operator is
necessary to specify that we want to set the elements rather than the whole
variable.
Let’s reset `x` one more time, so that we can use it again in the next example:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
```
### 2\.4\.2 By Position
The second way to use `[` is to select elements by position. This happens when
you use an integer or numeric index. We already saw the basics of this in
Section [2\.1\.2](data-structures.html#indexing-vectors).
The positions of the elements in a vector (or other data structure) correspond
to numbers starting from 1 for the first element. This way of indexing is
frequently used together with the sequence operator `:` to get ranges of
values. For instance, let’s get the 2nd through 4th elements of `x`:
```
x[2:4]
```
```
## b c d
## 20 30 40
```
You can also use this way of indexing to set specific elements or ranges of
elements. For example, let’s set the 3rd and 5th elements of `x` to `9` and
`7`, respectively:
```
x[c(3, 5)] = c(9, 7)
x
```
```
## a b c d e
## 10 20 9 40 7
```
When getting elements, you can repeat numbers in the index to get the same
element more than once. You can also use the order of the numbers to control
the order of the elements:
```
x[c(2, 1, 2, 2)]
```
```
## b a b b
## 20 10 20 20
```
Finally, if the index contains only negative numbers, the elements at those
positions are excluded rather than selected. For instance, let’s get all
elements except the 1st and 5th:
```
x[-c(1, 5)]
```
```
## b c d
## 20 9 40
```
When you index by position, the index should always be all positive or all
negative. Using a mix of positive and negative numbers causes R to emit an error
rather than returning elements, since it’s unclear what the result should be:
```
x[c(-1, 2)]
```
```
## Error in x[c(-1, 2)]: only 0's may be mixed with negative subscripts
```
### 2\.4\.3 By Name
The third way to use `[` is to select elements by name. This happens when you
use a character vector as the index, and only works with named data structures.
Like indexing by position, you can use indexing by name to get or set elements.
You can also use it to repeat elements or change the order. Let’s get elements
`a`, `c`, `d`, and `a` again from the vector `x`:
```
y = x[c("a", "c", "d", "a")]
y
```
```
## a c d a
## 10 9 40 10
```
Element names are generally unique, but if they’re not, indexing by name gets
or sets the first element whose name matches the index:
```
y["a"]
```
```
## a
## 10
```
Let’s reset `x` again to prepare for learning about the final way to index:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
```
### 2\.4\.4 By Condition
The fourth and final way to use `[` is to select elements based on a condition.
This happens when you use a logical vector as the index. The logical vector
should have the same length as what you’re indexing; if it doesn’t, it will be
recycled.
#### Congruent Vectors
To understand indexing by condition, we first need to learn about congruent
vectors. Two vectors are *congruent* if they have the same length and they
correspond element\-by\-element.
For example, suppose you do a survey that records each respondent’s favorite
animal and age. These are two different vectors of information, but each person
will have a response for both. So you’ll have two vectors that are the same
length:
```
animal = c("dog", "cat", "iguana")
age = c(31, 24, 72)
```
The 1st element of each vector corresponds to the 1st person, the 2nd to the
2nd person, and so on. These vectors are congruent.
Notice that columns in a data frame are always congruent!
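For example, since `animal` and `age` are congruent, a condition computed from
one can be used to index the other. Here's how to get the favorite animals of
respondents older than 30:
```
animal[age > 30]
```
```
## [1] "dog"    "iguana"
```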
#### Back to Indexing
When you index by condition, the index should generally be congruent to the
object you’re indexing. Elements where the index is `TRUE` are kept and
elements where the index is `FALSE` are dropped.
If you create the index from a condition on the object, it’s automatically
congruent. For instance, let’s make a condition based on the vector `x`:
```
is_small = x < 25
is_small
```
```
## a b c d e
## TRUE TRUE FALSE FALSE FALSE
```
The 1st element in the logical vector `is_small` corresponds to the 1st element
of `x`, the 2nd to the 2nd, and so on. The vectors `x` and `is_small` are
congruent.
It makes sense to use `is_small` as an index for `x`, and it gives us all the
elements less than `25`:
```
x[is_small]
```
```
## a b
## 10 20
```
Of course, you can also avoid using an intermediate variable for the condition:
```
x[x > 10]
```
```
## b c d e
## 20 30 40 50
```
If you create the index some other way (not from the object itself), make sure that it’s
still congruent to the object. Otherwise, the subset returned from indexing
might not be meaningful.
You can also use indexing by condition to set elements, just as the other ways
of indexing can be used to set elements. For instance, let’s set all the
elements of `x` that are greater than `10` to the missing value `NA`:
```
x[x > 10] = NA
x
```
```
## a b c d e
## 10 NA NA NA NA
```
### 2\.4\.5 Logic
All of the conditions we’ve seen so far have been written in terms of a single
test. If you want to use more sophisticated conditions, R provides operators to
negate and combine logical vectors. These operators are useful for working with
logical vectors even outside the context of indexing.
#### Negation
The *NOT operator* `!` converts `TRUE` to `FALSE` and `FALSE` to `TRUE`:
```
x = c(TRUE, FALSE, TRUE, TRUE, NA)
x
```
```
## [1] TRUE FALSE TRUE TRUE NA
```
```
!x
```
```
## [1] FALSE TRUE FALSE FALSE NA
```
You can use `!` with a condition:
```
y = c("hi", "hello")
!(y == "hi")
```
```
## [1] FALSE TRUE
```
The NOT operator is vectorized.
#### Combinations
R also has operators for combining logical values.
The *AND operator* `&` returns `TRUE` only when both arguments are `TRUE`. Here
are some examples:
```
FALSE & FALSE
```
```
## [1] FALSE
```
```
TRUE & FALSE
```
```
## [1] FALSE
```
```
FALSE & TRUE
```
```
## [1] FALSE
```
```
TRUE & TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE, TRUE) & c(TRUE, TRUE, FALSE)
```
```
## [1] TRUE FALSE FALSE
```
The *OR operator* `|` returns `TRUE` when at least one argument is `TRUE`.
Let’s see some examples:
```
FALSE | FALSE
```
```
## [1] FALSE
```
```
TRUE | FALSE
```
```
## [1] TRUE
```
```
FALSE | TRUE
```
```
## [1] TRUE
```
```
TRUE | TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE) | c(TRUE, TRUE)
```
```
## [1] TRUE TRUE
```
Be careful: everyday English is less precise than logic. You might say:
> I want all subjects with age over 50 and all subjects that like cats.
But in logic this means:
`(subject age over 50) OR (subject likes cats)`
So think carefully about whether you need both conditions to be true (AND) or
at least one (OR).
Rarely, you might want *exactly one* condition to be true. The *XOR (eXclusive
OR) function* `xor()` returns `TRUE` when exactly one argument is `TRUE`. For
example:
```
xor(FALSE, FALSE)
```
```
## [1] FALSE
```
```
xor(TRUE, FALSE)
```
```
## [1] TRUE
```
```
xor(TRUE, TRUE)
```
```
## [1] FALSE
```
The AND, OR, and XOR operators are vectorized.
#### Short\-circuiting
The second argument is irrelevant in some conditions:
* `FALSE &` is always `FALSE`
* `TRUE |` is always `TRUE`
Now imagine you have `FALSE & long_computation()`. You can save time by
skipping `long_computation()`. A *short\-circuit operator* does exactly that.
R has two short\-circuit operators:
* `&&` is a short\-circuited `&`
* `||` is a short\-circuited `|`
These operators only evaluate the second argument if it is necessary to
determine the result. Here are some examples:
```
TRUE && FALSE
```
```
## [1] FALSE
```
```
TRUE && TRUE
```
```
## [1] TRUE
```
```
TRUE || TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE) && c(TRUE, TRUE)
```
```
## Warning in c(TRUE, FALSE) && c(TRUE, TRUE): 'length(x) = 2 > 1' in coercion to
## 'logical(1)'
## Warning in c(TRUE, FALSE) && c(TRUE, TRUE): 'length(x) = 2 > 1' in coercion to
## 'logical(1)'
```
```
## [1] TRUE
```
For the final expression, notice R only combines the first element of each
vector. The others are ignored. In other words, the short\-circuit operators are
*not* vectorized! Because of this, generally you **should not use** the
short\-circuit operators for indexing. Their main use is in writing conditions
for if\-statements, which we’ll learn about later on.
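As a preview, here's a sketch of how a short\-circuit operator can guard a
condition that would otherwise cause an error (the variable `x` is
hypothetical):
```
x = NULL
# `!is.null(x)` is FALSE, so && returns FALSE without evaluating `x > 5`,
# which would raise an error for a zero-length x inside an if-statement:
if (!is.null(x) && x > 5) message("big") else message("x is NULL or small")
```
```
## x is NULL or small
```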
2\.5 Exercises
--------------
### 2\.5\.1 Exercise
The `rep` function is another way to create a vector. Read the help file for
the `rep` function.
1. What does the `rep` function do to create a vector? Give an example.
2. The `rep` function has parameters `times` and `each`. What does each do, and
how do they differ? Give examples for both.
3. Can you set both of `times` and `each` in a single call to `rep`? If the
function raises an error, explain what the error message means. If the
function returns a result, explain how the result corresponds to the
arguments you chose.
### 2\.5\.2 Exercise
Considering how implicit coercion works (Section [2\.2\.2](data-structures.html#implicit-coercion)):
1. Why does `"3" + 4` raise an error?
2. Why does `"TRUE" == TRUE` return `TRUE`?
3. Why does `"FALSE" < TRUE` return TRUE?
### 2\.5\.3 Exercise
1. Section [2\.3\.1](data-structures.html#missing-values) described the missing value as a “chameleon”
because it can have many different types. Is `Inf` also a chameleon? Use
examples to justify your answer.
2. The missing value is also “contagious” because using it as an argument
usually produces another missing value. Is `Inf` contagious? Again, use
examples to justify your answer.
### 2\.5\.4 Exercise
The `table` function is useful for counting all sorts of things, not just level
frequencies for a factor. For instance, you can use `table` to count how many
`TRUE` and `FALSE` values there are in a logical vector.
1. For the earnings data, how many rows had median weekly earnings below $750?
2. Based on how the data is structured, is your answer in part 1 the same as
the number of quarters that had median weekly earnings below $750? Explain.
#### Learning Objectives
* Create vectors, including sequences
* Identify whether a function is vectorized or not
* Check the type and class of an object
* Coerce an object to a different type
* Describe matrices and lists
* Describe and differentiate `NA`, `NaN`, `Inf`, `NULL`
* Identify, create, and relevel factors
* Index vectors with empty, integer, string, and logical arguments
* Negate or combine conditions with logic operators
2\.1 Vectors
------------
A *vector* is a collection of values. Vectors are the fundamental unit of data
in R, and you’ve already used them in the previous sections.
For instance, each column in a data frame is a vector. So the `quarter` column
in the earnings data from Section [1\.6](getting-started.html#data-frames) is a vector. Take a look
at it now. You can use `head` to avoid printing too much. Set the second
argument to `10` so that exactly 10 values are printed:
```
head(earn$quarter, 10)
```
```
## [1] 1 2 3 4 1 2 3 4 1 2
```
Like all vectors, this vector is *ordered*, which just means the values, or
*elements*, have specific positions. The value of the 1st element is `1`, the
2nd is `2`, the 5th is also `1`, and so on.
Notice that the elements of this vector are all integers. This isn’t just a
quirk of the earnings data set. In R, the elements of a vector must all be the
same type of data (we say the elements are *homogeneous*). A vector can contain
integers, decimal numbers, strings, or several other types of data, but not a
mix these all at once.
The other columns in the earnings data frame are also vectors. For instance,
the `age` column is a vector of strings:
```
head(earn$age)
```
```
## [1] "16 years and over" "16 years and over" "16 years and over"
## [4] "16 years and over" "16 years and over" "16 years and over"
```
Vectors can contain any number of elements, including 0 or 1 element. Unlike
mathematics, R does not distinguish between vectors and *scalars* (solitary
values). So as far as R is concerned, a solitary value, like `3`, is a vector
with 1 element.
You can check the length of a vector (and other objects) with the `length`
function:
```
length(3)
```
```
## [1] 1
```
```
length("hello")
```
```
## [1] 1
```
```
length(earn$age)
```
```
## [1] 4224
```
Since the last of these is a column from the data frame `earn`, the length is
the same as the number of rows in `earn`.
### 2\.1\.1 Creating Vectors
Sometimes you’ll want to create your own vectors. You can do this by
concatenating several vectors together with the `c` function. It accepts any
number of vector arguments, and combines them into a single vector:
```
c(1, 2, 19, -3)
```
```
## [1] 1 2 19 -3
```
```
c("hi", "hello")
```
```
## [1] "hi" "hello"
```
```
c(1, 2, c(3, 4))
```
```
## [1] 1 2 3 4
```
If the arguments you pass to the `c` function have different data types, R will
attempt to convert them to a common data type that preserves the information:
```
c(1, "cool", 2.3)
```
```
## [1] "1" "cool" "2.3"
```
Section [2\.2\.2](data-structures.html#implicit-coercion) explains the rules for this conversion in more
detail.
The colon operator `:` creates vectors that contain sequences of integers. This
is useful for creating “toy” data to test things on, and later we’ll see that
it’s also important in several other contexts. Here are a few different
sequences:
```
1:3
```
```
## [1] 1 2 3
```
```
-3:5
```
```
## [1] -3 -2 -1 0 1 2 3 4 5
```
```
10:1
```
```
## [1] 10 9 8 7 6 5 4 3 2 1
```
Beware that both endpoints are included in the sequence, even in sequences like
`1:0`, and that the difference between elements is always `-1` or `1`. If you
want more control over the generated sequence, use the `seq` function instead.
### 2\.1\.2 Indexing Vectors
You can access individual elements of a vector with the *indexing operator* `[`
(also called the *square bracket operator*). The syntax is:
```
VECTOR[INDEXES]
```
Here `INDEXES` is a vector of positions of elements you want to get or set.
For example, let’s make a vector with 5 elements and get the 2nd element:
```
x = c(4, 8, 3, 2, 1)
x[2]
```
```
## [1] 8
```
Now let’s get the 3rd and 1st element:
```
x[c(3, 1)]
```
```
## [1] 3 4
```
You can use the indexing operator together with the assignment operator to
assign elements of a vector:
```
x[3] = 0
x
```
```
## [1] 4 8 0 2 1
```
Indexing is among the most frequently used operations in R, so take some time
to try it out with few different vectors and indexes. We’ll revisit indexing in
Section [2\.4](data-structures.html#indexing) to learn a lot more about it.
### 2\.1\.3 Vectorization
Let’s look at what happens if we call a mathematical function, like `sin`, on a
vector:
```
x = c(1, 3, 0, pi)
sin(x)
```
```
## [1] 8.414710e-01 1.411200e-01 0.000000e+00 1.224647e-16
```
This gives us the same result as if we had called the function separately on
each element. That is, the result is the same as:
```
c(sin(1), sin(3), sin(0), sin(pi))
```
```
## [1] 8.414710e-01 1.411200e-01 0.000000e+00 1.224647e-16
```
Of course, the first version is much easier to type.
Functions that take a vector argument and get applied element\-by\-element like
this are said to be *vectorized*. Most functions in R are vectorized,
especially math functions. Some examples include `sin`, `cos`, `tan`, `log`,
`exp`, and `sqrt`.
Functions that are not vectorized tend to be ones that combine or aggregate
values in some way. For instance, the `sum`, `mean`, `median`, `length`, and
`class` functions are not vectorized.
A function can be vectorized across multiple arguments. This is easiest to
understand in terms of the arithmetic operators. Let’s see what happens if we
add two vectors together:
```
x = c(1, 2, 3, 4)
y = c(-1, 7, 10, -10)
x + y
```
```
## [1] 0 9 13 -6
```
The elements are paired up and added according to their positions. The other
arithmetic operators are also vectorized:
```
x - y
```
```
## [1] 2 -5 -7 14
```
```
x * y
```
```
## [1] -1 14 30 -40
```
```
x / y
```
```
## [1] -1.0000000 0.2857143 0.3000000 -0.4000000
```
### 2\.1\.4 Recycling
When a function is vectorized across multiple arguments, what happens if the
vectors have different lengths? Whenever you think of a question like this as
you’re learning R, the best way to find out is to create some toy data and test
it yourself. Let’s try that now:
```
x = c(1, 2, 3, 4)
y = c(-1, 1)
x + y
```
```
## [1] 0 3 2 5
```
The elements of the shorter vector are *recycled* to match the length of the
longer vector. That is, after the second element, the elements of `y` are
repeated to make a vector with the same length as `x` (because `x` is longer),
and then vectorized addition is carried out as usual.
Here’s what that looks like written down:
```
1 2 3 4
+ -1 1 -1 1
-----------
0 3 2 5
```
If the length of the longer vector is not a multiple of the length of the
shorter vector, R issues a warning but still returns the result. The warning
is meant as a reminder, because unintended recycling is a common source of
bugs:
```
x = c(1, 2, 3, 4, 5)
y = c(-1, 1)
x + y
```
```
## Warning in x + y: longer object length is not a multiple of shorter object
## length
```
```
## [1] 0 3 2 5 4
```
Recycling might seem strange at first, but it’s convenient if you want to use a
specific value (or pattern of values) with a vector. For instance, suppose you
want to multiply all the elements of a vector by `2`. Recycling makes this
easy:
```
2 * c(1, 2, 3)
```
```
## [1] 2 4 6
```
When you use recycling, most of the time one of the arguments will be a scalar
like this.
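Recycling also works with longer patterns, as long as the shorter length divides the longer one evenly. For example, to flip the sign of every other element:
```
c(10, 20, 30, 40) * c(1, -1)
```
```
## [1]  10 -20  30 -40
```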
2\.2 Data Types \& Classes
--------------------------
Data can be categorized into different *types* based on sets of shared
characteristics. For instance, statisticians tend to think about whether data
are numeric or categorical:
* numeric
+ continuous (real or complex numbers)
+ discrete (integers)
* categorical
+ nominal (categories with no ordering)
+ ordinal (categories with some ordering)
Of course, other types of data, like graphs (networks) and natural language
(books, speech, and so on), are also possible. Categorizing data this way is
useful for reasoning about which methods to apply to which data.
In R, data objects are categorized in two different ways:
1. The *class* of an R object describes what the object does, or the role that
it plays. Sometimes objects can do more than one thing, so objects can have
more than one class. The `class` function, which debuted in Section
[1\.6](getting-started.html#data-frames), returns the classes of its argument.
2. The *type* of an R object describes what the object is. Technically, the
type corresponds to how the object is stored in your computer’s memory. Each
object has exactly one type. The `typeof` function returns the type of its
argument.
Of the two, classes tend to be more important than types. If you aren’t sure
what an object is, checking its classes should be the first thing you do.
The built\-in classes you’ll use all the time correspond to vectors and lists
(which we’ll learn more about in Section [2\.2\.1](data-structures.html#lists)):
| Class | Example | Description |
| --- | --- | --- |
| logical | `TRUE`, `FALSE` | Logical (or Boolean) values |
| integer | `-1L`, `1L`, `2L` | Integer numbers |
| numeric | `-2.1`, `7`, `34.2` | Real numbers |
| complex | `3-2i`, `-8+0i` | Complex numbers |
| character | `"hi"`, `"YAY"` | Text strings |
| list | `list(TRUE, 1, "hi")` | Ordered collection of heterogeneous elements |
R doesn’t distinguish between scalars and vectors, so the class of a vector is
the same as the class of its elements:
```
class("hi")
```
```
## [1] "character"
```
```
class(c("hello", "hi"))
```
```
## [1] "character"
```
In addition, for most vectors, the class and the type are the same:
```
x = c(TRUE, FALSE)
class(x)
```
```
## [1] "logical"
```
```
typeof(x)
```
```
## [1] "logical"
```
The exception to this rule is numeric vectors, which have type `double` for
historical reasons:
```
class(pi)
```
```
## [1] "numeric"
```
```
typeof(pi)
```
```
## [1] "double"
```
```
typeof(3)
```
```
## [1] "double"
```
The word “double” here stands for [*double\-precision floating point
number*](https://en.wikipedia.org/wiki/Double-precision_floating-point_format), a standard way to represent real numbers on computers.
By default, R assumes any numbers you enter in code are numeric, even if
they’re integer\-valued.
The class `integer` also represents integer numbers, but it’s not used as often
as `numeric`. A few functions, such as the sequence operator `:` and the
`length` function, return integers. You can also force R to create an integer
by adding the suffix `L` to a number, but there are no major drawbacks to using
the `double` default:
```
class(1:3)
```
```
## [1] "integer"
```
```
class(3)
```
```
## [1] "numeric"
```
```
class(3L)
```
```
## [1] "integer"
```
Besides the classes for vectors and lists, there are several built\-in classes
that represent more sophisticated data structures:
| Class | Description |
| --- | --- |
| function | Functions |
| factor | Categorical values |
| matrix | Two\-dimensional ordered collection of homogeneous elements |
| array | Multi\-dimensional ordered collection of homogeneous elements |
| data.frame | Data frames |
For these, the class is usually different from the type. We’ll learn more about
most of these later on.
### 2\.2\.1 Lists
A *list* is an ordered data structure where the elements can have different
types (they are *heterogeneous*). This differs from a vector, where the
elements all have to have the same type, as we saw in Section [2\.1](data-structures.html#vectors).
The tradeoff is that most vectorized functions do not work with lists.
You can make an ordinary list with the `list` function:
```
x = list(1, c("hi", "bye"))
class(x)
```
```
## [1] "list"
```
```
typeof(x)
```
```
## [1] "list"
```
For ordinary lists, the type and the class are both `list`. In Section
[2\.4](data-structures.html#indexing), we’ll learn how to get and set list elements, and in later
sections we’ll learn more about when and why to use lists.
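As noted above, most vectorized functions don’t work with lists. For example, calling `sin` on the list `x` is an error:
```
sin(x)
```
```
## Error in sin(x): non-numeric argument to mathematical function
```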
You’ve already seen one list, the earnings data frame:
```
class(earn)
```
```
## [1] "data.frame"
```
```
typeof(earn)
```
```
## [1] "list"
```
Under the hood, data frames are lists, and each column is a list element.
Because the class is `data.frame` rather than `list`, R treats data frames
differently from ordinary lists. This difference is apparent in how data frames
are printed compared to ordinary lists.
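To see the difference, compare how the same two columns print as an ordinary list versus as a data frame:
```
list(x = c(1, 2), y = c("a", "b"))
```
```
## $x
## [1] 1 2
##
## $y
## [1] "a" "b"
```
```
data.frame(x = c(1, 2), y = c("a", "b"))
```
```
##   x y
## 1 1 a
## 2 2 b
```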
### 2\.2\.2 Implicit Coercion
R’s types fall into a natural hierarchy of expressiveness:
logical → integer → double → complex → character
Each type on the right is more expressive than the ones to its left. That is,
with the convention that `FALSE` is `0` and `TRUE` is `1`, we can represent any
logical value as an integer. In turn, we can represent any integer as a double,
and any double as a complex number. By writing the number out, we can also
represent any complex number as a string.
The point is that no information is lost as we follow the arrows from left to
right along the types in the hierarchy. In fact, R will automatically and
silently convert from types on the left to types on the right as needed. This
is called *implicit coercion*.
As an example, consider what happens if we add a logical value to a number:
```
TRUE + 2
```
```
## [1] 3
```
R automatically converts the `TRUE` to the numeric value `1`, and then carries
out the arithmetic as usual.
We’ve already seen implicit coercion at work once before, when we learned the
`c` function. Since the elements of a vector all have to have the same type, if
you pass several different types to `c`, then R tries to use implicit coercion
to make them the same:
```
x = c(TRUE, "hi", 1, 1+3i)
class(x)
```
```
## [1] "character"
```
```
x
```
```
## [1] "TRUE" "hi" "1" "1+3i"
```
Implicit coercion is strictly one\-way; it never occurs in the other direction.
If you want to coerce a type on the right to one on the left, you can do it
explicitly with one of the `as.TYPE` functions. For instance, the `as.numeric`
(or `as.double`) function coerces to numeric:
```
as.numeric("3.1")
```
```
## [1] 3.1
```
There are a few types that fall outside of the hierarchy entirely, like
functions. Implicit coercion doesn’t apply to these. If you try to use these
types where it doesn’t make sense to, R generally returns an error:
```
sin + 3
```
```
## Error in sin + 3: non-numeric argument to binary operator
```
If you try to use these types as elements of a vector, you get back a list
instead:
```
x = c(1, 2, sum)
class(x)
```
```
## [1] "list"
```
Understanding how implicit coercion works will help you avoid bugs, and can
also be a time\-saver. For example, we can use implicit coercion to succinctly
count how many elements of a vector satisfy some condition:
```
x = c(1, 3, -1, 10, -2, 3, 8, 2)
condition = x < 4
sum(condition) # or sum(x < 4)
```
```
## [1] 6
```
If you still don’t quite understand how the code above works, try inspecting
each variable. In general, inspecting each step or variable is a good strategy
for understanding why a piece of code works (or doesn’t work!). Here the
implicit coercion happens in the third line.
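For instance, printing `condition` shows the logical vector; `sum` then implicitly coerces each `TRUE` to `1` and each `FALSE` to `0` before adding:
```
condition
```
```
## [1]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE FALSE  TRUE
```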
### 2\.2\.3 Matrices \& Arrays
A *matrix* is the two\-dimensional analogue of a vector. The elements, which are
arranged into rows and columns, are ordered and homogeneous.
You can create a matrix from a vector with the `matrix` function. By default,
the columns are filled first:
```
# A matrix with 2 rows and 3 columns:
matrix(1:6, 2, 3)
```
```
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
```
The class of a matrix always includes `matrix` (and, since R 4.0, `array` as
well, as the output below shows), and the type matches the type of the
elements:
```
x = matrix(c("a", "b", NA, "c"), 2, 2)
x
```
```
## [,1] [,2]
## [1,] "a" NA
## [2,] "b" "c"
```
```
class(x)
```
```
## [1] "matrix" "array"
```
```
typeof(x)
```
```
## [1] "character"
```
You can use the matrix multiplication operator `%*%` to multiply two matrices
with compatible dimensions.
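For example, here’s a small matrix multiplied by itself (a quick sketch; `A` is just an illustrative name):
```
A = matrix(1:4, 2, 2)
A %*% A
```
```
##      [,1] [,2]
## [1,]    7   15
## [2,]   10   22
```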
An *array* is a further generalization of matrices to higher dimensions. You
can create an array from a vector with the `array` function. The
characteristics of arrays are almost identical to matrices, but the class of an
array is always `array`.
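For example, here’s a three\-dimensional array built from a vector:
```
x = array(1:8, dim = c(2, 2, 2))
class(x)
```
```
## [1] "array"
```
```
dim(x)
```
```
## [1] 2 2 2
```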
### 2\.2\.4 Factors
A feature is *categorical* if it measures a qualitative category. For example,
the genres `rock`, `blues`, `alternative`, `folk`, `pop` are categories.
R uses the class `factor` to represent categorical data. Visualizations and
statistical models sometimes treat factors differently than other data types,
so it’s important to make sure you have the right data type. If you’re ever
unsure, remember that you can check the class of an object with the `class`
function.
When you load a data set, R usually can’t tell which features are categorical.
That means identifying and converting the categorical features is up to you.
For beginners, it can be difficult to understand whether a feature is
categorical or not. The key is to think about whether you want to use the
feature to divide the data into groups.
For example, if we want to know how many songs are in the `rock` genre, we
first need to divide the songs by genre, and then count the number of songs in
each group (or at least the `rock` group).
As a second example, months recorded as numbers can be categorical or not,
depending on how you want to use them. You might want to treat them as
categorical (for example, to compute the maximum rainfall in each month) or you
might want to treat them as numbers (for example, to compute the number of
months between two events).
The bottom line is that you have to think about what you’ll be doing in the
analysis. In some cases, you might treat a feature as categorical only for part
of the analysis.
Let’s think about which features are categorical in the earnings data set. To
refresh our memory of what’s in the data set, we can look at the structural
summary:
```
str(earn)
```
```
## 'data.frame': 4224 obs. of 8 variables:
## $ sex : chr "Both Sexes" "Both Sexes" "Both Sexes" "Both Sexes" ...
## $ race : chr "All Races" "All Races" "All Races" "All Races" ...
## $ ethnic_origin : chr "All Origins" "All Origins" "All Origins" "All Origins" ...
## $ age : chr "16 years and over" "16 years and over" "16 years and over" "16 years and over" ...
## $ year : int 2010 2010 2010 2010 2011 2011 2011 2011 2012 2012 ...
## $ quarter : int 1 2 3 4 1 2 3 4 1 2 ...
## $ n_persons : int 96821000 99798000 101385000 100120000 98329000 100593000 101447000 101458000 100830000 102769000 ...
## $ median_weekly_earn: int 754 740 740 752 755 753 753 764 769 771 ...
```
The columns `n_persons` and `median_weekly_earn` are quantitative rather than
categorical, since they measure quantities of people and dollars, respectively.
The `sex`, `race`, `ethnic_origin`, and `age` columns are all categorical,
since they are all qualitative measurements. We can see this better if we use
the `table` function to compute frequencies for the values in the columns:
```
table(earn$sex)
```
```
##
## Both Sexes Men Women
## 1408 1408 1408
```
```
table(earn$race)
```
```
##
## All Races Asian Black or African American
## 2244 660 660
## White
## 660
```
```
table(earn$ethnic_origin)
```
```
##
## All Origins Hispanic or Latino
## 3564 660
```
```
table(earn$age)
```
```
##
## 16 to 19 years 16 to 24 years 16 years and over 20 to 24 years
## 132 660 660 132
## 25 to 34 years 25 to 54 years 25 years and over 35 to 44 years
## 132 660 660 132
## 45 to 54 years 55 to 64 years 55 years and over 65 years and over
## 132 132 660 132
```
Each column has only a few unique values, repeated many times. These are ideal
for grouping the data. If age had been recorded as a number, rather than a
range, it would probably be better to treat it as quantitative, since there
would be far more unique values. Columns with many unique values don’t make
good categorical features, because each group will only have a few elements!
That leaves us with the `year` and `quarter` columns. It’s easy to imagine
grouping the data by year or quarter, but these are also clearly numbers. These
columns can be treated as quantitative or categorical data, depending on how we
want to use them to analyze the data.
Let’s convert the `age` column to a factor. To do this, use the `factor`
function:
```
age = factor(earn$age)
head(age)
```
```
## [1] 16 years and over 16 years and over 16 years and over 16 years and over
## [5] 16 years and over 16 years and over
## 12 Levels: 16 to 19 years 16 to 24 years 16 years and over ... 65 years and over
```
Notice that factors are printed differently than strings.
The categories of a factor are called *levels*. You can list the levels with
the `levels` function:
```
levels(age)
```
```
## [1] "16 to 19 years" "16 to 24 years" "16 years and over"
## [4] "20 to 24 years" "25 to 34 years" "25 to 54 years"
## [7] "25 years and over" "35 to 44 years" "45 to 54 years"
## [10] "55 to 64 years" "55 years and over" "65 years and over"
```
Factors remember all possible levels even if you take a subset:
```
age[1:3]
```
```
## [1] 16 years and over 16 years and over 16 years and over
## 12 Levels: 16 to 19 years 16 to 24 years 16 years and over ... 65 years and over
```
This is another way factors are different from strings. Factors “remember” all
possible levels even if they aren’t present. This ensures that if you plot a
factor, the missing levels will still be represented on the plot.
You can make a factor forget levels that aren’t present with the `droplevels`
function:
```
droplevels(age[1:3])
```
```
## [1] 16 years and over 16 years and over 16 years and over
## Levels: 16 years and over
```
2\.3 Special Values
-------------------
R has four *special* values to represent missing or invalid data.
### 2\.3\.1 Missing Values
The value `NA`, called the *missing value*, represents missing entries in a
data set. It’s implied that the entries are missing due to how the data was
collected, although there are exceptions. As an example, imagine the data came
from a survey, and respondents chose not to answer some questions. In the data
set, their answers for those questions can be recorded as `NA`.
The missing value is a chameleon: it can be a logical, integer, numeric,
complex, or character value. By default, the missing value is logical, and the
other types occur through coercion ([2\.2\.2](data-structures.html#implicit-coercion)):
```
class(NA)
```
```
## [1] "logical"
```
```
class(c(1, NA))
```
```
## [1] "numeric"
```
```
class(c("hi", NA, NA))
```
```
## [1] "character"
```
The missing value is also contagious: it represents an unknown quantity, so
using it as an argument to a function usually produces another missing value.
The idea is that if the inputs to a computation are unknown, generally so is
the output:
```
NA - 3
```
```
## [1] NA
```
```
mean(c(1, 2, NA))
```
```
## [1] NA
```
As a consequence, testing whether an object is equal to the missing value with
`==` doesn’t return a meaningful result:
```
5 == NA
```
```
## [1] NA
```
```
NA == NA
```
```
## [1] NA
```
You can use the `is.na` function instead:
```
is.na(5)
```
```
## [1] FALSE
```
```
is.na(NA)
```
```
## [1] TRUE
```
```
is.na(c(1, NA, 3))
```
```
## [1] FALSE TRUE FALSE
```
Missing values are a feature that sets R apart from most other programming
languages.
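When the contagious behavior isn’t what you want, many summary functions, such as `mean` and `sum`, accept an `na.rm` argument that tells them to ignore missing values:
```
mean(c(1, 2, NA), na.rm = TRUE)
```
```
## [1] 1.5
```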
### 2\.3\.2 Infinity
The value `Inf` represents infinity, and can be numeric or complex. You’re most
likely to encounter it as the result of certain computations:
```
13 / 0
```
```
## [1] Inf
```
```
class(Inf)
```
```
## [1] "numeric"
```
You can use the `is.infinite` function to test whether a value is infinite:
```
is.infinite(3)
```
```
## [1] FALSE
```
```
is.infinite(c(-Inf, 0, Inf))
```
```
## [1] TRUE FALSE TRUE
```
### 2\.3\.3 Not a Number
The value `NaN`, called *not a number*, represents a quantity that’s undefined
mathematically. For instance, dividing 0 by 0 is undefined:
```
0 / 0
```
```
## [1] NaN
```
```
class(NaN)
```
```
## [1] "numeric"
```
Like `Inf`, `NaN` can be numeric or complex.
You can use the `is.nan` function to test whether a value is `NaN`:
```
is.nan(c(10.1, log(-1), 3))
```
```
## Warning in log(-1): NaNs produced
```
```
## [1] FALSE TRUE FALSE
```
### 2\.3\.4 Null
The value `NULL` represents a quantity that’s undefined in R. Most of the time,
`NULL` indicates the absence of a result. For instance, vectors don’t have
dimensions, so the `dim` function returns `NULL` for vectors:
```
dim(c(1, 2))
```
```
## NULL
```
```
class(NULL)
```
```
## [1] "NULL"
```
```
typeof(NULL)
```
```
## [1] "NULL"
```
Unlike the other special values, `NULL` has its own unique type and class.
You can use the `is.null` function to test whether a value is `NULL`:
```
is.null("null")
```
```
## [1] FALSE
```
```
is.null(NULL)
```
```
## [1] TRUE
```
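One practical consequence of `NULL` meaning “absent”: assigning `NULL` to a list element removes that element entirely. A quick sketch:
```
x = list(a = 1, b = 2)
x$a = NULL
names(x)
```
```
## [1] "b"
```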
2\.4 Indexing
-------------
The way to get and set elements of a data structure is by *indexing*. Sometimes
this is also called *subsetting* or (element) *extraction*. Indexing is a
fundamental operation in R, key to reasoning about how to solve problems with
the language.
We first saw indexing in Section [1\.6](getting-started.html#data-frames), where we used `$`, the
dollar sign operator, to get and set data frame columns. We saw indexing again
in Section [2\.1\.2](data-structures.html#indexing-vectors), where we used `[`, the indexing or square
bracket operator, to get and set elements of vectors.
The indexing operator `[` is R’s primary operator for indexing. It works in
four different ways, depending on the type of the index you use. These four
ways to select elements are:
1. All elements, with no index
2. By position, with a numeric index
3. By name, with a character index
4. By condition, with a logical index
Let’s examine each in more detail. We’ll use this vector as an example, to keep
things concise:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
x
```
```
## a b c d e
## 10 20 30 40 50
```
Even though we’re using a vector here, the indexing operator works with almost
all data structures, including factors, lists, matrices, and data frames. We’ll
look at unique behavior for some of these later on.
### 2\.4\.1 All Elements
The first way to use `[` to select elements is to leave the index blank. This
selects all elements:
```
x[]
```
```
## a b c d e
## 10 20 30 40 50
```
This way of indexing is rarely used for getting elements, since it’s the same
as entering the variable name without the indexing operator. Instead, its main
use is for setting elements. Suppose we want to set all the elements of `x` to
`5`. You might try writing this:
```
x = 5
x
```
```
## [1] 5
```
Rather than setting each element to `5`, this sets `x` to the scalar `5`, which
is not what we want. Let’s reset the vector and try again, this time using the
indexing operator:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
x[] = 5
x
```
```
## a b c d e
## 5 5 5 5 5
```
As you can see, now all the elements are `5`. So the indexing operator is
necessary to specify that we want to set the elements rather than the whole
variable.
Let’s reset `x` one more time, so that we can use it again in the next example:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
```
### 2\.4\.2 By Position
The second way to use `[` is to select elements by position. This happens when
you use an integer or numeric index. We already saw the basics of this in
Section [2\.1\.2](data-structures.html#indexing-vectors).
The positions of the elements in a vector (or other data structure) correspond
to numbers starting from 1 for the first element. This way of indexing is
frequently used together with the sequence operator `:` to get ranges of
values. For instance, let’s get the 2nd through 4th elements of `x`:
```
x[2:4]
```
```
## b c d
## 20 30 40
```
You can also use this way of indexing to set specific elements or ranges of
elements. For example, let’s set the 3rd and 5th elements of `x` to `9` and
`7`, respectively:
```
x[c(3, 5)] = c(9, 7)
x
```
```
## a b c d e
## 10 20 9 40 7
```
When getting elements, you can repeat numbers in the index to get the same
element more than once. You can also use the order of the numbers to control
the order of the elements:
```
x[c(2, 1, 2, 2)]
```
```
## b a b b
## 20 10 20 20
```
Finally, if the index contains only negative numbers, the elements at those
positions are excluded rather than selected. For instance, let’s get all
elements except the 1st and 5th:
```
x[-c(1, 5)]
```
```
## b c d
## 20 9 40
```
When you index by position, the index should always be all positive or all
negative. Using a mix of positive and negative numbers causes R to emit an
error rather than returning elements, since it’s unclear what the result should
be:
```
x[c(-1, 2)]
```
```
## Error in x[c(-1, 2)]: only 0's may be mixed with negative subscripts
```
### 2\.4\.3 By Name
The third way to use `[` is to select elements by name. This happens when you
use a character vector as the index, and only works with named data structures.
Like indexing by position, you can use indexing by name to get or set elements.
You can also use it to repeat elements or change the order. Let’s get elements
`a`, `c`, `d`, and `a` again from the vector `x`:
```
y = x[c("a", "c", "d", "a")]
y
```
```
## a c d a
## 10 9 40 10
```
Element names are generally unique, but if they’re not, indexing by name gets
or sets the first element whose name matches the index:
```
y["a"]
```
```
## a
## 10
```
Let’s reset `x` again to prepare for learning about the final way to index:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
```
### 2\.4\.4 By Condition
The fourth and final way to use `[` is to select elements based on a condition.
This happens when you use a logical vector as the index. The logical vector
should have the same length as the object you’re indexing; if it doesn’t, it
will be recycled.
#### Congruent Vectors
To understand indexing by condition, we first need to learn about congruent
vectors. Two vectors are *congruent* if they have the same length and they
correspond element\-by\-element.
For example, suppose you do a survey that records each respondent’s favorite
animal and age. These are two different vectors of information, but each person
will have a response for both. So you’ll have two vectors that are the same
length:
```
animal = c("dog", "cat", "iguana")
age = c(31, 24, 72)
```
The 1st element of each vector corresponds to the 1st person, the 2nd to the
2nd person, and so on. These vectors are congruent.
Notice that columns in a data frame are always congruent!
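Congruence is also what lets you index one vector with a condition computed from another. For example, to get the favorite animals of all respondents older than 30:
```
animal[age > 30]
```
```
## [1] "dog"    "iguana"
```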
#### Back to Indexing
When you index by condition, the index should generally be congruent to the
object you’re indexing. Elements where the index is `TRUE` are kept and
elements where the index is `FALSE` are dropped.
If you create the index from a condition on the object, it’s automatically
congruent. For instance, let’s make a condition based on the vector `x`:
```
is_small = x < 25
is_small
```
```
## a b c d e
## TRUE TRUE FALSE FALSE FALSE
```
The 1st element in the logical vector `is_small` corresponds to the 1st element
of `x`, the 2nd to the 2nd, and so on. The vectors `x` and `is_small` are
congruent.
It makes sense to use `is_small` as an index for `x`, and it gives us all the
elements less than `25`:
```
x[is_small]
```
```
## a b
## 10 20
```
Of course, you can also avoid using an intermediate variable for the condition:
```
x[x > 10]
```
```
## b c d e
## 20 30 40 50
```
If you create the index some other way (not using the object), make sure that
it’s still congruent to the object. Otherwise, the subset returned from
indexing might not be meaningful.
You can also use indexing by condition to set elements, just as the other ways
of indexing can be used to set elements. For instance, let’s set all the
elements of `x` that are greater than `10` to the missing value `NA`:
```
x[x > 10] = NA
x
```
```
## a b c d e
## 10 NA NA NA NA
```
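A common use of this pattern is filling in missing values. Continuing with `x`, which now contains `NA`s, we can set them all to `0`:
```
x[is.na(x)] = 0
x
```
```
##  a  b  c  d  e
## 10  0  0  0  0
```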
### 2\.4\.5 Logic
All of the conditions we’ve seen so far have been written in terms of a single
test. If you want to use more sophisticated conditions, R provides operators to
negate and combine logical vectors. These operators are useful for working with
logical vectors even outside the context of indexing.
#### Negation
The *NOT operator* `!` converts `TRUE` to `FALSE` and `FALSE` to `TRUE`:
```
x = c(TRUE, FALSE, TRUE, TRUE, NA)
x
```
```
## [1] TRUE FALSE TRUE TRUE NA
```
```
!x
```
```
## [1] FALSE TRUE FALSE FALSE NA
```
You can use `!` with a condition:
```
y = c("hi", "hello")
!(y == "hi")
```
```
## [1] FALSE TRUE
```
The NOT operator is vectorized.
#### Combinations
R also has operators for combining logical values.
The *AND operator* `&` returns `TRUE` only when both arguments are `TRUE`. Here
are some examples:
```
FALSE & FALSE
```
```
## [1] FALSE
```
```
TRUE & FALSE
```
```
## [1] FALSE
```
```
FALSE & TRUE
```
```
## [1] FALSE
```
```
TRUE & TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE, TRUE) & c(TRUE, TRUE, FALSE)
```
```
## [1] TRUE FALSE FALSE
```
The *OR operator* `|` returns `TRUE` when at least one argument is `TRUE`.
Let’s see some examples:
```
FALSE | FALSE
```
```
## [1] FALSE
```
```
TRUE | FALSE
```
```
## [1] TRUE
```
```
FALSE | TRUE
```
```
## [1] TRUE
```
```
TRUE | TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE) | c(TRUE, TRUE)
```
```
## [1] TRUE TRUE
```
Be careful: everyday English is less precise than logic. You might say:
> I want all subjects with age over 50 and all subjects that like cats.
But in logic this means:
`(subject age over 50) OR (subject likes cats)`
So think carefully about whether you need both conditions to be true (AND) or
at least one (OR).
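For example, with a hypothetical pair of congruent survey vectors `age` and `likes_cats` (both made up for illustration):
```
age = c(31, 55, 72)
likes_cats = c(TRUE, FALSE, TRUE)
# Keep only subjects where both conditions hold:
age > 50 & likes_cats
```
```
## [1] FALSE FALSE  TRUE
```
```
# Keep subjects where at least one condition holds:
age > 50 | likes_cats
```
```
## [1]  TRUE  TRUE  TRUE
```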
Rarely, you might want *exactly one* condition to be true. The *XOR (eXclusive
OR) function* `xor()` returns `TRUE` when exactly one argument is `TRUE`. For
example:
```
xor(FALSE, FALSE)
```
```
## [1] FALSE
```
```
xor(TRUE, FALSE)
```
```
## [1] TRUE
```
```
xor(TRUE, TRUE)
```
```
## [1] FALSE
```
The AND, OR, and XOR operators are vectorized.
#### Short\-circuiting
The second argument is irrelevant in some conditions:
* `FALSE &` is always `FALSE`
* `TRUE |` is always `TRUE`
Now imagine you have `FALSE & long_computation()`. You can save time by
skipping `long_computation()`. A *short\-circuit operator* does exactly that.
R has two short\-circuit operators:
* `&&` is a short\-circuited `&`
* `||` is a short\-circuited `|`
These operators only evaluate the second argument if it is necessary to
determine the result. Here are some examples:
```
TRUE && FALSE
```
```
## [1] FALSE
```
```
TRUE && TRUE
```
```
## [1] TRUE
```
```
TRUE || TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE) && c(TRUE, TRUE)
```
```
## Warning in c(TRUE, FALSE) && c(TRUE, TRUE): 'length(x) = 2 > 1' in coercion to
## 'logical(1)'
## Warning in c(TRUE, FALSE) && c(TRUE, TRUE): 'length(x) = 2 > 1' in coercion to
## 'logical(1)'
```
```
## [1] TRUE
```
For the final expression, notice R only combines the first element of each
vector. The others are ignored. In other words, the short\-circuit operators are
*not* vectorized! Because of this, generally you **should not use** the
short\-circuit operators for indexing. Their main use is in writing conditions
for if\-statements, which we’ll learn about later on.
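As a preview of that use (a minimal sketch): a short\-circuit operator can guard a test that shouldn’t be evaluated unconditionally:
```
x = numeric(0)
# length(x) > 0 is FALSE, so && skips x[1] > 5 (which would be NA
# for an empty vector) and the whole condition is FALSE.
if (length(x) > 0 && x[1] > 5) "big" else "not big"
```
```
## [1] "not big"
```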
### 2\.4\.1 All Elements
The first way to use `[` to select elements is to leave the index blank. This
selects all elements:
```
x[]
```
```
## a b c d e
## 10 20 30 40 50
```
This way of indexing is rarely used for getting elements, since it’s the same
as entering the variable name without the indexing operator. Instead, its main
use is for setting elements. Suppose we want to set all the elements of `x` to
`5`. You might try writing this:
```
x = 5
x
```
```
## [1] 5
```
Rather than setting each element to `5`, this sets `x` to the scalar `5`, which
is not what we want. Let’s reset the vector and try again, this time using the
indexing operator:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
x[] = 5
x
```
```
## a b c d e
## 5 5 5 5 5
```
As you can see, now all the elements are `5`. So the indexing operator is
necessary to specify that we want to set the elements rather than the whole
variable.
Let’s reset `x` one more time, so that we can use it again in the next example:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
```
### 2\.4\.2 By Position
The second way to use `[` is to select elements by position. This happens when
you use an integer or numeric index. We already saw the basics of this in
Section [2\.1\.2](data-structures.html#indexing-vectors).
The positions of the elements in a vector (or other data structure) correspond
to numbers starting from 1 for the first element. This way of indexing is
frequently used together with the sequence operator `:` to get ranges of
values. For instance, let’s get the 2nd through 4th elements of `x`:
```
x[2:4]
```
```
## b c d
## 20 30 40
```
You can also use this way of indexing to set specific elements or ranges of
elements. For example, let’s set the 3rd and 5th elements of `x` to `9` and
`7`, respectively:
```
x[c(3, 5)] = c(9, 7)
x
```
```
## a b c d e
## 10 20 9 40 7
```
When getting elements, you can repeat numbers in the index to get the same
element more than once. You can also use the order of the numbers to control
the order of the elements:
```
x[c(2, 1, 2, 2)]
```
```
## b a b b
## 20 10 20 20
```
Finally, if the index contains only negative numbers, the elements at those
positions are excluded rather than selected. For instance, let’s get all
elements except the 1st and 5th:
```
x[-c(1, 5)]
```
```
## b c d
## 20 9 40
```
When you index by position, the index should always be all positive or all
negative. Using a mix of positive and negative numbers causes R to emit error
rather than returning elements, since it’s unclear what the result should be:
```
x[c(-1, 2)]
```
```
## Error in x[c(-1, 2)]: only 0's may be mixed with negative subscripts
```
### 2\.4\.3 By Name
The third way to use `[` is to select elements by name. This happens when you
use a character vector as the index, and only works with named data structures.
Like indexing by position, you can use indexing by name to get or set elements.
You can also use it to repeat elements or change the order. Let’s get elements
`a`, `c`, `d`, and `a` again from the vector `x`:
```
y = x[c("a", "c", "d", "a")]
y
```
```
## a c d a
## 10 9 40 10
```
Element names are generally unique, but if they’re not, indexing by name gets
or sets the first element whose name matches the index:
```
y["a"]
```
```
## a
## 10
```
Let’s reset `x` again to prepare for learning about the final way to index:
```
x = c(a = 10, b = 20, c = 30, d = 40, e = 50)
```
### 2\.4\.4 By Condition
The fourth and final way to use `[` is to select elements based on a condition.
This happens when you use a logical vector as the index. The logical vector
should have the same length as what you’re indexing, and will be recycled if it
doesn’t.
#### Congruent Vectors
To understand indexing by condition, we first need to learn about congruent
vectors. Two vectors are *congruent* if they have the same length and they
correspond element\-by\-element.
For example, suppose you do a survey that records each respondent’s favorite
animal and age. These are two different vectors of information, but each person
will have a response for both. So you’ll have two vectors that are the same
length:
```
animal = c("dog", "cat", "iguana")
age = c(31, 24, 72)
```
The 1st element of each vector corresponds to the 1st person, the 2nd to the
2nd person, and so on. These vectors are congruent.
Notice that columns in a data frame are always congruent!
#### Back to Indexing
When you index by condition, the index should generally be congruent to the
object you’re indexing. Elements where the index is `TRUE` are kept and
elements where the index is `FALSE` are dropped.
If you create the index from a condition on the object, it’s automatically
congruent. For instance, let’s make a condition based on the vector `x`:
```
is_small = x < 25
is_small
```
```
## a b c d e
## TRUE TRUE FALSE FALSE FALSE
```
The 1st element in the logical vector `is_small` corresponds to the 1st element
of `x`, the 2nd to the 2nd, and so on. The vectors `x` and `is_small` are
congruent.
It makes sense to use `is_small` as an index for `x`, and it gives us all the
elements less than `25`:
```
x[is_small]
```
```
## a b
## 10 20
```
Of course, you can also avoid using an intermediate variable for the condition:
```
x[x > 10]
```
```
## b c d e
## 20 30 40 50
```
If you create index some other way (not using the object), make sure that it’s
still congruent to the object. Otherwise, the subset returned from indexing
might not be meaningful.
You can also use indexing by condition to set elements, just as the other ways
of indexing can be used to set elements. For instance, let’s set all the
elements of `x` that are greater than `10` to the missing value `NA`:
```
x[x > 10] = NA
x
```
```
## a b c d e
## 10 NA NA NA NA
```
#### Congruent Vectors
To understand indexing by condition, we first need to learn about congruent
vectors. Two vectors are *congruent* if they have the same length and they
correspond element\-by\-element.
For example, suppose you do a survey that records each respondent’s favorite
animal and age. These are two different vectors of information, but each person
will have a response for both. So you’ll have two vectors that are the same
length:
```
animal = c("dog", "cat", "iguana")
age = c(31, 24, 72)
```
The 1st element of each vector corresponds to the 1st person, the 2nd to the
2nd person, and so on. These vectors are congruent.
Notice that columns in a data frame are always congruent!
#### Back to Indexing
When you index by condition, the index should generally be congruent to the
object you’re indexing. Elements where the index is `TRUE` are kept and
elements where the index is `FALSE` are dropped.
If you create the index from a condition on the object, it’s automatically
congruent. For instance, let’s make a condition based on the vector `x`:
```
is_small = x < 25
is_small
```
```
## a b c d e
## TRUE TRUE FALSE FALSE FALSE
```
The 1st element in the logical vector `is_small` corresponds to the 1st element
of `x`, the 2nd to the 2nd, and so on. The vectors `x` and `is_small` are
congruent.
It makes sense to use `is_small` as an index for `x`, and it gives us all the
elements less than `25`:
```
x[is_small]
```
```
## a b
## 10 20
```
Of course, you can also avoid using an intermediate variable for the condition:
```
x[x > 10]
```
```
## b c d e
## 20 30 40 50
```
If you create index some other way (not using the object), make sure that it’s
still congruent to the object. Otherwise, the subset returned from indexing
might not be meaningful.
You can also use indexing by condition to set elements, just as the other ways
of indexing can be used to set elements. For instance, let’s set all the
elements of `x` that are greater than `10` to the missing value `NA`:
```
x[x > 10] = NA
x
```
```
## a b c d e
## 10 NA NA NA NA
```
### 2\.4\.5 Logic
All of the conditions we’ve seen so far have been written in terms of a single
test. If you want to use more sophisticated conditions, R provides operators to
negate and combine logical vectors. These operators are useful for working with
logical vectors even outside the context of indexing.
#### Negation
The *NOT operator* `!` converts `TRUE` to `FALSE` and `FALSE` to `TRUE`:
```
x = c(TRUE, FALSE, TRUE, TRUE, NA)
x
```
```
## [1] TRUE FALSE TRUE TRUE NA
```
```
!x
```
```
## [1] FALSE TRUE FALSE FALSE NA
```
You can use `!` with a condition:
```
y = c("hi", "hello")
!(y == "hi")
```
```
## [1] FALSE TRUE
```
The NOT operator is vectorized.
#### Combinations
R also has operators for combining logical values.
The *AND operator* `&` returns `TRUE` only when both arguments are `TRUE`. Here
are some examples:
```
FALSE & FALSE
```
```
## [1] FALSE
```
```
TRUE & FALSE
```
```
## [1] FALSE
```
```
FALSE & TRUE
```
```
## [1] FALSE
```
```
TRUE & TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE, TRUE) & c(TRUE, TRUE, FALSE)
```
```
## [1] TRUE FALSE FALSE
```
The *OR operator* `|` returns `TRUE` when at least one argument is `TRUE`.
Let’s see some examples:
```
FALSE | FALSE
```
```
## [1] FALSE
```
```
TRUE | FALSE
```
```
## [1] TRUE
```
```
FALSE | TRUE
```
```
## [1] TRUE
```
```
TRUE | TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE) | c(TRUE, TRUE)
```
```
## [1] TRUE TRUE
```
Be careful: everyday English is less precise than logic. You might say:
> I want all subjects with age over 50 and all subjects that like cats.
But in logic this means:
`(subject age over 50) OR (subject likes cats)`
So think carefully about whether you need both conditions to be true (AND) or
at least one (OR).
Rarely, you might want *exactly one* condition to be true. The *XOR (eXclusive
OR) function* `xor()` returns `TRUE` when exactly one argument is `TRUE`. For
example:
```
xor(FALSE, FALSE)
```
```
## [1] FALSE
```
```
xor(TRUE, FALSE)
```
```
## [1] TRUE
```
```
xor(TRUE, TRUE)
```
```
## [1] FALSE
```
The AND, OR, and XOR operators are vectorized.
#### Short\-circuiting
The second argument is irrelevant in some conditions:
* `FALSE &` is always `FALSE`
* `TRUE |` is always `TRUE`
Now imagine you have `FALSE & long_computation()`. You can save time by
skipping `long_computation()`. A *short\-circuit operator* does exactly that.
R has two short\-circuit operators:
* `&&` is a short\-circuited `&`
* `||` is a short\-circuited `|`
These operators only evaluate the second argument if it is necessary to
determine the result. Here are some examples:
```
TRUE && FALSE
```
```
## [1] FALSE
```
```
TRUE && TRUE
```
```
## [1] TRUE
```
```
TRUE || TRUE
```
```
## [1] TRUE
```
```
c(TRUE, FALSE) && c(TRUE, TRUE)
```
```
## Warning in c(TRUE, FALSE) && c(TRUE, TRUE): 'length(x) = 2 > 1' in coercion to
## 'logical(1)'
## Warning in c(TRUE, FALSE) && c(TRUE, TRUE): 'length(x) = 2 > 1' in coercion to
## 'logical(1)'
```
```
## [1] TRUE
```
For the final expression, notice R only combines the first element of each
vector. The others are ignored. In other words, the short\-circuit operators are
*not* vectorized! Because of this, generally you **should not use** the
short\-circuit operators for indexing. Their main use is in writing conditions
for if\-statements, which we’ll learn about later on.
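Here’s a minimal sketch of that typical use, with a hypothetical vector `z`.
For a zero\-length `z`, the test `z[1] > 10` produces `NA`, which an
if\-statement can’t handle, so we guard it with a length check that
short\-circuits first:
```
z = numeric(0)
# `length(z) > 0` is FALSE, so `z[1] > 10` is never evaluated.
if (length(z) > 0 && z[1] > 10) "big" else "not big"
```
```
## [1] "not big"
```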
2\.5 Exercises
--------------
### 2\.5\.1 Exercise
The `rep` function is another way to create a vector. Read the help file for
the `rep` function.
1. What does the `rep` function do to create a vector? Give an example.
2. The `rep` function has parameters `times` and `each`. What does each do, and
how do they differ? Give examples for both.
3. Can you set both of `times` and `each` in a single call to `rep`? If the
function raises an error, explain what the error message means. If the
function returns a result, explain how the result corresponds to the
arguments you chose.
### 2\.5\.2 Exercise
Considering how implicit coercion works (Section [2\.2\.2](data-structures.html#implicit-coercion)):
1. Why does `"3" + 4` raise an error?
2. Why does `"TRUE" == TRUE` return `TRUE`?
3. Why does `"FALSE" < TRUE` return `TRUE`?
### 2\.5\.3 Exercise
1. Section [2\.3\.1](data-structures.html#missing-values) described the missing value as a “chameleon”
because it can have many different types. Is `Inf` also a chameleon? Use
examples to justify your answer.
2. The missing value is also “contagious” because using it as an argument
usually produces another missing value. Is `Inf` contagious? Again, use
examples to justify your answer.
### 2\.5\.4 Exercise
The `table` function is useful for counting all sorts of things, not just level
frequencies for a factor. For instance, you can use `table` to count how many
`TRUE` and `FALSE` values there are in a logical vector.
1. For the earnings data, how many rows had median weekly earnings below $750?
2. Based on how the data is structured, is your answer in part 1 the same as
the number of quarters that had median weekly earnings below $750? Explain.
3 Exploring Data
================
Now that you have a solid foundation in the basic functions and data structures
of R, you can move on to its most popular application: data analysis. In this
chapter, you’ll learn how to efficiently explore and summarize data with
visualizations and statistics. Along the way, you’ll also learn how to use
apply functions, which are essential to fluency in R.
#### Learning Objectives
* Describe when to use `[` versus `[[`
* Index data frames to get specific rows, columns, or subsets
* Install and load packages
* Describe the grammar of graphics
* Make a plot
* Save a plot to an image file
* Call a function repeatedly with `sapply` or `lapply`
* Split data into groups and apply a function to each
3\.1 Indexing Data Frames
-------------------------
This section explains how to get and set data in a data frame, expanding on the
indexing techniques you learned in Section [2\.4](data-structures.html#indexing). Under the hood,
every data frame is a list, so first you’ll learn about indexing lists.
### 3\.1\.1 Indexing Lists
Lists are a *container* for other types of R objects. When you select an
element from a list, you can either keep the container (the list) or discard
it. The indexing operator `[` almost always keeps containers.
As an example, let’s get some elements from a small list:
```
x = list(first = c(1, 2, 3), second = sin, third = c("hi", "hello"))
y = x[c(1, 3)]
y
```
```
## $first
## [1] 1 2 3
##
## $third
## [1] "hi" "hello"
```
```
class(y)
```
```
## [1] "list"
```
The result is still a list. Even if we get just one element, the result of
indexing a list with `[` is a list:
```
class(x[1])
```
```
## [1] "list"
```
Sometimes this will be exactly what we want. But what if we want to get the
first element of `x` so that we can use it in a vectorized function? Or in a
function that only accepts numeric arguments? We need to somehow get the
element and discard the container.
The solution to this problem is the *extraction operator* `[[`, which is also
called the *double square bracket operator*. The extraction operator is the
primary way to get and set elements of lists and other containers.
Unlike the indexing operator `[`, the extraction operator always discards the
container:
```
x[[1]]
```
```
## [1] 1 2 3
```
```
class(x[[1]])
```
```
## [1] "numeric"
```
The tradeoff is that the extraction operator can only get or set one element at
a time. Note that the element can be a vector, as above. Because it can only
get or set one element at a time, the extraction operator can only index by
position or name. Blank and logical indexes are not allowed.
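For instance, we can extract the `third` element of `x` by name:
```
x[["third"]]
```
```
## [1] "hi" "hello"
```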
The final difference between the index operator `[` and the extraction operator
`[[` has to do with how they handle invalid indexes. The index operator `[`
returns `NA` for invalid vector elements, and `NULL` for invalid list elements:
```
c(1, 2)[10]
```
```
## [1] NA
```
```
x[10]
```
```
## $<NA>
## NULL
```
On the other hand, the extraction operator `[[` raises an error for invalid
elements:
```
x[[10]]
```
```
## Error in x[[10]]: subscript out of bounds
```
The indexing operator `[` and the extraction operator `[[` both work with any
data structure that has elements. However, you’ll generally use the indexing
operator `[` to index vectors, and the extraction operator `[[` to index
containers (such as lists).
### 3\.1\.2 Two\-dimensional Indexing
For two\-dimensional objects, like matrices and data frames, you can pass the
indexing operator `[` or the extraction operator `[[` a separate index for each
dimension. The rows come first:
```
DATA[ROWS, COLUMNS]
```
For instance, let’s get the first 3 rows and all columns of the earnings data:
```
earn[1:3, ]
```
```
## sex race ethnic_origin age year quarter n_persons
## 1 Both Sexes All Races All Origins 16 years and over 2010 1 96821000
## 2 Both Sexes All Races All Origins 16 years and over 2010 2 99798000
## 3 Both Sexes All Races All Origins 16 years and over 2010 3 101385000
## median_weekly_earn
## 1 754
## 2 740
## 3 740
```
As we saw in Section [2\.4\.1](data-structures.html#all-elements), leaving an index blank means all
elements.
As another example, let’s get the 3rd and 5th row, and the 2nd and 4th column:
```
earn[c(3, 5), c(2, 4)]
```
```
## race age
## 3 All Races 16 years and over
## 5 All Races 16 years and over
```
Mixing several different ways of indexing is allowed. So, for example, we can
get the same result as above, but using column names instead of positions:
```
earn[c(3, 5), c("race", "age")]
```
```
## race age
## 3 All Races 16 years and over
## 5 All Races 16 years and over
```
For data frames, it’s especially common to index the rows by condition and the
columns by name. For instance, let’s get the `sex`, `age`, and `n_persons`
columns for all rows that pertain to women:
```
result = earn[earn$sex == "Women", c("sex", "age", "n_persons")]
head(result)
```
```
## sex age n_persons
## 89 Women 16 years and over 43794000
## 90 Women 16 years and over 44562000
## 91 Women 16 years and over 44912000
## 92 Women 16 years and over 44620000
## 93 Women 16 years and over 44077000
## 94 Women 16 years and over 44539000
```
### 3\.1\.3 The `drop` Parameter
If you use two\-dimensional indexing with `[` to select exactly one column, you
get a vector:
```
result = earn[1:3, 2]
class(result)
```
```
## [1] "character"
```
The container is dropped, even though the indexing operator `[` usually keeps
containers. This also occurs for matrices. You can control this behavior with
the `drop` parameter:
```
result = earn[1:3, 2, drop = FALSE]
class(result)
```
```
## [1] "data.frame"
```
The default is `drop = TRUE`.
3\.2 Packages
-------------
A *package* is a collection of functions for use in R. Packages usually include
documentation, and can also contain examples, vignettes, and data sets. Most
packages are developed by members of the R community, so quality varies. There
are also a few packages that are built into R but provide extra features. We’ll
use a package in Section [3\.3](exploring-data.html#data-visualization), so we’re learning about
them now.
The [Comprehensive R Archive Network](https://cran.r-project.org/), or CRAN, is the main place people
publish packages. As of writing, there were 18,619 packages posted to CRAN.
This number has been steadily increasing as R has grown in popularity.
Packages span a wide variety of topics and disciplines. There are packages
related to statistics, social sciences, geography, genetics, physics, biology,
pharmacology, economics, agriculture, and more. The best way to find packages
is to search online, but the CRAN website also provides [“task
views”](https://cran.r-project.org/web/views/) if you want to browse popular packages related to a
specific discipline.
The `install.packages` function installs one or more packages from CRAN. Its
first argument is the packages to install, as a character vector.
When you run `install.packages`, R will ask you to choose which *mirror* to
download the package from. A mirror is a web server that has the same set of
files as some other server. Mirrors are used to make downloads faster and to
provide redundancy so that if a server stops working, files are still available
somewhere else. CRAN has dozens of mirrors; you should choose one that’s
geographically nearby, since that usually produces the best download speeds. If
you aren’t sure which mirror to choose, you can use the 0\-Cloud mirror, which
attempts to automatically choose a mirror near you.
As an example, here’s the code to install the remotes package:
```
install.packages("remotes")
```
If you run the code above, you’ll be asked to select a mirror, and then see
output that looks something like this:
```
--- Please select a CRAN mirror for use in this session ---
trying URL 'https://cloud.r-project.org/src/contrib/remotes_2.3.0.tar.gz'
Content type 'application/x-gzip' length 148405 bytes (144 KB)
==================================================
downloaded 144 KB
* installing *source* package ‘remotes’ ...
** package ‘remotes’ successfully unpacked and MD5 sums checked
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (remotes)
The downloaded source packages are in
‘/tmp/Rtmp8t6iGa/downloaded_packages’
```
R goes through a variety of steps to install a package, even installing other
packages that the package depends on. You can tell that a package installation
succeeded by the final line `DONE`. When a package installation fails, R prints
an error message explaining the problem instead.
Once a package is installed, it stays on your computer until you remove it or
remove R. This means you only need to install each package once. However, most
packages are periodically updated. You can reinstall a package using
`install.packages` the same way as above to get the latest version.
Alternatively, you can update all of the R packages you have installed at once
by calling the `update.packages` function. Beware that this may take a long
time if you have a lot of packages installed.
The function to remove packages is `remove.packages`. Like `install.packages`,
this function’s first argument is the packages to remove, as a character
vector.
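For example, this would remove the remotes package we installed earlier (don’t
run it if you want to follow along below, where we load the package):
```
remove.packages("remotes")
```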
If you want to see which packages are installed, you can use the
`installed.packages` function. It does not require any arguments. It returns a
matrix with one row for each package and columns that contain a variety of
information. Here’s an example:
```
packages = installed.packages()
# Just print the version numbers for 10 packages.
packages[1:10, "Version"]
```
```
## base64enc bookdown bslib cachem cli colorspace cpp11
## "0.1-3" "0.29" "0.4.0" "1.0.6" "3.4.0" "2.0-3" "0.4.2"
## digest evaluate fansi
## "0.6.29" "0.16" "1.0.3"
```
You’ll see a different set of packages, since you have a different computer.
Before you can use the functions (or other resources) in an installed package,
you must load the package with the `library` function. R doesn’t load packages
automatically because each package you load uses memory and may conflict with
other packages. Thus you should only load the packages you need for whatever
it is that you want to do. When you restart R, the loaded packages are cleared
and you must again load any packages you want to use.
Let’s load the remotes package we installed earlier:
```
library("remotes")
```
The `library` function works with or without quotes around the package name, so
you may also see people write things like `library(remotes)`. We recommend
using quotes to make it unambiguous that you are not referring to a variable.
A handful of packages print out a message when loaded, but the vast majority do
not. Thus you can assume the call to `library` was successful if nothing is
printed. If something goes wrong while loading a package, R will print out an
error message explaining the problem.
Finally, not all R packages are published to CRAN. [GitHub](https://github.com/) is another
popular place to publish R packages, especially ones that are experimental or
still in development. Unlike CRAN, GitHub is a general\-purpose website for
publishing code written in any programming language, so it contains much more
than just R packages and is not specifically R\-focused.
The remotes package that we just installed and loaded provides functions to
install packages from GitHub. It is generally better to install packages from
CRAN when they are available there, since the versions on CRAN tend to be more
stable and intended for a wide audience. However, if you want to install a
package from GitHub, you can learn more about the remotes package by reading
its [online documentation](https://remotes.r-lib.org/).
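As a sketch, installing a package from GitHub with remotes looks like this,
where `"username/reponame"` is a placeholder for the repository you actually
want:
```
# "username/reponame" is a placeholder, not a real repository.
remotes::install_github("username/reponame")
```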
3\.3 Data Visualization
-----------------------
There are three popular systems for creating visualizations in R:
1. The base R functions (primarily the `plot` function)
2. The lattice package
3. The ggplot2 package
These three systems are not interoperable! Consequently, it’s best to choose
one to use exclusively. Compared to base R, both lattice and ggplot2 are better
at handling grouped data and generally require less code to create a
nice\-looking visualization.
The ggplot2 package is so popular that there are now knockoff packages for
other data\-science\-oriented programming languages like Python and Julia. The
package is also part of the [*Tidyverse*](https://www.tidyverse.org/), a popular collection of R
packages designed to work well together. Because of these advantages, we’ll use
ggplot2 for visualizations in this and all future lessons.
ggplot2 has detailed [documentation](https://ggplot2.tidyverse.org/) and also a
[cheatsheet](https://github.com/rstudio/cheatsheets/blob/master/data-visualization-2.1.pdf).
The “gg” in ggplot2 stands for *grammar of graphics*. The idea of a grammar of
graphics is that visualizations can be built up in layers. In ggplot2, the
three layers every plot must have are:
* Data
* Geometry
* Aesthetics
There are also several optional layers. Here are a few:
| Layer | Description |
| --- | --- |
| scales | Title, label, and axis value settings |
| facets | Side\-by\-side plots |
| guides | Axis and legend position settings |
| annotations | Shapes that are not mapped to data |
| coordinates | Coordinate systems (Cartesian, logarithmic, polar) |
As an example, let’s plot the earnings data. First, we need to load ggplot2\. As
always, if this is your first time using the package, you’ll have to install
it. Then you can load the package:
```
# install.packages("ggplot2")
library("ggplot2")
```
What kind of plot should we make? It depends on what data we want the plot to
show. Let’s make a line plot that shows median earnings for each quarter in
2019, with separate lines for men and women.
Before plotting, we need to take a subset of the earnings that only contains
information for 2019:
```
earn19 = earn[earn$year == 2019, ]
```
The data is also broken down across `race`, `ethnic_origin`, and `age`. Since
we aren’t interested in these categories for the plot, we need to further
subset the data:
```
earn19 = earn19[earn19$race == "All Races" &
earn19$ethnic_origin == "All Origins" &
earn19$age == "16 years and over", ]
```
Now we’re ready to make the plot.
### 3\.3\.1 Layer 1: Data
The data layer determines the data set used to make the plot. ggplot and most
other Tidyverse packages are designed for working with *tidy* data frames. Tidy
means:
1. Each observation has its own row.
2. Each feature has its own column.
3. Each value has its own cell.
Tidy data sets are convenient in general. A later lesson will cover how to make
an untidy data set tidy. Until then, we’ll take it for granted that the data
sets we work with are tidy.
To set up the data layer, call the `ggplot` function on a data frame:
```
ggplot(earn19)
```
This returns a blank plot. We still need to add a few more layers.
### 3\.3\.2 Layer 2: Geometry
The **geom**etry layer determines the shape or appearance of the visual
elements of the plot. In other words, the geometry layer determines what kind
of plot to make: one with points, lines, boxes, or something else.
There are many different geometries available in ggplot2\. The package provides
a function for each geometry, always prefixed with `geom_`.
To add a geometry layer to the plot, choose the `geom_` function you want and
add it to the plot with the `+` operator:
```
ggplot(earn19) + geom_line()
```
```
## Error in `check_required_aesthetics()`:
## ! geom_line requires the following missing aesthetics: x and y
```
This returns an error message that we’re missing aesthetics `x` and `y`. We’ll
learn more about aesthetics in the next section, but this error message is
especially helpful: it tells us exactly what we’re missing. When you use a
geometry you’re unfamiliar with, it can be helpful to run the code for just the
data and geometry layer like this, to see exactly which aesthetics need to be
set.
As we’ll see later, it’s possible to add multiple geometries to a plot.
### 3\.3\.3 Layer 3: Aesthetics
The **aes**thetic layer determines the relationship between the data and the
geometry. Use the aesthetic layer to map features in the data to **aesthetics**
(visual elements) of the geometry.
The `aes` function creates an aesthetic layer. The syntax is:
```
aes(AESTHETIC = FEATURE, ...)
```
The names of the aesthetics depend on the geometry, but some common ones are
`x`, `y`, `color`, `fill`, `shape`, and `size`. There is more information about
and examples of aesthetic names in the documentation.
For example, we want to put `quarter` on the x\-axis and `median_weekly_earn` on
the y\-axis. We also want to use a separate line style for each `sex` category.
So the aesthetic layer should be:
```
aes(x = quarter, y = median_weekly_earn, linetype = sex)
```
In the `aes` function, column names are never quoted.
Unlike most layers, the aesthetic layer is not added to the plot with the `+`
operator. Instead, you can pass the aesthetic layer as the second argument to
the `ggplot` function:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex) +
geom_line()
```
If you want to set an aesthetic to a constant value, rather than one that’s
data dependent, do so *outside* of the aesthetic layer. For instance, suppose
we want to make the lines blue:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex) +
geom_line(color = "blue")
```
If you set an aesthetic to a constant value inside of the aesthetic layer, the
results you get might not be what you expect:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex, color = "blue") +
geom_line()
```
### 3\.3\.4 Layer 4: Scales
The scales layer controls the title, axis labels, and axis scales of the plot.
Most of the functions in the scales layer are prefixed with `scale_`, but not
all of them.
The `labs` function is especially important, because it’s used to set the title
and axis labels:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex) +
geom_line() +
labs(x = "Quarter", y = "Median Weekly Salary (USD)",
title = "2019 Median Weekly Salaries, by Sex", linetype = "Sex")
```
### 3\.3\.5 Saving Plots
In ggplot2, use the `ggsave` function to save the most recent plot you created:
```
ggsave("line.png")
```
The file format is selected automatically based on the extension. Common
formats are PNG and PDF.
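You can also control the size of the saved image. As a sketch, assuming you
want a 6\-by\-4 inch PDF (the `width` and `height` arguments are in inches by
default):
```
ggsave("line.pdf", width = 6, height = 4)
```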
#### The Plot Device
You can also save a plot with one of R’s “plot device” functions. The steps
are:
1. Call a plot device function: `png`, `jpeg`, `pdf`, `bmp`, `tiff`, or `svg`.
2. Run your code to make the plot.
3. Call `dev.off` to indicate that you’re done plotting.
This strategy works with any of R’s graphics systems (not just ggplot2\).
Here’s an example:
```
# Run these lines in the console, not the notebook!
jpeg("line.jpeg")
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn) +
geom_point()
dev.off()
```
### 3\.3\.6 Example: Bar Plot
Let’s say we want to plot the number of persons for each sex, again using the
earnings data set. A bar plot is an appropriate way to represent this visually.
The geometry for a bar plot is `geom_bar`. Since bar plots are mainly used to
display frequencies, the `geom_bar` function automatically computes frequencies
when mapped to a categorical feature.
The `n_persons` feature is not categorical, so we don’t need `geom_bar` to
compute frequencies. To prevent `geom_bar` from computing frequencies
automatically, set `stat = "identity"`.
Here’s the code to make the bar plot:
```
ggplot(earn19) +
aes(x = quarter, y = n_persons, fill = sex) +
geom_bar(stat = "identity", position = "dodge") +
labs(x = "Quarter", y = "Number of Workers", fill = "Sex",
title = "Number of Workers by Quarter and Sex in 2019")
```
The setting `position = "dodge"` instructs `geom_bar` to put the bars
side\-by\-side rather than stacking them.
### 3\.3\.7 Visualization Design
Designing high\-quality visualizations goes beyond just mastering which R
functions to call. You also need to think carefully about what kind of data you
have and what message you want to convey. This section provides a few
guidelines.
The first step in data visualization is choosing an appropriate kind of plot.
Here are some suggestions (not rules):
| Feature 1 | Feature 2 | Plot |
| --- | --- | --- |
| categorical | | bar, dot |
| categorical | categorical | bar, dot, mosaic |
| numerical | | box, density, histogram |
| numerical | categorical | box, density, ridge |
| numerical | numerical | line, scatter, smooth scatter |
If you want to add a:
* 3rd numerical feature, use it to change point/line sizes.
* 3rd categorical feature, use it to change point/line styles.
* 4th categorical feature, use side\-by\-side plots.
Once you’ve selected a plot, here are some rules you should almost always
follow:
* Always add a title and axis labels. These should be in plain English, not
variable names!
* Specify units after the axis label if the axis has units. For instance,
“Height (ft)”.
* Don’t forget that many people are colorblind! Also, plots are often printed
in black and white. Use point and line styles to distinguish groups; color is
optional.
* Add a legend whenever you’ve used more than one point or line style.
* Always write a few sentences explaining what the plot reveals. Don’t
describe the plot, because the reader can just look at it. Instead,
explain what they can learn from the plot and point out important details
that are easily overlooked.
* Sometimes points get plotted on top of each other. This is called
*overplotting*. Plots with a lot of overplotting can be hard to read and can
even misrepresent the data by hiding how many points are present. Use a
two\-dimensional density plot or jitter the points to deal with overplotting.
* For side\-by\-side plots, use the same axis scales for both plots so that
comparing them is not deceptive.
Visualization design is a deep topic, and whole books have been written about
it. One resource where you can learn more is DataLab’s [Principles of Data
Visualization Workshop Reader](https://ucdavisdatalab.github.io/workshop_data_viz_principles/).
3\.4 Apply Functions
--------------------
Section [2\.1\.3](data-structures.html#vectorization) introduced vectorization, a convenient and
efficient way to compute multiple results. That section also mentioned that
some of R’s functions—the ones that summarize or aggregate data—are not
vectorized.
The `class` function is an example of a function that’s not vectorized. If we
call the `class` function on the earnings data set, we get just one result for
the data set as a whole:
```
class(earn)
```
```
## [1] "data.frame"
```
What if we want to get the class of each column? We can get the class for a
single column by selecting the column with `$`, the dollar sign operator:
```
class(earn$age)
```
```
## [1] "character"
```
But what if we want the classes for all the columns? We could write a call to
`class` for each column, but that would be tedious. When you’re working with a
programming language, you should try to avoid tedium; there’s usually a better,
more automated way.
Section [2\.2\.1](data-structures.html#lists) pointed out that data frames are technically lists, where
each column is one element. With that in mind, what we need here is a line of
code that calls `class` on each element of the data frame. The idea is similar
to vectorization, but since we have a list and a non\-vectorized function, we
have to do a bit more than just call `class(earn)`.
The `lapply` function calls, or *applies*, a function on each element of a list
or vector. The syntax is:
```
lapply(X, FUN, ...)
```
The function `FUN` is called once for each element of `X`, with the element as
the first argument. The `...` is for additional arguments to `FUN`, which are
held constant across all the elements.
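For instance, here’s a minimal sketch of passing an extra argument through
`...`, using a hypothetical list `nums`. The argument `digits = 1` is passed
along to every call of `round`:
```
nums = list(a = 3.14159, b = 2.71828)
lapply(nums, round, digits = 1)
```
```
## $a
## [1] 3.1
##
## $b
## [1] 2.7
```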
Let’s try this out with the earnings data and the `class` function:
```
lapply(earn, class)
```
```
## $sex
## [1] "character"
##
## $race
## [1] "character"
##
## $ethnic_origin
## [1] "character"
##
## $age
## [1] "character"
##
## $year
## [1] "integer"
##
## $quarter
## [1] "integer"
##
## $n_persons
## [1] "integer"
##
## $median_weekly_earn
## [1] "integer"
```
The result is similar to what we’d get if the `class` function were vectorized. In fact, if we
use a vector and a vectorized function with `lapply`, the result is nearly
identical to the result from vectorization:
```
x = c(1, 2, pi)
sin(x)
```
```
## [1] 8.414710e-01 9.092974e-01 1.224647e-16
```
```
lapply(x, sin)
```
```
## [[1]]
## [1] 0.841471
##
## [[2]]
## [1] 0.9092974
##
## [[3]]
## [1] 1.224647e-16
```
The only difference is that the result from `lapply` is a list. In fact, the
`lapply` function always returns a list with one element for each element of
the input data. The “l” in `lapply` stands for “list”.
The `lapply` function is one member of a family of functions called *apply
functions*. All of the apply functions provide ways to apply a function
repeatedly to different parts of a data structure. We’ll meet a few more apply
functions soon.
When you have a choice between using vectorization or an apply function, you
should always choose vectorization. Vectorization is clearer—compare the two
lines of code above—and it’s also significantly more efficient. In fact,
vectorization is the most efficient way to call a function repeatedly in R.
As we saw with the `class` function, there are some situations where
vectorization is not possible. That’s when you should think about using an
apply function.
### 3\.4\.1 The `sapply` Function
The related `sapply` function calls a function on each element of a list or
vector, and simplifies the result. That last part is the crucial difference
compared to `lapply`. When results from the calls all have the same type and
length, `sapply` returns a vector or matrix instead of a list. When the results
have different types or lengths, the result is the same as for `lapply`. The
“s” in `sapply` stands for “simplify”.
For instance, if we use `sapply` to find the classes of the columns in the
earnings data, we get a character vector:
```
sapply(earn, class)
```
```
## sex race ethnic_origin age
## "character" "character" "character" "character"
## year quarter n_persons median_weekly_earn
## "integer" "integer" "integer" "integer"
```
Likewise, if we use `sapply` to compute the `sin` values, we get a numeric
vector, the same as from vectorization:
```
sapply(x, sin)
```
```
## [1] 8.414710e-01 9.092974e-01 1.224647e-16
```
In spite of that, vectorization is still more efficient than `sapply`, so use
vectorization instead when possible.
Apply functions are incredibly useful for summarizing data. For example,
suppose we want to compute the frequencies for all of the columns in the
earnings data set that aren’t numeric.
First, we need to identify the columns. One way to do this is with the
`is.numeric` function. Despite the name, this function actually tests whether
its argument is a real number, not whether its argument is strictly a numeric
(double) vector. In other words, it also returns `TRUE` for integer values. We can use `sapply` to
apply this function to all of the columns in the earnings data set:
```
is_not_number = !sapply(earn, is.numeric)
is_not_number
```
```
## sex race ethnic_origin age
## TRUE TRUE TRUE TRUE
## year quarter n_persons median_weekly_earn
## FALSE FALSE FALSE FALSE
```
Is it worth using R code to identify the non\-numeric columns? Since there are
only 8 columns in the earnings data set, maybe not. But if the data set were
larger, say with 100 columns, it definitely would be.
In general, it’s a good habit to use R to do things rather than do them
manually. You’ll get more practice programming, and your code will be more
flexible if you want to adapt it to other data sets.
Now that we know which columns are non\-numeric, we can use the `table` function
to compute frequencies. We only want to compute frequencies for those columns,
so we need to subset the data:
```
lapply(earn[, is_not_number], table)
```
```
## $sex
##
## Both Sexes Men Women
## 1408 1408 1408
##
## $race
##
## All Races Asian Black or African American
## 2244 660 660
## White
## 660
##
## $ethnic_origin
##
## All Origins Hispanic or Latino
## 3564 660
##
## $age
##
## 16 to 19 years 16 to 24 years 16 years and over 20 to 24 years
## 132 660 660 132
## 25 to 34 years 25 to 54 years 25 years and over 35 to 44 years
## 132 660 660 132
## 45 to 54 years 55 to 64 years 55 years and over 65 years and over
## 132 132 660 132
```
We use `lapply` rather than `sapply` for this step because the table for each
column will have a different length (but try `sapply` and see what happens!).
### 3\.4\.2 The Split\-Apply Pattern
In a data set with categorical features, it’s often useful to compute something
for each category. The `lapply` and `sapply` functions can compute something
for each element of a data structure, but categories are not necessarily
elements.
For example, the earnings data set has three different categories in the `sex`
column. If we want all of the rows in one category, one way to get them is by
indexing:
```
women = earn[earn$sex == "Women", ]
head(women)
```
```
## sex race ethnic_origin age year quarter n_persons
## 89 Women All Races All Origins 16 years and over 2010 1 43794000
## 90 Women All Races All Origins 16 years and over 2010 2 44562000
## 91 Women All Races All Origins 16 years and over 2010 3 44912000
## 92 Women All Races All Origins 16 years and over 2010 4 44620000
## 93 Women All Races All Origins 16 years and over 2011 1 44077000
## 94 Women All Races All Origins 16 years and over 2011 2 44539000
## median_weekly_earn
## 89 665
## 90 672
## 91 662
## 92 679
## 93 683
## 94 689
```
To get all three categories, we’d have to do this three times. If we want to
compute something for each category, say the mean of the `n_persons` column, we
also have to repeat that computation three times. Here’s what it would look
like for just the `women` category:
```
mean(women$n_persons)
```
```
## [1] 10758771
```
If the categories were elements, we could avoid writing code to index each
category, and just use the `sapply` (or `lapply`) function to apply the `mean`
function to each.
The `split` function splits a vector or data frame into groups based on a
vector of categories. The first argument to `split` is the data, and the
second argument is a congruent vector of categories.
We can use `split` to elegantly compute means of `n_persons` broken down by
sex. First, we split the data by category. Since we only want to compute on the
`n_persons` column, we only split that column:
```
by_sex = split(earn$n_persons, earn$sex)
class(by_sex)
```
```
## [1] "list"
```
```
names(by_sex)
```
```
## [1] "Both Sexes" "Men" "Women"
```
The result from `split` is a list with one element for each category. The
individual elements contain pieces of the original `n_persons` column:
```
head(by_sex$Women)
```
```
## [1] 43794000 44562000 44912000 44620000 44077000 44539000
```
Since the categories are elements in the split data, now we can use `sapply`
the same way we did in previous examples:
```
sapply(by_sex, mean)
```
```
## Both Sexes Men Women
## 24402515 13643727 10758771
```
This two\-step process is an R idiom called the *split\-apply pattern*. First you
use `split` to convert categories into list elements, then you use an apply
function to compute something on each category. Any time you want to compute
results by category, you should think of this pattern.
The split\-apply pattern is so useful that R provides the `tapply` function as a
shortcut. The `tapply` function is equivalent to calling `split` and then
`sapply`. Like `split`, the first argument is the data and the second argument
is a congruent vector of categories. The third argument is a function to apply,
like the function argument in `sapply`.
We can use `tapply` to compute the `n_persons` means by `sex` for the earnings
data:
```
tapply(earn$n_persons, earn$sex, mean)
```
```
## Both Sexes Men Women
## 24402515 13643727 10758771
```
Notice that the result is identical to the one we computed before.
The “t” in `tapply` stands for “table”, because the `tapply` function is a
generalization of the `table` function. If you use `length` as the third
argument to `tapply`, you get the same results as you would from using the
`table` function on the category vector.
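For instance, applying `length` to the `n_persons` column grouped by `sex`
reproduces the frequencies we saw earlier from `table`:
```
tapply(earn$n_persons, earn$sex, length)
```
```
## Both Sexes        Men      Women
##       1408       1408       1408
```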
The `aggregate` function is closely related to `tapply`. It computes the same
results, but organizes them into a data frame with one row for each category.
In some cases, this format is more convenient. The arguments are the same,
except that the second argument must be a list or data frame rather than a
vector.
As an example, here’s the result of using `aggregate` to compute the
`n_persons` means:
```
aggregate(earn$n_persons, list(earn$sex), mean)
```
```
## Group.1 x
## 1 Both Sexes 24402515
## 2 Men 13643727
## 3 Women 10758771
```
The `lapply`, `sapply`, and `tapply` functions are the three most important
functions in the family of apply functions, but there are many more. You can
learn more about all of R’s apply functions by reading [this StackOverflow
post](https://stackoverflow.com/a/7141669).
3\.5 Exercises
--------------
### 3\.5\.1 Exercise
Count how many rows of the `earn` data have `median_weekly_earn` greater than $700\.
### 3\.5\.2 Exercise
Calculate the median of `median_weekly_earn` for men in 2018\.
### 3\.5\.3 Exercise
1. Adjust the line plot of weekly earnings by quarter so that there is one line per age group.
2. Further adjust this plot so that each age group line is a different color.
#### Learning Objectives
* Describe when to use `[` versus `[[`
* Index data frames to get specific rows, columns, or subsets
* Install and load packages
* Describe the grammar of graphics
* Make a plot
* Save a plot to an image file
* Call a function repeatedly with `sapply` or `lapply`
* Split data into groups and apply a function to each
3\.1 Indexing Data Frames
-------------------------
This section explains how to get and set data in a data frame, expanding on the
indexing techniques you learned in Section [2\.4](data-structures.html#indexing). Under the hood,
every data frame is a list, so first you’ll learn about indexing lists.
### 3\.1\.1 Indexing Lists
Lists are a *container* for other types of R objects. When you select an
element from a list, you can either keep the container (the list) or discard
it. The indexing operator `[` almost always keeps containers.
As an example, let’s get some elements from a small list:
```
x = list(first = c(1, 2, 3), second = sin, third = c("hi", "hello"))
y = x[c(1, 3)]
y
```
```
## $first
## [1] 1 2 3
##
## $third
## [1] "hi" "hello"
```
```
class(y)
```
```
## [1] "list"
```
The result is still a list. Even if we get just one element, the result of
indexing a list with `[` is a list:
```
class(x[1])
```
```
## [1] "list"
```
Sometimes this will be exactly what we want. But what if we want to get the
first element of `x` so that we can use it in a vectorized function? Or in a
function that only accepts numeric arguments? We need to somehow get the
element and discard the container.
The solution to this problem is the *extraction operator* `[[`, which is also
called the *double square bracket operator*. The extraction operator is the
primary way to get and set elements of lists and other containers.
Unlike the indexing operator `[`, the extraction operator always discards the
container:
```
x[[1]]
```
```
## [1] 1 2 3
```
```
class(x[[1]])
```
```
## [1] "numeric"
```
The tradeoff is that the extraction operator can only get or set one element at
a time. Note that the element can be a vector, as above. Because it can only
get or set one element at a time, the extraction operator can only index by
position or name. Blank and logical indexes are not allowed.
The final difference between the index operator `[` and the extraction operator
`[[` has to do with how they handle invalid indexes. The index operator `[`
returns `NA` for invalid vector elements, and `NULL` for invalid list elements:
```
c(1, 2)[10]
```
```
## [1] NA
```
```
x[10]
```
```
## $<NA>
## NULL
```
On the other hand, the extraction operator `[[` raises an error for invalid
elements:
```
x[[10]]
```
```
## Error in x[[10]]: subscript out of bounds
```
The indexing operator `[` and the extraction operator `[[` both work with any
data structure that has elements. However, you’ll generally use the indexing
operator `[` to index vectors, and the extraction operator `[[` to index
containers (such as lists).
### 3\.1\.2 Two\-dimensional Indexing
For two\-dimensional objects, like matrices and data frames, you can pass the
indexing operator `[` or the extraction operator `[[` a separate index for each
dimension. The rows come first:
```
DATA[ROWS, COLUMNS]
```
For instance, let’s get the first 3 rows and all columns of the earnings data:
```
earn[1:3, ]
```
```
## sex race ethnic_origin age year quarter n_persons
## 1 Both Sexes All Races All Origins 16 years and over 2010 1 96821000
## 2 Both Sexes All Races All Origins 16 years and over 2010 2 99798000
## 3 Both Sexes All Races All Origins 16 years and over 2010 3 101385000
## median_weekly_earn
## 1 754
## 2 740
## 3 740
```
As we saw in Section [2\.4\.1](data-structures.html#all-elements), leaving an index blank means all
elements.
As another example, let’s get the 3rd and 5th row, and the 2nd and 4th column:
```
earn[c(3, 5), c(2, 4)]
```
```
## race age
## 3 All Races 16 years and over
## 5 All Races 16 years and over
```
Mixing several different ways of indexing is allowed. So for example, we can
get the same above, but use column names instead of positions:
```
earn[c(3, 5), c("race", "age")]
```
```
## race age
## 3 All Races 16 years and over
## 5 All Races 16 years and over
```
For data frames, it’s especially common to index the rows by condition and the
columns by name. For instance, let’s get the `sex`, `age`, and `n_persons`
columns for all rows that pertain to women:
```
result = earn[earn$sex == "Women", c("sex", "age", "n_persons")]
head(result)
```
```
## sex age n_persons
## 89 Women 16 years and over 43794000
## 90 Women 16 years and over 44562000
## 91 Women 16 years and over 44912000
## 92 Women 16 years and over 44620000
## 93 Women 16 years and over 44077000
## 94 Women 16 years and over 44539000
```
### 3\.1\.3 The `drop` Parameter
If you use two\-dimensional indexing with `[` to select exactly one column, you
get a vector:
```
result = earn[1:3, 2]
class(result)
```
```
## [1] "character"
```
The container is dropped, even though the indexing operator `[` usually keeps
containers. This also occurs for matrices. You can control this behavior with
the `drop` parameter:
```
result = earn[1:3, 2, drop = FALSE]
class(result)
```
```
## [1] "data.frame"
```
The default is `drop = TRUE`.
### 3\.1\.1 Indexing Lists
Lists are a *container* for other types of R objects. When you select an
element from a list, you can either keep the container (the list) or discard
it. The indexing operator `[` almost always keeps containers.
As an example, let’s get some elements from a small list:
```
x = list(first = c(1, 2, 3), second = sin, third = c("hi", "hello"))
y = x[c(1, 3)]
y
```
```
## $first
## [1] 1 2 3
##
## $third
## [1] "hi" "hello"
```
```
class(y)
```
```
## [1] "list"
```
The result is still a list. Even if we get just one element, the result of
indexing a list with `[` is a list:
```
class(x[1])
```
```
## [1] "list"
```
Sometimes this will be exactly what we want. But what if we want to get the
first element of `x` so that we can use it in a vectorized function? Or in a
function that only accepts numeric arguments? We need to somehow get the
element and discard the container.
The solution to this problem is the *extraction operator* `[[`, which is also
called the *double square bracket operator*. The extraction operator is the
primary way to get and set elements of lists and other containers.
Unlike the indexing operator `[`, the extraction operator always discards the
container:
```
x[[1]]
```
```
## [1] 1 2 3
```
```
class(x[[1]])
```
```
## [1] "numeric"
```
The tradeoff is that the extraction operator can only get or set one element at
a time. Note that the element can be a vector, as above. Because it can only
get or set one element at a time, the extraction operator can only index by
position or name. Blank and logical indexes are not allowed.
The final difference between the index operator `[` and the extraction operator
`[[` has to do with how they handle invalid indexes. The index operator `[`
returns `NA` for invalid vector elements, and `NULL` for invalid list elements:
```
c(1, 2)[10]
```
```
## [1] NA
```
```
x[10]
```
```
## $<NA>
## NULL
```
On the other hand, the extraction operator `[[` raises an error for invalid
elements:
```
x[[10]]
```
```
## Error in x[[10]]: subscript out of bounds
```
The indexing operator `[` and the extraction operator `[[` both work with any
data structure that has elements. However, you’ll generally use the indexing
operator `[` to index vectors, and the extraction operator `[[` to index
containers (such as lists).
### 3\.1\.2 Two\-dimensional Indexing
For two\-dimensional objects, like matrices and data frames, you can pass the
indexing operator `[` or the extraction operator `[[` a separate index for each
dimension. The rows come first:
```
DATA[ROWS, COLUMNS]
```
For instance, let’s get the first 3 rows and all columns of the earnings data:
```
earn[1:3, ]
```
```
## sex race ethnic_origin age year quarter n_persons
## 1 Both Sexes All Races All Origins 16 years and over 2010 1 96821000
## 2 Both Sexes All Races All Origins 16 years and over 2010 2 99798000
## 3 Both Sexes All Races All Origins 16 years and over 2010 3 101385000
## median_weekly_earn
## 1 754
## 2 740
## 3 740
```
As we saw in Section [2\.4\.1](data-structures.html#all-elements), leaving an index blank means all
elements.
As another example, let’s get the 3rd and 5th row, and the 2nd and 4th column:
```
earn[c(3, 5), c(2, 4)]
```
```
## race age
## 3 All Races 16 years and over
## 5 All Races 16 years and over
```
Mixing several different ways of indexing is allowed. So for example, we can
get the same above, but use column names instead of positions:
```
earn[c(3, 5), c("race", "age")]
```
```
## race age
## 3 All Races 16 years and over
## 5 All Races 16 years and over
```
For data frames, it’s especially common to index the rows by condition and the
columns by name. For instance, let’s get the `sex`, `age`, and `n_persons`
columns for all rows that pertain to women:
```
result = earn[earn$sex == "Women", c("sex", "age", "n_persons")]
head(result)
```
```
## sex age n_persons
## 89 Women 16 years and over 43794000
## 90 Women 16 years and over 44562000
## 91 Women 16 years and over 44912000
## 92 Women 16 years and over 44620000
## 93 Women 16 years and over 44077000
## 94 Women 16 years and over 44539000
```
### 3\.1\.3 The `drop` Parameter
If you use two\-dimensional indexing with `[` to select exactly one column, you
get a vector:
```
result = earn[1:3, 2]
class(result)
```
```
## [1] "character"
```
The container is dropped, even though the indexing operator `[` usually keeps
containers. This also occurs for matrices. You can control this behavior with
the `drop` parameter:
```
result = earn[1:3, 2, drop = FALSE]
class(result)
```
```
## [1] "data.frame"
```
The default is `drop = TRUE`.
3\.2 Packages
-------------
A *package* is a collection of functions for use in R. Packages usually include
documentation, and can also contain examples, vignettes, and data sets. Most
packages are developed by members of the R community, so quality varies. There
are also a few packages that are built into R but provide extra features. We’ll
use a package in Section [3\.3](exploring-data.html#data-visualization), so we’re learning about
them now.
The [Comprehensive R Archive Network](https://cran.r-project.org/), or CRAN, is the main place people
publish packages. As of writing, there were 18,619 packages posted to CRAN.
This number has been steadily increasing as R has grown in popularity.
Packages span a wide variety of topics and disciplines. There are packages
related to statistics, social sciences, geography, genetics, physics, biology,
pharmacology, economics, agriculture, and more. The best way to find packages
is to search online, but the CRAN website also provides [“task
views”](https://cran.r-project.org/web/views/) if you want to browse popular packages related to a
specific discipline.
The `install.packages` function installs one or more packages from CRAN. Its
first argument is the packages to install, as a character vector.
When you run `install.packages`, R will ask you to choose which *mirror* to
download the package from. A mirror is a web server that has the same set of
files as some other server. Mirrors are used to make downloads faster and to
provide redundancy so that if a server stops working, files are still available
somewhere else. CRAN has dozens of mirrors; you should choose one that’s
geographically nearby, since that usually produces the best download speeds. If
you aren’t sure which mirror to choose, you can use the 0\-Cloud mirror, which
attempts to automatically choose a mirror near you.
As an example, here’s the code to install the remotes package:
```
install.packages("remotes")
```
If you run the code above, you’ll be asked to select a mirror, and then see
output that looks something like this:
```
--- Please select a CRAN mirror for use in this session ---
trying URL 'https://cloud.r-project.org/src/contrib/remotes_2.3.0.tar.gz'
Content type 'application/x-gzip' length 148405 bytes (144 KB)
==================================================
downloaded 144 KB
* installing *source* package ‘remotes’ ...
** package ‘remotes’ successfully unpacked and MD5 sums checked
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (remotes)
The downloaded source packages are in
‘/tmp/Rtmp8t6iGa/downloaded_packages’
```
R goes through a variety of steps to install a package, even installing other
packages that the package depends on. You can tell that a package installation
succeeded by the final line `DONE`. When a package installation fails, R prints
an error message explaining the problem instead.
Once a package is installed, it stays on your computer until you remove it or
remove R. This means you only need to install each package once. However, most
packages are periodically updated. You can reinstall a package using
`install.packages` the same way as above to get the latest version.
Alternatively, you can update all of the R packages you have installed at once
by calling the `update.packages` function. Beware that this may take a long
time if you have a lot of packages installed.
The function to remove packages is `remove.packages`. Like `install.packages`,
this function’s first argument is the packages to remove, as a character
vector.
If you want to see which packages are installed, you can use the
`installed.packages` function. It does not require any arguments. It returns a
matrix with one row for each package and columns that contain a variety of
information. Here’s an example:
```
packages = installed.packages()
# Just print the version numbers for 10 packages.
packages[1:10, "Version"]
```
```
## base64enc bookdown bslib cachem cli colorspace cpp11
## "0.1-3" "0.29" "0.4.0" "1.0.6" "3.4.0" "2.0-3" "0.4.2"
## digest evaluate fansi
## "0.6.29" "0.16" "1.0.3"
```
You’ll see a different set of packages, since you have a different computer.
Before you can use the functions (or other resources) in an installed package,
you must load the package with the `library` function. R doesn’t load packages
automatically because each package you load uses memory and may conflict with
other packages. Thus you should only load the packages you need for whatever
it is that you want to do. When you restart R, the loaded packages are cleared
and you must again load any packages you want to use.
Let’s load the remotes package we installed earlier:
```
library("remotes")
```
The `library` function works with or without quotes around the package name, so
you may also see people write things like `library(remotes)`. We recommend
using quotes to make it unambiguous that you are not referring to a variable.
A handful of packages print out a message when loaded, but the vast majority do
not. Thus you can assume the call to `library` was successful if nothing is
printed. If something goes wrong while loading a package, R will print out an
error message explaining the problem.
Finally, not all R packages are published to CRAN. [GitHub](https://github.com/) is another
popular place to publish R packages, especially ones that are experimental or
still in development. Unlike CRAN, GitHub is a general\-purpose website for
publishing code written in any programming language, so it hosts much more
than just R packages.
The remotes package that we just installed and loaded provides functions to
install packages from GitHub. It is generally better to install packages from
CRAN when they are available there, since the versions on CRAN tend to be more
stable and intended for a wide audience. However, if you want to install a
package from GitHub, you can learn more about the remotes package by reading
its [online documentation](https://remotes.r-lib.org/).
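As a sketch, installing a package from GitHub with remotes generally looks like the following, where `"user/repo"` is a placeholder for the actual GitHub account and repository names:
```
# Install a package from a GitHub repository (hypothetical repository shown).
# install_github("user/repo")
```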
3\.3 Data Visualization
-----------------------
There are three popular systems for creating visualizations in R:
1. The base R functions (primarily the `plot` function)
2. The lattice package
3. The ggplot2 package
These three systems are not interoperable! Consequently, it’s best to choose
one to use exclusively. Compared to base R, both lattice and ggplot2 are better
at handling grouped data and generally require less code to create a
nice\-looking visualization.
The ggplot2 package is so popular that there are now knockoff packages for
other data\-science\-oriented programming languages like Python and Julia. The
package is also part of the [*Tidyverse*](https://www.tidyverse.org/), a popular collection of R
packages designed to work well together. Because of these advantages, we’ll use
ggplot2 for visualizations in this and all future lessons.
ggplot2 has detailed [documentation](https://ggplot2.tidyverse.org/) and also a
[cheatsheet](https://github.com/rstudio/cheatsheets/blob/master/data-visualization-2.1.pdf).
The “gg” in ggplot2 stands for *grammar of graphics*. The idea of a grammar of
graphics is that visualizations can be built up in layers. In ggplot2, the
three layers every plot must have are:
* Data
* Geometry
* Aesthetics
There are also several optional layers. Here are a few:
| Layer | Description |
| --- | --- |
| scales | Title, label, and axis value settings |
| facets | Side\-by\-side plots |
| guides | Axis and legend position settings |
| annotations | Shapes that are not mapped to data |
| coordinates | Coordinate systems (Cartesian, logarithmic, polar) |
As an example, let’s plot the earnings data. First, we need to load ggplot2\. As
always, if this is your first time using the package, you’ll have to install
it. Then you can load the package:
```
# install.packages("ggplot2")
library("ggplot2")
```
What kind of plot should we make? It depends on what data we want the plot to
show. Let’s make a line plot that shows median earnings for each quarter in
2019, with separate lines for men and women.
Before plotting, we need to take a subset of the earnings that only contains
information for 2019:
```
earn19 = earn[earn$year == 2019, ]
```
The data is also broken down across `race`, `ethnic_origin`, and `age`. Since
we aren’t interested in these categories for the plot, we need to further
subset the data:
```
earn19 = earn19[earn19$race == "All Races" &
earn19$ethnic_origin == "All Origins" &
earn19$age == "16 years and over", ]
```
Now we’re ready to make the plot.
### 3\.3\.1 Layer 1: Data
The data layer determines the data set used to make the plot. ggplot and most
other Tidyverse packages are designed for working with *tidy* data frames. Tidy
means:
1. Each observation has its own row.
2. Each feature has its own column.
3. Each value has its own cell.
Tidy data sets are convenient in general. A later lesson will cover how to make
an untidy data set tidy. Until then, we’ll take it for granted that the data
sets we work with are tidy.
To set up the data layer, call the `ggplot` function on a data frame:
```
ggplot(earn19)
```
This returns a blank plot. We still need to add a few more layers.
### 3\.3\.2 Layer 2: Geometry
The **geom**etry layer determines the shape or appearance of the visual
elements of the plot. In other words, the geometry layer determines what kind
of plot to make: one with points, lines, boxes, or something else.
There are many different geometries available in ggplot2\. The package provides
a function for each geometry, always prefixed with `geom_`.
To add a geometry layer to the plot, choose the `geom_` function you want and
add it to the plot with the `+` operator:
```
ggplot(earn19) + geom_line()
```
```
## Error in `check_required_aesthetics()`:
## ! geom_line requires the following missing aesthetics: x and y
```
This returns an error message that we’re missing aesthetics `x` and `y`. We’ll
learn more about aesthetics in the next section, but this error message is
especially helpful: it tells us exactly what we’re missing. When you use a
geometry you’re unfamiliar with, it can be helpful to run the code for just the
data and geometry layer like this, to see exactly which aesthetics need to be
set.
As we’ll see later, it’s possible to add multiple geometries to a plot.
### 3\.3\.3 Layer 3: Aesthetics
The **aes**thetic layer determines the relationship between the data and the
geometry. Use the aesthetic layer to map features in the data to **aesthetics**
(visual elements) of the geometry.
The `aes` function creates an aesthetic layer. The syntax is:
```
aes(AESTHETIC = FEATURE, ...)
```
The names of the aesthetics depend on the geometry, but some common ones are
`x`, `y`, `color`, `fill`, `shape`, and `size`. There is more information about
and examples of aesthetic names in the documentation.
For example, we want to put `quarter` on the x\-axis and `median_weekly_earn` on
the y\-axis. We also want to use a separate line style for each `sex` category.
So the aesthetic layer should be:
```
aes(x = quarter, y = median_weekly_earn, linetype = sex)
```
In the `aes` function, column names are never quoted.
Unlike most layers, the aesthetic layer is not added to the plot with the `+`
operator. Instead, you can pass the aesthetic layer as the second argument to
the `ggplot` function:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex) +
geom_line()
```
If you want to set an aesthetic to a constant value, rather than one that’s
data dependent, do so *outside* of the aesthetic layer. For instance, suppose
we want to make the lines blue:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex) +
geom_line(color = "blue")
```
If you set an aesthetic to a constant value inside of the aesthetic layer, the
results may not be what you expect. ggplot2 treats the constant as a data value
to map, so every line gets the same default color and the legend shows the
literal label `"blue"`:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex, color = "blue") +
geom_line()
```
### 3\.3\.4 Layer 4: Scales
The scales layer controls the title, axis labels, and axis scales of the plot.
Most of the functions in the scales layer are prefixed with `scale_`, but not
all of them.
The `labs` function is especially important, because it’s used to set the title
and axis labels:
```
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn, linetype = sex) +
geom_line() +
labs(x = "Quarter", y = "Median Weekly Salary (USD)",
title = "2019 Median Weekly Salaries, by Sex", linetype = "Sex")
```
### 3\.3\.5 Saving Plots
In ggplot2, use the `ggsave` function to save the most recent plot you created:
```
ggsave("line.png")
```
The file format is selected automatically based on the extension. Common
formats are PNG and PDF.
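For example, to save the same plot as a PDF with specific dimensions (in inches), you could write:
```
# Save the most recent plot as a 6 x 4 inch PDF.
ggsave("line.pdf", width = 6, height = 4)
```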
#### The Plot Device
You can also save a plot with one of R’s “plot device” functions. The steps
are:
1. Call a plot device function: `png`, `jpeg`, `pdf`, `bmp`, `tiff`, or `svg`.
2. Run your code to make the plot.
3. Call `dev.off` to indicate that you’re done plotting.
This strategy works with any of R’s graphics systems (not just ggplot2\).
Here’s an example:
```
# Run these lines in the console, not the notebook!
jpeg("line.jpeg")
ggplot(earn19) +
aes(x = quarter, y = median_weekly_earn) +
geom_point()
dev.off()
```
### 3\.3\.6 Example: Bar Plot
Let’s say we want to plot the number of persons for each sex, again using the
earnings data set. A bar plot is an appropriate way to represent this visually.
The geometry for a bar plot is `geom_bar`. Since bar plots are mainly used to
display frequencies, the `geom_bar` function automatically computes frequencies
when the mapped feature is categorical.
The `n_persons` feature is not categorical, so we don’t need `geom_bar` to
compute frequencies. To prevent `geom_bar` from computing frequencies
automatically, set `stat = "identity"`.
Here’s the code to make the bar plot:
```
ggplot(earn19) +
aes(x = quarter, y = n_persons, fill = sex) +
geom_bar(stat = "identity", position = "dodge") +
labs(x = "Quarter", y = "Number of Workers", fill = "Sex",
title = "Number of Workers by Quarter and Sex in 2019")
```
The setting `position = "dodge"` instructs `geom_bar` to put the bars
side\-by\-side rather than stacking them.
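For comparison, leaving out `position = "dodge"` falls back to the default, `position = "stack"`, which stacks the bars on top of one another:
```
ggplot(earn19) +
  aes(x = quarter, y = n_persons, fill = sex) +
  geom_bar(stat = "identity")
```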
### 3\.3\.7 Visualization Design
Designing high\-quality visualizations goes beyond just mastering which R
functions to call. You also need to think carefully about what kind of data you
have and what message you want to convey. This section provides a few
guidelines.
The first step in data visualization is choosing an appropriate kind of plot.
Here are some suggestions (not rules):
| Feature 1 | Feature 2 | Plot |
| --- | --- | --- |
| categorical | | bar, dot |
| categorical | categorical | bar, dot, mosaic |
| numerical | | box, density, histogram |
| numerical | categorical | box, density, ridge |
| numerical | numerical | line, scatter, smooth scatter |
If you want to add a:
* 3rd numerical feature, use it to change point/line sizes.
* 3rd categorical feature, use it to change point/line styles.
* 4th categorical feature, use side\-by\-side plots.
Once you’ve selected a plot, here are some rules you should almost always
follow:
* Always add a title and axis labels. These should be in plain English, not
variable names!
* Specify units after the axis label if the axis has units. For instance,
“Height (ft)”.
* Don’t forget that many people are colorblind! Also, plots are often printed
in black and white. Use point and line styles to distinguish groups; color is
optional.
* Add a legend whenever you’ve used more than one point or line style.
* Always write a few sentences explaining what the plot reveals. Don’t
describe the plot, because the reader can just look at it. Instead,
explain what they can learn from the plot and point out important details
that are easily overlooked.
* Sometimes points get plotted on top of each other. This is called
*overplotting*. Plots with a lot of overplotting can be hard to read and can
even misrepresent the data by hiding how many points are present. Use a
two\-dimensional density plot or jitter the points to deal with overplotting.
* For side\-by\-side plots, use the same axis scales for both plots so that
comparing them is not deceptive.
Visualization design is a deep topic, and whole books have been written about
it. One resource where you can learn more is DataLab’s [Principles of Data
Visualization Workshop Reader](https://ucdavisdatalab.github.io/workshop_data_viz_principles/).
3\.4 Apply Functions
--------------------
Section [2\.1\.3](data-structures.html#vectorization) introduced vectorization, a convenient and
efficient way to compute multiple results. That section also mentioned that
some of R’s functions—the ones that summarize or aggregate data—are not
vectorized.
The `class` function is an example of a function that’s not vectorized. If we
call the `class` function on the earnings data set, we get just one result for
the data set as a whole:
```
class(earn)
```
```
## [1] "data.frame"
```
What if we want to get the class of each column? We can get the class for a
single column by selecting the column with `$`, the dollar sign operator:
```
class(earn$age)
```
```
## [1] "character"
```
But what if we want the classes for all the columns? We could write a call to
`class` for each column, but that would be tedious. When you’re working with a
programming language, you should try to avoid tedium; there’s usually a better,
more automated way.
Section [2\.2\.1](data-structures.html#lists) pointed out that data frames are technically lists, where
each column is one element. With that in mind, what we need here is a line of
code that calls `class` on each element of the data frame. The idea is similar
to vectorization, but since we have a list and a non\-vectorized function, we
have to do a bit more than just call `class(earn)`.
The `lapply` function calls, or *applies*, a function on each element of a list
or vector. The syntax is:
```
lapply(X, FUN, ...)
```
The function `FUN` is called once for each element of `X`, with the element as
the first argument. The `...` is for additional arguments to `FUN`, which are
held constant across all the elements.
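As a small sketch of how `...` works, here's `lapply` passing a constant `digits` argument to `round` for each element of a list:
```
lapply(list(3.14159, 2.71828), round, digits = 2)
# Returns a list containing 3.14 and 2.72.
```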
Let’s try this out with the earnings data and the `class` function:
```
lapply(earn, class)
```
```
## $sex
## [1] "character"
##
## $race
## [1] "character"
##
## $ethnic_origin
## [1] "character"
##
## $age
## [1] "character"
##
## $year
## [1] "integer"
##
## $quarter
## [1] "integer"
##
## $n_persons
## [1] "integer"
##
## $median_weekly_earn
## [1] "integer"
```
The result is similar to what we'd get if the `class` function were vectorized. In fact, if we
use a vector and a vectorized function with `lapply`, the result is nearly
identical to the result from vectorization:
```
x = c(1, 2, pi)
sin(x)
```
```
## [1] 8.414710e-01 9.092974e-01 1.224647e-16
```
```
lapply(x, sin)
```
```
## [[1]]
## [1] 0.841471
##
## [[2]]
## [1] 0.9092974
##
## [[3]]
## [1] 1.224647e-16
```
The only difference is that the result from `lapply` is a list. In fact, the
`lapply` function always returns a list with one element for each element of
the input data. The “l” in `lapply` stands for “list”.
The `lapply` function is one member of a family of functions called *apply
functions*. All of the apply functions provide ways to apply a function
repeatedly to different parts of a data structure. We’ll meet a few more apply
functions soon.
When you have a choice between using vectorization or an apply function, you
should always choose vectorization. Vectorization is clearer—compare the two
lines of code above—and it’s also significantly more efficient. In fact,
vectorization is the most efficient way to call a function repeatedly in R.
As we saw with the `class` function, there are some situations where
vectorization is not possible. That’s when you should think about using an
apply function.
### 3\.4\.1 The `sapply` Function
The related `sapply` function calls a function on each element of a list or
vector, and simplifies the result. That last part is the crucial difference
compared to `lapply`. When results from the calls all have the same type and
length, `sapply` returns a vector or matrix instead of a list. When the results
have different types or lengths, the result is the same as for `lapply`. The
“s” in `sapply` stands for “simplify”.
For instance, if we use `sapply` to find the classes of the columns in the
earnings data, we get a character vector:
```
sapply(earn, class)
```
```
## sex race ethnic_origin age
## "character" "character" "character" "character"
## year quarter n_persons median_weekly_earn
## "integer" "integer" "integer" "integer"
```
Likewise, if we use `sapply` to compute the `sin` values, we get a numeric
vector, the same as from vectorization:
```
sapply(x, sin)
```
```
## [1] 8.414710e-01 9.092974e-01 1.224647e-16
```
In spite of that, vectorization is still more efficient than `sapply`, so use
vectorization instead when possible.
Apply functions are incredibly useful for summarizing data. For example,
suppose we want to compute the frequencies for all of the columns in the
earnings data set that aren’t numeric.
First, we need to identify the columns. One way to do this is with the
`is.numeric` function. Despite the name, this function actually tests whether
its argument contains real numbers, not whether its argument has class `numeric`.
In other words, it also returns `TRUE` for integer vectors. We can use `sapply` to
apply this function to all of the columns in the earnings data set:
```
is_not_number = !sapply(earn, is.numeric)
is_not_number
```
```
## sex race ethnic_origin age
## TRUE TRUE TRUE TRUE
## year quarter n_persons median_weekly_earn
## FALSE FALSE FALSE FALSE
```
Is it worth using R code to identify the non\-numeric columns? Since there are
only 8 columns in the earnings data set, maybe not. But if the data set was
larger, with say 100 columns, it definitely would be.
In general, it’s a good habit to use R to do things rather than do them
manually. You’ll get more practice programming, and your code will be more
flexible if you want to adapt it to other data sets.
Now that we know which columns are non\-numeric, we can use the `table` function
to compute frequencies. We only want to compute frequencies for those columns,
so we need to subset the data:
```
lapply(earn[, is_not_number], table)
```
```
## $sex
##
## Both Sexes Men Women
## 1408 1408 1408
##
## $race
##
## All Races Asian Black or African American
## 2244 660 660
## White
## 660
##
## $ethnic_origin
##
## All Origins Hispanic or Latino
## 3564 660
##
## $age
##
## 16 to 19 years 16 to 24 years 16 years and over 20 to 24 years
## 132 660 660 132
## 25 to 34 years 25 to 54 years 25 years and over 35 to 44 years
## 132 660 660 132
## 45 to 54 years 55 to 64 years 55 years and over 65 years and over
## 132 132 660 132
```
We use `lapply` rather than `sapply` for this step because the table for each
column will have a different length (but try `sapply` and see what happens!).
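Here's a minimal illustration of that fallback: when the results have different lengths, `sapply` can't simplify, so it returns a list, just like `lapply`:
```
sapply(c(2, 3), seq)
# Returns a list: the first element is 1 2, the second is 1 2 3.
```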
### 3\.4\.2 The Split\-Apply Pattern
In a data set with categorical features, it’s often useful to compute something
for each category. The `lapply` and `sapply` functions can compute something
for each element of a data structure, but categories are not necessarily
elements.
For example, the earnings data set has three different categories in the `sex`
column. If we want all of the rows in one category, one way to get them is by
indexing:
```
women = earn[earn$sex == "Women", ]
head(women)
```
```
## sex race ethnic_origin age year quarter n_persons
## 89 Women All Races All Origins 16 years and over 2010 1 43794000
## 90 Women All Races All Origins 16 years and over 2010 2 44562000
## 91 Women All Races All Origins 16 years and over 2010 3 44912000
## 92 Women All Races All Origins 16 years and over 2010 4 44620000
## 93 Women All Races All Origins 16 years and over 2011 1 44077000
## 94 Women All Races All Origins 16 years and over 2011 2 44539000
## median_weekly_earn
## 89 665
## 90 672
## 91 662
## 92 679
## 93 683
## 94 689
```
To get all three categories, we’d have to do this three times. If we want to
compute something for each category, say the mean of the `n_persons` column, we
also have to repeat that computation three times. Here’s what it would look
like for just the `women` category:
```
mean(women$n_persons)
```
```
## [1] 10758771
```
If the categories were elements, we could avoid writing code to index each
category, and just use the `sapply` (or `lapply`) function to apply the `mean`
function to each.
The `split` function splits a vector or data frame into groups based on a
vector of categories. The first argument to `split` is the data, and the
second argument is a congruent vector of categories.
We can use `split` to elegantly compute means of `n_persons` broken down by
sex. First, we split the data by category. Since we only want to compute on the
`n_persons` column, we only split that column:
```
by_sex = split(earn$n_persons, earn$sex)
class(by_sex)
```
```
## [1] "list"
```
```
names(by_sex)
```
```
## [1] "Both Sexes" "Men" "Women"
```
The result from `split` is a list with one element for each category. The
individual elements contain pieces of the original `n_persons` column:
```
head(by_sex$Women)
```
```
## [1] 43794000 44562000 44912000 44620000 44077000 44539000
```
Since the categories are elements in the split data, now we can use `sapply`
the same way we did in previous examples:
```
sapply(by_sex, mean)
```
```
## Both Sexes Men Women
## 24402515 13643727 10758771
```
This two\-step process is an R idiom called the *split\-apply pattern*. First you
use `split` to convert categories into list elements, then you use an apply
function to compute something on each category. Any time you want to compute
results by category, you should think of this pattern.
The split\-apply pattern is so useful that R provides the `tapply` function as a
shortcut. The `tapply` function is equivalent to calling `split` and then
`sapply`. Like `split`, the first argument is the data and the second argument
is a congruent vector of categories. The third argument is a function to apply,
like the function argument in `sapply`.
We can use `tapply` to compute the `n_persons` means by `sex` for the earnings
data:
```
tapply(earn$n_persons, earn$sex, mean)
```
```
## Both Sexes Men Women
## 24402515 13643727 10758771
```
Notice that the result is identical to the one we computed before.
The “t” in `tapply` stands for “table”, because the `tapply` function is a
generalization of the `table` function. If you use `length` as the third
argument to `tapply`, you get the same results as you would from using the
`table` function on the category vector.
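For instance, these two calls should produce the same counts for the earnings data:
```
tapply(earn$sex, earn$sex, length)
table(earn$sex)
# Both report 1408 rows for each of the three sex categories.
```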
The `aggregate` function is closely related to `tapply`. It computes the same
results, but organizes them into a data frame with one row for each category.
In some cases, this format is more convenient. The arguments are the same,
except that the second argument must be a list or data frame rather than a
vector.
As an example, here’s the result of using `aggregate` to compute the
`n_persons` means:
```
aggregate(earn$n_persons, list(earn$sex), mean)
```
```
## Group.1 x
## 1 Both Sexes 24402515
## 2 Men 13643727
## 3 Women 10758771
```
The `lapply`, `sapply`, and `tapply` functions are the three most important
functions in the family of apply functions, but there are many more. You can
learn more about all of R’s apply functions by reading [this StackOverflow
post](https://stackoverflow.com/a/7141669).
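As one more example from that family, the `mapply` function applies a function to corresponding elements of several vectors:
```
# Repeat 1 three times, 2 twice, and 3 once.
mapply(rep, 1:3, 3:1)
# Returns a list: 1 1 1, then 2 2, then 3.
```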
3\.5 Exercises
--------------
### 3\.5\.1 Exercise
Count how many rows of the `earn` data have `median_weekly_earn` greater than $700\.
### 3\.5\.2 Exercise
Calculate the median of `median_weekly_earn` for men in 2018\.
### 3\.5\.3 Exercise
1. Adjust the line plot of weekly earnings by quarter so that there is one line per age group.
2. Further adjust this plot so that each age group line is a different color.
| R Programming |
ucdavisdatalab.github.io | https://ucdavisdatalab.github.io/workshop_r_basics/automating-tasks.html |
4 Automating Tasks
==================
By now, you’ve learned all of the basic skills necessary to explore a data set
in R. The focus of this chapter is how to organize your code so that it’s
concise, clear, and easy to automate. This will help you and your collaborators
avoid tedious, redundant work, reproduce results efficiently, and run code in
specialized environments for scientific computing, such as high\-performance
computing clusters.
#### Learning Objectives
* Create code that only runs when a condition is satisfied
* Create custom functions in order to organize and reuse code
* Run code repeatedly in a for\-loop
* Describe the different types of loops and how to choose between them
4\.1 Conditional Expressions
----------------------------
Sometimes you’ll need code to do different things, depending on a condition.
*If\-statements* provide a way to write conditional code.
For example, suppose we want to greet one person differently from the others:
```
name = "Nick"
if (name == "Nick") {
# If name is Nick:
message("We went down the TRUE branch")
msg = "Hi Nick, nice to see you again!"
} else {
# Anything else:
msg = "Nice to meet you!"
}
```
```
## We went down the TRUE branch
```
Indent code inside of the if\-statement by 2 or 4 spaces. Indentation makes your
code easier to read.
The condition in an if\-statement has to be a scalar:
```
name = c("Nick", "Susan")
if (name == "Nick") {
msg = "Hi Nick!"
} else {
msg = "Nice to meet you!"
}
```
```
## Error in if (name == "Nick") {: the condition has length > 1
```
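If you do have a vector, one fix is to collapse it to a scalar with `any` or `all`, depending on the logic you want:
```
name = c("Nick", "Susan")
if (any(name == "Nick")) {
  msg = "Hi Nick!"
} else {
  msg = "Nice to meet you!"
}
msg
# "Hi Nick!"
```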
You can chain together if\-statements:
```
name = "Susan"
if (name == "Nick") {
msg = "Hi Nick, nice to see you again!"
} else if (name == "Peter") {
msg = "Go away Peter, I'm busy!"
} else {
msg = "Nice to meet you!"
}
msg
```
```
## [1] "Nice to meet you!"
```
If\-statements return the value of the last expression in the evaluated block:
```
name = "Tom"
msg = if (name == "Nick") {
"Hi Nick, nice to see you again!"
} else {
"Nice to meet you!"
}
msg
```
```
## [1] "Nice to meet you!"
```
Curly braces `{ }` are optional for single\-line expressions:
```
name = "Nick"
msg = if (name == "Nick") "Hi Nick, nice to see you again!" else
"Nice to meet you!"
msg
```
```
## [1] "Hi Nick, nice to see you again!"
```
But you have to be careful if you don’t use them:
```
# NO GOOD:
msg = if (name == "Nick")
"Hi Nick, nice to see you again!"
else
"Nice to meet you!"
```
```
## Error: <text>:4:1: unexpected 'else'
## 3: "Hi Nick, nice to see you again!"
## 4: else
## ^
```
The `else` block is optional:
```
msg = "Hi"
name = "Tom"
if (name == "Nick")
msg = "Hi Nick, nice to see you again!"
msg
```
```
## [1] "Hi"
```
When there's no `else` block and the condition is `FALSE`, the if\-statement returns `NULL`:
```
name = "Tom"
msg = if (name == "Nick")
"Hi Nick, nice to see you again!"
msg
```
```
## NULL
```
4\.2 Functions
--------------
The main way to interact with R is by calling functions, which was first
explained way back in Section [1\.2\.4](getting-started.html#calling-functions). Since then, you’ve
learned how to use many of R’s built\-in functions. This section explains how
you can write your own functions.
To start, let’s briefly review what functions are, and some of the jargon
associated with them. It’s useful to think of functions as factories: raw
materials (inputs) go in, products (outputs) come out. We can also represent
this visually:
Programmers use several specific terms to describe the parts and usage of
functions:
* *Parameters* are placeholder variables for inputs.
+ *Arguments* are the actual values assigned to the parameters in a call.
* The *return value* is the output.
* The *body* is the code inside.
* *Calling* a function means using a function to compute something.
Almost every command in R is a function, even the arithmetic operators and the
parentheses! You can view the body of a function by typing its name without
trailing parentheses (in contrast to how you call functions). The body of a
function is usually surrounded by curly braces `{}`, although they’re optional
if the body only contains one line of code. Indenting code inside of curly
braces by 2\-4 spaces also helps make it visually distinct from other code.
For example, let’s look at the body of the `append` function, which appends a
value to the end of a list or vector:
```
append
```
```
## function (x, values, after = length(x))
## {
## lengx <- length(x)
## if (!after)
## c(values, x)
## else if (after >= lengx)
## c(x, values)
## else c(x[1L:after], values, x[(after + 1L):lengx])
## }
## <bytecode: 0x5612956fee98>
## <environment: namespace:base>
```
Don’t worry if you can’t understand everything the `append` function’s code
does yet. It will make more sense later on, after you’ve written a few
functions of your own.
Many of R’s built\-in functions are not entirely written in R code. You can spot
these by calls to the special `.Primitive` or `.Internal` functions in their
code.
For instance, the `sum` function is not written in R code:
```
sum
```
```
## function (..., na.rm = FALSE) .Primitive("sum")
```
The `function` keyword creates a new function. Here’s the syntax:
```
function(parameter1, parameter2, ...) {
# Your code goes here
# The result goes here
}
```
A function can have any number of parameters, and will automatically return the
value of the last line of its body.
A function is a value, and like any other value, if you want to reuse it, you
need to assign it to a variable. Choosing descriptive variable names is a good
habit. For functions, that means choosing a name that describes what the
function does. It often makes sense to use verbs in function names.
Let’s write a function that gets the largest values in a vector. The inputs or
arguments to the function will be the vector in question and also the number of
values to get. Let’s call these `vec` and `n`, respectively. The result will be
a vector of the `n` largest elements. Here’s one way to write the function:
```
get_largest = function(vec, n) {
sorted = sort(vec, decreasing = TRUE)
head(sorted, n)
}
```
The name of the function, `get_largest`, describes what the function does and
includes a verb. If this function will be used frequently, a shorter name, such
as `largest`, might be preferable (compare to the `head` function).
Any time you write a function, the first thing you should do afterwards is test
that it actually works. Let’s try the `get_largest` function on a few test
cases:
```
x = c(1, 10, 20, -3)
get_largest(x, 2)
```
```
## [1] 20 10
```
```
get_largest(x, 3)
```
```
## [1] 20 10 1
```
```
y = c(-1, -2, -3)
get_largest(y, 2)
```
```
## [1] -1 -2
```
```
z = c("d", "a", "t", "a", "l", "a", "b")
get_largest(z, 3)
```
```
## [1] "t" "l" "d"
```
Notice that the parameters `vec` and `n` inside the function do not exist as
variables outside of the function:
```
vec
```
```
## Error in eval(expr, envir, enclos): object 'vec' not found
```
In general, R keeps parameters and variables you define inside of a function
separate from variables you define outside of a function. You can read more
about the specific rules for how R searches for variables in Section
[5\.2](appendix.html#variable-scope-lookup).
As a function for quickly summarizing data, `get_largest` would be more
convenient if the parameter `n` for the number of values to return was optional
(again, compare to the `head` function). You can make the parameter `n`
optional by setting a *default argument*: an argument assigned to the parameter
if no argument is assigned in the call to the function. You can use `=` to
assign default arguments to parameters when you define a function with the
`function` keyword. Here’s a new definition of the function with the default `n = 5`:
```
get_largest = function(vec, n = 5) {
sorted = sort(vec, decreasing = TRUE)
head(sorted, n)
}
```
After making this change, it’s a good idea to test the function again:
```
get_largest(x)
```
```
## [1] 20 10 1 -3
```
```
get_largest(y)
```
```
## [1] -1 -2 -3
```
```
get_largest(z)
```
```
## [1] "t" "l" "d" "b" "a"
```
### 4\.2\.1 Returning Values
We’ve already seen that a function will automatically return the value of its
last line.
The `return` keyword causes a function to return a result immediately, without
running any subsequent code in its body. It only makes sense to use `return`
from inside of an if\-statement. If your function doesn’t have any
if\-statements, you don’t need to use `return`.
For example, suppose you want the `get_largest` function to immediately return
`NULL` if the argument for `vec` is a list. Here’s the code, along with some
test cases:
```
get_largest = function(vec, n = 5) {
if (is.list(vec))
return(NULL)
sorted = sort(vec, decreasing = TRUE)
head(sorted, n)
}
get_largest(x)
```
```
## [1] 20 10 1 -3
```
```
get_largest(z)
```
```
## [1] "t" "l" "d" "b" "a"
```
```
get_largest(list(1, 2))
```
```
## NULL
```
Alternatively, you could make the function raise an error by calling the `stop`
function. Whether it makes more sense to return `NULL` or print an error
depends on how you plan to use the `get_largest` function.
Notice that the last line of the `get_largest` function still doesn’t use the
`return` keyword. It’s idiomatic to only use `return` when strictly necessary.
A function returns one R object, but sometimes computations have multiple
results. In that case, return the results in a vector, list, or other data
structure.
For example, let’s make a function that computes the mean and median for a
vector. We’ll return the results in a named list, although we could also use a
named vector:
```
compute_mean_med = function(x) {
m1 = mean(x)
m2 = median(x)
list(mean = m1, median = m2)
}
compute_mean_med(c(1, 2, 3, 1))
```
```
## $mean
## [1] 1.75
##
## $median
## [1] 1.5
```
The names make the result easier to understand for the caller of the function,
although they certainly aren’t required here.
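The caller can then extract each result by name:
```
results = compute_mean_med(c(1, 2, 3, 1))
results$mean
# 1.75
results$median
# 1.5
```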
### 4\.2\.2 Planning Your Functions
Before you write a function, it’s useful to go through several steps:
1. Write down what you want to do, in detail. It can also help to
draw a picture of what needs to happen.
2. Check whether there’s already a built\-in function. Search online and in the
R documentation.
3. Write the code to handle a simple case first. For data science
problems, use a small dataset at this step.
Let’s apply this in one final example: a function that detects leap years. A
year is a leap year if either of these conditions is true:
* It is divisible by 4 and not 100
* It is divisible by 400
That means the years 2004 and 2000 are leap years, but the year 2200 is not.
Here’s the code and a few test cases:
```
# If year is divisible by 4 and not 100 -> leap
# If year is divisible by 400 -> leap
year = 2004
is_leap = function(year) {
if (year %% 4 == 0 & year %% 100 != 0) {
leap = TRUE
} else if (year %% 400 == 0) {
leap = TRUE
} else {
leap = FALSE
}
leap
}
is_leap(400)
```
```
## [1] TRUE
```
```
is_leap(1997)
```
```
## [1] FALSE
```
Functions are the building blocks for solving larger problems. Take a
divide\-and\-conquer approach, breaking large problems into smaller steps. Use a
short function for each step. This approach makes it easier to:
* Test that each step works correctly.
* Modify, reuse, or repurpose a step.
4\.3 Loops
----------
One major benefit of using a programming language like R is that repetitive
tasks can be automated. We’ve already seen two ways to do this:
1. Vectorization, introduced in Section [2\.1\.3](data-structures.html#vectorization)
2. Apply functions, introduced in Section [3\.4](exploring-data.html#apply-functions)
Both of these are *iteration strategies*. They *iterate* over some object, and
compute something for each element. Each one of these computations is one
*iteration*. Vectorization is the most efficient iteration strategy, but only
works with vectorized functions and vectors. Apply functions are more
flexible—they work with any function and any data structure with
elements—but less efficient and less concise.
A *loop* is another iteration strategy, one that’s even more flexible than
apply functions. Besides being flexible, loops are a feature of almost all
modern programming languages, so it’s useful to understand them. In R, there
are two kinds of loops. We’ll learn both.
### 4\.3\.1 For\-loops
A *for\-loop* runs a block of code once for each element of a vector or list.
The `for` keyword creates a for\-loop. Here’s the syntax:
```
for (I in DATA) {
# Your code goes here
}
```
The variable `I` is called the *induction variable*. At the beginning of each
iteration, `I` is assigned the next element of the vector or list `DATA`. The
loop iterates once for each element of `DATA`, unless you use a keyword to exit
the loop early (more about this in Section [4\.3\.4](automating-tasks.html#break-next)). As with
if\-statements and functions, the curly braces `{ }` are only required if the
body contains multiple lines of code.
Unlike the other iteration strategies, loops do not automatically return a
result. You have complete control over the output, which means that anything
you want to save must be assigned to a variable.
For example, let’s make a loop that repeatedly adds a number to a running total
and squares the new total. We’ll use a variable `total` to keep track of the
running total as the loop iterates:
```
numbers = c(-1, 1, -3, 2)
total = 0
for (number in numbers) {
total = (total + number)^2
}
total
```
```
## [1] 9
```
Use for\-loops when some or all of the iterations depend on results from other
iterations. If the iterations are not dependent, use one of:
1. Vectorization (because it’s faster)
2. Apply functions (because they’re idiomatic)
In some cases, you can use vectorization even when the iterations are
dependent. For example, you can use vectorization to compute the sum of the
cubes of several numbers:
```
sum(numbers^3)
```
```
## [1] -19
```
### 4\.3\.2 While\-loops
A *while\-loop* runs a block of code repeatedly as long as some condition is
`TRUE`. The `while` keyword creates a while\-loop. Here’s the syntax:
```
while (CONDITION) {
# Your code goes here
}
```
The `CONDITION` should be a scalar logical value or an expression that returns
one. At the beginning of each iteration, `CONDITION` is checked, and the loop
exits if it is `FALSE`. As always, the curly braces `{ }` are only required if
the body contains multiple lines of code.
For example, suppose you want to add up numbers from 0 to 50, but stop as soon
as the total is greater than 50:
```
num50 = seq(0, 50)
total = 0
i = 1
while (total < 50) {
total = total + num50[i]
message("i is ", i, " total is ", total)
i = i + 1
}
```
```
## i is 1 total is 0
```
```
## i is 2 total is 1
```
```
## i is 3 total is 3
```
```
## i is 4 total is 6
```
```
## i is 5 total is 10
```
```
## i is 6 total is 15
```
```
## i is 7 total is 21
```
```
## i is 8 total is 28
```
```
## i is 9 total is 36
```
```
## i is 10 total is 45
```
```
## i is 11 total is 55
```
```
total
```
```
## [1] 55
```
```
i
```
```
## [1] 12
```
While\-loops are a generalization of for\-loops. They tend to be most useful when
you don’t know how many iterations will be necessary. For example, suppose you
want to repeat a computation until the result falls within some range of
values.
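Here's a small sketch of that pattern: repeatedly halve a number until it drops below 1\. The number of iterations depends on the starting value, so a while\-loop is a natural fit:
```
x = 37
while (x >= 1) {
  x = x / 2
}
x
# 0.578125
```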
### 4\.3\.3 Saving Multiple Results
Loops often produce a different result for each iteration. If you want to save
more than one result, there are a few things you must do.
First, set up an index vector. The index vector should usually be congruent to
the number of iterations or the input. The `seq_along` function returns a
congruent index vector when passed a vector or list. For instance, let’s make
an index for the `numbers` vector from Section [4\.3\.1](automating-tasks.html#for-loops):
```
index = seq_along(numbers)
```
The loop will iterate over the index rather than the input, so the induction
variable will track the current iteration number. On the first iteration, the
induction variable will be 1, on the second it will be 2, and so on. Then you
can use the induction variable and indexing to get the input for each
iteration.
Second, set up an empty output vector or list. This should usually be congruent
to the input, or one element longer (the extra element comes from the initial
value). R has several functions for creating vectors. We’ve already seen a few,
but here are more:
* `logical`, `integer`, `numeric`, `complex`, and `character` to create an
empty vector with a specific type and length
* `vector` to create an empty vector, passing the type and length as arguments
* `rep` to create a vector by repeating elements of some other vector
Empty vectors are filled with `FALSE`, `0`, or `""`, depending on the type of
the vector. Here are some examples:
```
logical(3)
```
```
## [1] FALSE FALSE FALSE
```
```
numeric(4)
```
```
## [1] 0 0 0 0
```
```
rep(c(1, 2), 2)
```
```
## [1] 1 2 1 2
```
Let’s create an empty numeric vector congruent to `numbers`:
```
n = length(numbers)
result = numeric(n)
```
As with the input, you can use the induction variable and indexing to set the
output for each iteration.
Creating a vector or list in advance to store something, as we’ve just done, is
called *preallocation*. Preallocation is extremely important for efficiency in
loops. Avoid the temptation to use `c` or `append` to build up the output bit
by bit in each iteration.
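To see why, compare growing a vector element by element to filling a preallocated one. Both produce the same result, but the preallocated version avoids copying the vector on every iteration, so it's much faster for long vectors:
```
# Slow: c() copies the whole vector on every iteration.
squares = c()
for (i in 1:100) squares = c(squares, i^2)

# Fast: preallocate once, then fill by index.
squares = numeric(100)
for (i in 1:100) squares[i] = i^2
```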
Finally, write the loop, making sure to get the input and set the output.
Here’s the loop for the squared sums example:
```
for (i in index) {
prev = if (i > 1) result[i - 1] else 0
result[i] = (numbers[i] + prev)^2
}
result
```
```
## [1] 1 4 1 9
```
### 4\.3\.4 Break \& Next
The `break` keyword causes a loop to immediately exit. It only makes sense to
use `break` inside of an if\-statement.
For example, suppose we want to print each string in a vector, but stop at the
first missing value. We can do this with `break`:
```
my_messages = c("Hi", "Hello", NA, "Goodbye")
for (msg in my_messages) {
if (is.na(msg))
break
message(msg)
}
```
```
## Hi
```
```
## Hello
```
The `next` keyword causes a loop to immediately go to the next iteration. As
with `break`, it only makes sense to use `next` inside of an if\-statement.
Let’s modify the previous example so that missing values are skipped, but don’t
cause printing to stop. Here’s the code:
```
for (msg in my_messages) {
if (is.na(msg))
next
message(msg)
}
```
```
## Hi
```
```
## Hello
```
```
## Goodbye
```
These keywords work with both for\-loops and while\-loops.
### 4\.3\.5 Example: The Collatz Conjecture
The Collatz Conjecture is a conjecture in math that was introduced
in 1937 by Lothar Collatz and remains unproven today, despite being relatively
easy to explain. Here’s a statement of the conjecture:
> Start from any positive integer. If the integer is even, divide by 2\. If the
> integer is odd, multiply by 3 and add 1\.
>
>
> If the result is not 1, repeat using the result as the new starting value.
>
>
> The result will always reach 1 eventually, regardless of the starting value.
The sequences of numbers this process generates are called *Collatz sequences*.
For instance, the Collatz sequence starting from 2 is `2, 1`. The Collatz
sequence starting from 12 is `12, 6, 3, 10, 5, 16, 8, 4, 2, 1`.
As a final loop example, let’s use a while\-loop to compute Collatz sequences.
Here’s the code:
```
n = 5
i = 0
while (n != 1) {
i = i + 1
if (n %% 2 == 0) {
n = n / 2
} else {
n = 3 * n + 1
}
message(paste0(n, " "))
}
```
```
## 16
```
```
## 8
```
```
## 4
```
```
## 2
```
```
## 1
```
As of 2020, scientists have used computers to check the Collatz sequences for
every number up to approximately \\(2^{64}\\). For more details about the Collatz
Conjecture, check out [this video](https://www.youtube.com/watch?v=094y1Z2wpJg).
4\.4 Planning for Iteration
---------------------------
At first it may seem difficult to decide if and what kind of iteration to use.
Start by thinking about whether you need to do something over and over. If you
don’t, then you probably don’t need to use iteration. If you do, then try
iteration strategies in this order:
1. vectorization
2. apply functions
* Try an apply function if iterations are independent.
3. for/while\-loops
* Try a for\-loop if some iterations depend on others.
* Try a while\-loop if the number of iterations is unknown.
4. recursion (which isn’t covered here)
* Convenient for naturally recursive problems (like Fibonacci),
but often there are faster solutions.
Start by writing the code for just one iteration. Make sure that code works;
it’s easy to test code for one iteration.
When you have one iteration working, then try using the code with an iteration
strategy (you will have to make some small changes). If it doesn’t work, try to
figure out which iteration is causing the problem. One way to do this is to use
`message` to print out information. Then try to write the code for the broken
iteration, get that iteration working, and repeat this whole process.
4\.5 Exercises
--------------
*These exercises are meant to challenge you, so they’re quite difficult
compared to the previous ones. Don’t get disheartened, and if you’re able to
complete them, excellent work!*
### 4\.5\.1 Exercise
Create a function `compute_day` which uses the [Doomsday algorithm](https://en.wikipedia.org/wiki/Doomsday_rule)
to compute the day of week for any given date in the 1900s. The function’s
parameters should be `year`, `month`, and `day`. The function’s return value
should be a day of week, as a string (for example, `"Saturday"`).
*Hint: the modulo operator is `%%` in R.*
#### Learning Objectives
* Create code that only runs when a condition is satisfied
* Create custom functions in order to organize and reuse code
* Run code repeatedly in a for\-loop
* Describe the different types of loops and how to choose between them
4\.1 Conditional Expressions
----------------------------
Sometimes you’ll need code to do different things, depending on a condition.
*If\-statements* provide a way to write conditional code.
For example, suppose we want to greet one person differently from the others:
```
name = "Nick"
if (name == "Nick") {
# If name is Nick:
message("We went down the TRUE branch")
msg = "Hi Nick, nice to see you again!"
} else {
# Anything else:
msg = "Nice to meet you!"
}
```
```
## We went down the TRUE branch
```
Indent code inside of the if\-statement by 2 or 4 spaces. Indentation makes your
code easier to read.
The condition in an if\-statement has to be a scalar:
```
name = c("Nick", "Susan")
if (name == "Nick") {
msg = "Hi Nick!"
} else {
msg = "Nice to meet you!"
}
```
```
## Error in if (name == "Nick") {: the condition has length > 1
```
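If the condition you actually want to test is whether any element of the vector matches, one workaround is to collapse the vector to a scalar with the `any` function (covered in more detail in the appendix):
```
name = c("Nick", "Susan")
if (any(name == "Nick")) {
  msg = "Hi Nick!"
} else {
  msg = "Nice to meet you!"
}
msg
```
```
## [1] "Hi Nick!"
```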
You can chain together if\-statements:
```
name = "Susan"
if (name == "Nick") {
msg = "Hi Nick, nice to see you again!"
} else if (name == "Peter") {
msg = "Go away Peter, I'm busy!"
} else {
msg = "Nice to meet you!"
}
msg
```
```
## [1] "Nice to meet you!"
```
If\-statements return the value of the last expression in the evaluated block:
```
name = "Tom"
msg = if (name == "Nick") {
"Hi Nick, nice to see you again!"
} else {
"Nice to meet you!"
}
msg
```
```
## [1] "Nice to meet you!"
```
Curly braces `{ }` are optional for single\-line expressions:
```
name = "Nick"
msg = if (name == "Nick") "Hi Nick, nice to see you again!" else
"Nice to meet you!"
msg
```
```
## [1] "Hi Nick, nice to see you again!"
```
But you have to be careful if you don’t use them:
```
# NO GOOD:
msg = if (name == "Nick")
"Hi Nick, nice to see you again!"
else
"Nice to meet you!"
```
```
## Error: <text>:4:1: unexpected 'else'
## 3: "Hi Nick, nice to see you again!"
## 4: else
## ^
```
The `else` block is optional:
```
msg = "Hi"
name = "Tom"
if (name == "Nick")
msg = "Hi Nick, nice to see you again!"
msg
```
```
## [1] "Hi"
```
When there’s no `else` block, the value of the `else` block is `NULL`:
```
name = "Tom"
msg = if (name == "Nick")
"Hi Nick, nice to see you again!"
msg
```
```
## NULL
```
4\.2 Functions
--------------
The main way to interact with R is by calling functions, which was first
explained way back in Section [1\.2\.4](getting-started.html#calling-functions). Since then, you’ve
learned how to use many of R’s built\-in functions. This section explains how
you can write your own functions.
To start, let’s briefly review what functions are, and some of the jargon
associated with them. It’s useful to think of functions as factories: raw
materials (inputs) go in, products (outputs) come out. We can also represent
this visually:
Programmers use several specific terms to describe the parts and usage of
functions:
* *Parameters* are placeholder variables for inputs.
+ *Arguments* are the actual values assigned to the parameters in a call.
* The *return value* is the output.
* The *body* is the code inside.
* *Calling* a function means using a function to compute something.
Almost every command in R is a function, even the arithmetic operators and the
parentheses! You can view the body of a function by typing its name without
trailing parentheses (in contrast to how you call functions). The body of a
function is usually surrounded by curly braces `{}`, although they’re optional
if the body only contains one line of code. Indenting code inside of curly
braces by 2\-4 spaces also helps make it visually distinct from other code.
For example, let’s look at the body of the `append` function, which appends a
value to the end of a list or vector:
```
append
```
```
## function (x, values, after = length(x))
## {
## lengx <- length(x)
## if (!after)
## c(values, x)
## else if (after >= lengx)
## c(x, values)
## else c(x[1L:after], values, x[(after + 1L):lengx])
## }
## <bytecode: 0x5612956fee98>
## <environment: namespace:base>
```
Don’t worry if you can’t understand everything the `append` function’s code
does yet. It will make more sense later on, after you’ve written a few
functions of your own.
Many of R’s built\-in functions are not entirely written in R code. You can spot
these by calls to the special `.Primitive` or `.Internal` functions in their
code.
For instance, the `sum` function is not written in R code:
```
sum
```
```
## function (..., na.rm = FALSE) .Primitive("sum")
```
The `function` keyword creates a new function. Here’s the syntax:
```
function(parameter1, parameter2, ...) {
# Your code goes here
# The result goes here
}
```
A function can have any number of parameters, and will automatically return the
value of the last line of its body.
A function is a value, and like any other value, if you want to reuse it, you
need to assign it to a variable. Choosing descriptive variable names is a good
habit. For functions, that means choosing a name that describes what the
function does. It often makes sense to use verbs in function names.
Let’s write a function that gets the largest values in a vector. The inputs or
arguments to the function will be the vector in question and also the number of
values to get. Let’s call these `vec` and `n`, respectively. The result will be
a vector of the `n` largest elements. Here’s one way to write the function:
```
get_largest = function(vec, n) {
sorted = sort(vec, decreasing = TRUE)
head(sorted, n)
}
```
The name of the function, `get_largest`, describes what the function does and
includes a verb. If this function will be used frequently, a shorter name, such
as `largest`, might be preferable (compare to the `head` function).
Any time you write a function, the first thing you should do afterwards is test
that it actually works. Let’s try the `get_largest` function on a few test
cases:
```
x = c(1, 10, 20, -3)
get_largest(x, 2)
```
```
## [1] 20 10
```
```
get_largest(x, 3)
```
```
## [1] 20 10 1
```
```
y = c(-1, -2, -3)
get_largest(y, 2)
```
```
## [1] -1 -2
```
```
z = c("d", "a", "t", "a", "l", "a", "b")
get_largest(z, 3)
```
```
## [1] "t" "l" "d"
```
Notice that the parameters `vec` and `n` inside the function do not exist as
variables outside of the function:
```
vec
```
```
## Error in eval(expr, envir, enclos): object 'vec' not found
```
In general, R keeps parameters and variables you define inside of a function
separate from variables you define outside of a function. You can read more
about the specific rules for how R searches for variables in Section
[5\.2](appendix.html#variable-scope-lookup).
As a function for quickly summarizing data, `get_largest` would be more
convenient if the parameter `n` for the number of values to return was optional
(again, compare to the `head` function). You can make the parameter `n`
optional by setting a *default argument*: an argument assigned to the parameter
if no argument is assigned in the call to the function. You can use `=` to
assign default arguments to parameters when you define a function with the
`function` keyword. Here’s a new definition of the function with the default `n = 5`:
```
get_largest = function(vec, n = 5) {
sorted = sort(vec, decreasing = TRUE)
head(sorted, n)
}
```
After making this change, it’s a good idea to test the function again:
```
get_largest(x)
```
```
## [1] 20 10 1 -3
```
```
get_largest(y)
```
```
## [1] -1 -2 -3
```
```
get_largest(z)
```
```
## [1] "t" "l" "d" "b" "a"
```
### 4\.2\.1 Returning Values
We’ve already seen that a function will automatically return the value of its
last line.
The `return` keyword causes a function to return a result immediately, without
running any subsequent code in its body. It only makes sense to use `return`
from inside of an if\-statement. If your function doesn’t have any
if\-statements, you don’t need to use `return`.
For example, suppose you want the `get_largest` function to immediately return
`NULL` if the argument for `vec` is a list. Here’s the code, along with some
test cases:
```
get_largest = function(vec, n = 5) {
if (is.list(vec))
return(NULL)
sorted = sort(vec, decreasing = TRUE)
head(sorted, n)
}
get_largest(x)
```
```
## [1] 20 10 1 -3
```
```
get_largest(z)
```
```
## [1] "t" "l" "d" "b" "a"
```
```
get_largest(list(1, 2))
```
```
## NULL
```
Alternatively, you could make the function raise an error by calling the `stop`
function. Whether it makes more sense to return `NULL` or print an error
depends on how you plan to use the `get_largest` function.
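For example, here's a minimal sketch of the error\-raising alternative. The function name `get_largest_strict` and the error message are just illustrative choices:
```
get_largest_strict = function(vec, n = 5) {
  # Halt with an error instead of returning NULL:
  if (is.list(vec))
    stop("`vec` must be a vector, not a list")
  sorted = sort(vec, decreasing = TRUE)
  head(sorted, n)
}
```
Calling `get_largest_strict(list(1, 2))` then halts with that message rather than quietly returning `NULL`.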
Notice that the last line of the `get_largest` function still doesn’t use the
`return` keyword. It’s idiomatic to only use `return` when strictly necessary.
A function returns one R object, but sometimes computations have multiple
results. In that case, return the results in a vector, list, or other data
structure.
For example, let’s make a function that computes the mean and median for a
vector. We’ll return the results in a named list, although we could also use a
named vector:
```
compute_mean_med = function(x) {
m1 = mean(x)
m2 = median(x)
list(mean = m1, median = m2)
}
compute_mean_med(c(1, 2, 3, 1))
```
```
## $mean
## [1] 1.75
##
## $median
## [1] 1.5
```
The names make the result easier to understand for the caller of the function,
although they certainly aren’t required here.
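Since the result is a named list, a caller can extract each piece by name with the `$` operator:
```
result = compute_mean_med(c(1, 2, 3, 1))
result$mean
```
```
## [1] 1.75
```
```
result$median
```
```
## [1] 1.5
```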
### 4\.2\.2 Planning Your Functions
Before you write a function, it’s useful to go through several steps:
1. Write down what you want to do, in detail. It can also help to
draw a picture of what needs to happen.
2. Check whether there’s already a built\-in function. Search online and in the
R documentation.
3. Write the code to handle a simple case first. For data science
problems, use a small dataset at this step.
Let’s apply this in one final example: a function that detects leap years. A
year is a leap year if either of these conditions is true:
* It is divisible by 4 and not 100
* It is divisible by 400
That means the years 2004 and 2000 are leap years, but the year 2200 is not.
Here’s the code and a few test cases:
```
# If year is divisible by 4 and not 100 -> leap
# If year is divisible by 400 -> leap
year = 2004
is_leap = function(year) {
if (year %% 4 == 0 & year %% 100 != 0) {
leap = TRUE
} else if (year %% 400 == 0) {
leap = TRUE
} else {
leap = FALSE
}
leap
}
is_leap(400)
```
```
## [1] TRUE
```
```
is_leap(1997)
```
```
## [1] FALSE
```
Functions are the building blocks for solving larger problems. Take a
divide\-and\-conquer approach, breaking large problems into smaller steps. Use a
short function for each step. This approach makes it easier to:
* Test that each step works correctly.
* Modify, reuse, or repurpose a step.
4\.3 Loops
----------
One major benefit of using a programming language like R is that repetitive
tasks can be automated. We’ve already seen two ways to do this:
1. Vectorization, introduced in Section [2\.1\.3](data-structures.html#vectorization)
2. Apply functions, introduced in Section [3\.4](exploring-data.html#apply-functions)
Both of these are *iteration strategies*. They *iterate* over some object, and
compute something for each element. Each one of these computations is one
*iteration*. Vectorization is the most efficient iteration strategy, but only
works with vectorized functions and vectors. Apply functions are more
flexible—they work with any function and any data structure with
elements—but less efficient and less concise.
A *loop* is another iteration strategy, one that’s even more flexible than
apply functions. Besides being flexible, loops are a feature of almost all
modern programming languages, so it’s useful to understand them. In R, there
are two kinds of loops. We’ll learn both.
### 4\.3\.1 For\-loops
A *for\-loop* runs a block of code once for each element of a vector or list.
The `for` keyword creates a for\-loop. Here’s the syntax:
```
for (I in DATA) {
# Your code goes here
}
```
The variable `I` is called the *induction variable*. At the beginning of each
iteration, `I` is assigned the next element of the vector or list `DATA`. The
loop iterates once for each element of `DATA`, unless you use a keyword to exit
the loop early (more about this in Section [4\.3\.4](automating-tasks.html#break-next)). As with
if\-statements and functions, the curly braces `{ }` are only required if the
body contains multiple lines of code.
Unlike the other iteration strategies, loops do not automatically return a
result. You have complete control over the output, which means that anything
you want to save must be assigned to a variable.
For example, let’s make a loop that repeatedly adds a number to a running total
and squares the new total. We’ll use a variable `total` to keep track of the
running total as the loop iterates:
```
numbers = c(-1, 1, -3, 2)
total = 0
for (number in numbers) {
total = (total + number)^2
}
total
```
```
## [1] 9
```
Use for\-loops when some or all of the iterations depend on results from other
iterations. If the iterations are not dependent, use one of:
1. Vectorization (because it’s faster)
2. Apply functions (because they’re idiomatic)
In some cases, you can use vectorization even when the iterations are
dependent. For example, you can use vectorization to compute the sum of the
cubes of several numbers:
```
sum(numbers^3)
```
```
## [1] -19
```
### 4\.3\.2 While\-loops
A *while\-loop* runs a block of code repeatedly as long as some condition is
`TRUE`. The `while` keyword creates a while\-loop. Here’s the syntax:
```
while (CONDITION) {
# Your code goes here
}
```
The `CONDITION` should be a scalar logical value or an expression that returns
one. At the beginning of each iteration, `CONDITION` is checked, and the loop
exits if it is `FALSE`. As always, the curly braces `{ }` are only required if
the body contains multiple lines of code.
For example, suppose you want to add up numbers from 0 to 50, but stop as soon
as the total is greater than 50:
```
num50 = seq(0, 50)
total = 0
i = 1
while (total < 50) {
total = total + num50[i]
message("i is ", i, " total is ", total)
i = i + 1
}
```
```
## i is 1 total is 0
```
```
## i is 2 total is 1
```
```
## i is 3 total is 3
```
```
## i is 4 total is 6
```
```
## i is 5 total is 10
```
```
## i is 6 total is 15
```
```
## i is 7 total is 21
```
```
## i is 8 total is 28
```
```
## i is 9 total is 36
```
```
## i is 10 total is 45
```
```
## i is 11 total is 55
```
```
total
```
```
## [1] 55
```
```
i
```
```
## [1] 12
```
While\-loops are a generalization of for\-loops. They tend to be most useful when
you don’t know how many iterations will be necessary. For example, suppose you
want to repeat a computation until the result falls within some range of
values.
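Here's a small sketch of that idea: keep drawing random values until one lands between 0.4 and 0.6 (the bounds are arbitrary, and the number of draws will vary from run to run):
```
x = 0
while (x < 0.4 | x > 0.6) {
  # Draw a new candidate value on each iteration:
  x = runif(1)
}
x
```
There's no way to know in advance how many iterations this will take, which is exactly the situation where a while\-loop is the right tool.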
### 4\.3\.3 Saving Multiple Results
Loops often produce a different result for each iteration. If you want to save
more than one result, there are a few things you must do.
First, set up an index vector. The index vector should usually be congruent to
the number of iterations or the input. The `seq_along` function returns a
congruent index vector when passed a vector or list. For instance, let’s make
an index for the `numbers` vector from Section [4\.3\.1](automating-tasks.html#for-loops):
```
index = seq_along(numbers)
```
The loop will iterate over the index rather than the input, so the induction
variable will track the current iteration number. On the first iteration, the
induction variable will be 1, on the second it will be 2, and so on. Then you
can use the induction variable and indexing to get the input for each
iteration.
Second, set up an empty output vector or list. This should usually be congruent
to the input, or one element longer (the extra element comes from the initial
value). R has several functions for creating vectors. We’ve already seen a few,
but here are more:
* `logical`, `integer`, `numeric`, `complex`, and `character` to create an
empty vector with a specific type and length
* `vector` to create an empty vector with a specific type and length
* `rep` to create a vector by repeating elements of some other vector
Empty vectors are filled with `FALSE`, `0`, or `""`, depending on the type of
the vector. Here are some examples:
```
logical(3)
```
```
## [1] FALSE FALSE FALSE
```
```
numeric(4)
```
```
## [1] 0 0 0 0
```
```
rep(c(1, 2), 2)
```
```
## [1] 1 2 1 2
```
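The `vector` function works the same way, but takes the type as a string. It's the usual choice for creating an empty list of a given length:
```
vector("list", 2)
```
```
## [[1]]
## NULL
##
## [[2]]
## NULL
```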
Let’s create an empty numeric vector congruent to `numbers`:
```
n = length(numbers)
result = numeric(n)
```
As with the input, you can use the induction variable and indexing to set the
output for each iteration.
Creating a vector or list in advance to store something, as we’ve just done, is
called *preallocation*. Preallocation is extremely important for efficiency in
loops. Avoid the temptation to use `c` or `append` to build up the output bit
by bit in each iteration.
Finally, write the loop, making sure to get the input and set the output.
Here’s the loop for the squared sums example:
```
for (i in index) {
prev = if (i > 1) result[i - 1] else 0
result[i] = (numbers[i] + prev)^2
}
result
```
```
## [1] 1 4 1 9
```
### 4\.3\.4 Break \& Next
The `break` keyword causes a loop to immediately exit. It only makes sense to
use `break` inside of an if\-statement.
For example, suppose we want to print each string in a vector, but stop at the
first missing value. We can do this with `break`:
```
my_messages = c("Hi", "Hello", NA, "Goodbye")
for (msg in my_messages) {
if (is.na(msg))
break
message(msg)
}
```
```
## Hi
```
```
## Hello
```
The `next` keyword causes a loop to immediately go to the next iteration. As
with `break`, it only makes sense to use `next` inside of an if\-statement.
Let’s modify the previous example so that missing values are skipped, but don’t
cause printing to stop. Here’s the code:
```
for (msg in my_messages) {
if (is.na(msg))
next
message(msg)
}
```
```
## Hi
```
```
## Hello
```
```
## Goodbye
```
These keywords work with both for\-loops and while\-loops.
### 4\.3\.5 Example: The Collatz Conjecture
The Collatz Conjecture is a mathematical conjecture introduced in 1937 by
Lothar Collatz. It remains unproven today, despite being relatively easy to
state. Here's a statement of the conjecture:
> Start from any positive integer. If the integer is even, divide by 2\. If the
> integer is odd, multiply by 3 and add 1\.
>
>
> If the result is not 1, repeat using the result as the new starting value.
>
>
> The result will always reach 1 eventually, regardless of the starting value.
The sequences of numbers this process generates are called *Collatz sequences*.
For instance, the Collatz sequence starting from 2 is `2, 1`. The Collatz
sequence starting from 12 is `12, 6, 3, 10, 5, 16, 8, 4, 2, 1`.
As a final loop example, let’s use a while\-loop to compute Collatz sequences.
Here’s the code:
```
n = 5
i = 0
while (n != 1) {
i = i + 1
if (n %% 2 == 0) {
n = n / 2
} else {
n = 3 * n + 1
}
message(paste0(n, " "))
}
```
```
## 16
```
```
## 8
```
```
## 4
```
```
## 2
```
```
## 1
```
As of 2020, scientists have used computers to check the Collatz sequences for
every number up to approximately \\(2^{64}\\). For more details about the Collatz
Conjecture, check out [this video](https://www.youtube.com/watch?v=094y1Z2wpJg).
4\.4 Planning for Iteration
---------------------------
At first it may seem difficult to decide if and what kind of iteration to use.
Start by thinking about whether you need to do something over and over. If you
don’t, then you probably don’t need to use iteration. If you do, then try
iteration strategies in this order:
1. vectorization
2. apply functions
* Try an apply function if iterations are independent.
3. for/while\-loops
* Try a for\-loop if some iterations depend on others.
* Try a while\-loop if the number of iterations is unknown.
4. recursion (which isn’t covered here)
* Convenient for naturally recursive problems (like Fibonacci),
but often there are faster solutions.
Start by writing the code for just one iteration. Make sure that code works;
it’s easy to test code for one iteration.
When you have one iteration working, then try using the code with an iteration
strategy (you will have to make some small changes). If it doesn’t work, try to
figure out which iteration is causing the problem. One way to do this is to use
`message` to print out information. Then try to write the code for the broken
iteration, get that iteration working, and repeat this whole process.
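As a small illustration of this workflow (the data here is just a placeholder), first get one iteration working:
```
radii = c(1, 2, 3)
# Step 1: one iteration, checked by hand:
pi * radii[1]^2
```
```
## [1] 3.141593
```
Then wrap the working code in an iteration strategy:
```
# Step 2: iterate over all of the elements:
areas = numeric(length(radii))
for (i in seq_along(radii)) {
  areas[i] = pi * radii[i]^2
}
areas
```
```
## [1] 3.141593 12.566371 28.274334
```
Of course, this particular computation is vectorized (`pi * radii^2` produces the same result), so in real code you'd stop at the first strategy; the loop here is only to show the workflow.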
4\.5 Exercises
--------------
*These exercises are meant to challenge you, so they’re quite difficult
compared to the previous ones. Don’t get disheartened, and if you’re able to
complete them, excellent work!*
### 4\.5\.1 Exercise
Create a function `compute_day` which uses the [Doomsday algorithm](https://en.wikipedia.org/wiki/Doomsday_rule)
to compute the day of week for any given date in the 1900s. The function’s
parameters should be `year`, `month`, and `day`. The function’s return value
should be a day of week, as a string (for example, `"Saturday"`).
*Hint: the modulo operator is `%%` in R.*
5 Appendix
==========
5\.1 More About Comparisons
---------------------------
### 5\.1\.1 Equality
The `==` operator is the primary way to test whether two values are equal, as
explained in Section [1\.2\.3](getting-started.html#comparisons). Nonetheless, equality can be defined
in many different ways, especially when dealing with computers. As a result, R
also provides several different functions to test for different kinds of
equality. This section describes tests of equality in more detail, and also describes
some other important details of comparisons.
#### 5\.1\.1\.1 The `==` Operator
The `==` operator tests whether its two arguments have the exact same
representation as a *[binary number](https://en.wikipedia.org/wiki/Binary_number)* in your computer’s memory. Before
testing the arguments, the operator applies R’s rules for vectorization
(Section [2\.1\.3](data-structures.html#vectorization)), recycling (Section [2\.1\.4](data-structures.html#recycling)), and
implicit coercion (Section [2\.2\.2](data-structures.html#implicit-coercion)). Until you’ve fully
internalized these three rules, some results from the equality operator may
seem surprising. For example:
```
# Recycling:
c(1, 2) == c(1, 2, 1, 2)
```
```
## [1] TRUE TRUE TRUE TRUE
```
```
# Implicit coercion:
TRUE == 1
```
```
## [1] TRUE
```
```
TRUE == "TRUE"
```
```
## [1] TRUE
```
```
1 == "TRUE"
```
```
## [1] FALSE
```
The length of the result from the equality operator is usually the same as its
longest argument (with some exceptions).
#### 5\.1\.1\.2 The `all.equal` Function
The `all.equal` function tests whether its two arguments are equal up to some
acceptable difference called a *tolerance*. Computer representations for
decimal numbers are inherently imprecise, so it’s necessary to allow for very
small differences between computed numbers. For example:
```
x = 0.5 - 0.3
y = 0.3 - 0.1
# FALSE on most machines:
x == y
```
```
## [1] FALSE
```
```
# TRUE:
all.equal(x, y)
```
```
## [1] TRUE
```
The `all.equal` function does not apply R’s rules for vectorization, recycling,
or implicit coercion. The function returns `TRUE` when the arguments are equal,
and returns a string summarizing the differences when they are not. For
instance:
```
all.equal(1, c(1, 2, 1))
```
```
## [1] "Numeric: lengths (1, 3) differ"
```
The `all.equal` function is often used together with the `isTRUE` function,
which tests whether the result is `TRUE`:
```
all.equal(3, 4)
```
```
## [1] "Mean relative difference: 0.3333333"
```
```
isTRUE(all.equal(3, 4))
```
```
## [1] FALSE
```
You should generally use the `all.equal` function when you want to compare
decimal numbers.
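The acceptable difference defaults to about `1.5e-8`, and can be adjusted through the `tolerance` parameter:
```
all.equal(1, 1.001, tolerance = 0.01)
```
```
## [1] TRUE
```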
#### 5\.1\.1\.3 The `identical` Function
The `identical` function checks whether its arguments are completely identical,
including their metadata (names, dimensions, and so on). For instance:
```
x = list(a = 1)
y = list(a = 1)
z = list(1)
identical(x, y)
```
```
## [1] TRUE
```
```
identical(x, z)
```
```
## [1] FALSE
```
The `identical` function does not apply R’s rules for vectorization, recycling,
or implicit coercion. The result is always a single logical value.
You’ll generally use the `identical` function to compare non\-vector objects
such as lists or data frames. The function also works for vectors, but most of
the time the equality operator `==` is sufficient.
### 5\.1\.2 The `%in%` Operator
Another common comparison is to check whether elements of one vector are
*contained* in another vector at any position. For instance, suppose you want
to check whether `1` or `2` appear anywhere in a longer vector `x`. Here’s how
to do it:
```
x = c(3, 4, 2, 7, 3, 7)
c(1, 2) %in% x
```
```
## [1] FALSE TRUE
```
R returns `FALSE` for the `1` because there’s no `1` in `x`, and returns `TRUE`
for the `2` because there is a `2` in `x`.
Notice that this is different from comparing with the equality operator `==`.
If you use the equality operator, the shorter vector is recycled until its
length matches the longer one, and then compared element\-by\-element. For the
example, this means only the elements at odd\-numbered positions are compared to
`1`, and only the elements at even\-numbered positions are compared to `2`:
```
c(1, 2) == x
```
```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE
```
### 5\.1\.3 Summarizing Comparisons
The comparison operators are vectorized, so they compare their arguments
element\-by\-element:
```
c(1, 2, 3) < c(1, 3, -3)
```
```
## [1] FALSE TRUE FALSE
```
```
c("he", "saw", "her") == c("she", "saw", "him")
```
```
## [1] FALSE TRUE FALSE
```
What if you want to summarize whether all the elements in a vector are equal
(or unequal)? You can use the `all` function on any logical vector to get a
summary. The `all` function takes a vector of logical values and returns `TRUE`
if all of them are `TRUE`, and returns `FALSE` otherwise:
```
all(c(1, 2, 3) < c(1, 3, -3))
```
```
## [1] FALSE
```
The related `any` function returns `TRUE` if any one element is `TRUE`, and
returns `FALSE` otherwise:
```
any(c("hi", "hello") == c("hi", "bye"))
```
```
## [1] TRUE
```
### 5\.1\.4 Other Pitfalls
New programmers sometimes incorrectly think they need to append `== TRUE` to
their comparisons. This is redundant, makes your code harder to understand, and
wastes computational time. Comparisons already return logical values. If the
result of the comparison is `TRUE`, then `TRUE == TRUE` is again just `TRUE`.
If the result is `FALSE`, then `FALSE == TRUE` is again just `FALSE`. Likewise,
if you want to invert a condition, choose an appropriate operator rather than
appending `== FALSE`.
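For example, to test that a value is not equal to something, use `!=` (or negate the whole comparison with `!`) rather than comparing against `FALSE`:
```
x = 3
# Redundant:
(x == 4) == FALSE
```
```
## [1] TRUE
```
```
# Better:
x != 4
```
```
## [1] TRUE
```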
5\.2 Variable Scope \& Lookup
-----------------------------
### 5\.2\.1 Local Variables
A variable’s *scope* is the section of code where it exists and is accessible.
The `exists` function checks whether a variable is in scope:
```
exists("zz")
```
```
## [1] FALSE
```
```
zz = 3
exists("zz")
```
```
## [1] TRUE
```
When you create a function, you create a new scope. Variables defined inside of
a function are *local* to the function. Local variables cannot be accessed from
outside:
```
rescale = function(x, center, scale) {
centered = x - center
centered / scale
}
centered
```
```
## Error in eval(expr, envir, enclos): object 'centered' not found
```
```
exists("centered")
```
```
## [1] FALSE
```
Local variables are reset each time the function is called:
```
f = function() {
is_z_in_scope = exists("z")
z = 42
is_z_in_scope
}
f()
```
```
## [1] TRUE
```
```
f()
```
```
## [1] TRUE
```
### 5\.2\.2 Lexical Scoping
A function can use variables defined outside (non\-local), but only if those
variables are in scope where the function was **defined**. This property is
called *lexical scoping*.
Let’s see how this works in practice. First, we’ll define a variable `cats` and
then define a function `get_cats` in the same place (the top level, not inside
any functions). As a result, the `cats` variable is in scope inside of the
`get_cats` function:
```
cats = 3
get_cats = function() cats
get_cats()
```
```
## [1] 3
```
Now let’s define a variable `dogs` inside of a function `create_dogs`. We’ll
also define a function `get_dogs` at the top level. The variable `dogs` is not
in scope at the top level, so it’s not in scope inside of the `get_dogs`
function:
```
create_dogs = function() {
dogs = "hello"
}
get_dogs = function() dogs
create_dogs()
get_dogs()
```
```
## Error in get_dogs(): object 'dogs' not found
```
Variables defined directly in the R console are *global* and available to any
function.
Local variables *mask* (hide) non\-local variables with the same name:
```
get_parrot = function() {
parrot = 3
parrot
}
parrot = 42
get_parrot()
```
```
## [1] 3
```
There’s one exception to this rule. We often use variables that refer to
functions in calls:
```
mean(c(1, 2, 3))
```
```
## [1] 2
```
In this case, the variable must refer to a function, so R ignores local
variables that aren’t functions. For example:
```
my_mean = function() {
mean = 0
mean(c(1, 2, 3))
}
my_mean()
```
```
## [1] 2
```
```
my_get_cats = function() {
get_cats = 10
get_cats()
}
my_get_cats()
```
```
## [1] 3
```
### 5\.2\.3 Dynamic Lookup
Variable lookup happens when a function is **called**, not when it’s defined.
This is called *dynamic lookup*.
For example, the result from `get_cats`, which accesses the global variable
`cats`, changes if we change the value of `cats`:
```
cats = 10
get_cats()
```
```
## [1] 10
```
```
cats = 20
get_cats()
```
```
## [1] 20
```
### 5\.2\.4 Summary
This section covered a lot of details about R’s rules for variable scope and
lookup. Here are the key takeaways:
* Function definitions (or `local()`) create a new scope.
* Local variables
+ Are private
+ Get reset for each call
+ Mask non\-local variables (exception: function calls)
* *Lexical scoping*: where a function is **defined** determines which non\-local
variables are in scope.
* *Dynamic lookup*: when a function is **called** determines values of
non\-local variables.
5\.3 String Processing
----------------------
So far, we’ve mostly worked with numbers or categories that are ready to use
for data analysis. In practice, data sets often require some cleaning before or
during data analysis. One common data cleaning task is editing or extracting
parts of strings.
We’ll use the stringr package to process strings. Like ggplot2 (Section
[3\.3](exploring-data.html#data-visualization)), the package is part of the [Tidyverse](https://www.tidyverse.org/). R
also has built\-in functions for string processing. The main advantage of
stringr is that its functions use a common set of parameters, so they’re easier
to learn and remember.
stringr has detailed [documentation](https://stringr.tidyverse.org/) and also a
[cheatsheet](https://github.com/rstudio/cheatsheets/blob/master/strings.pdf).
The first time you use stringr, you’ll have to install it with
`install.packages` (the same as any other package). Then you can load the
package with the `library` function:
```
# install.packages("stringr")
library("stringr")
```
The typical syntax of a stringr function is:
```
str_NAME(string, pattern, ...)
```
Where:
* `NAME` describes what the function does
* `string` is the string to search within or transform
* `pattern` is the pattern to search for
* `...` is additional, function\-specific arguments
The `str_detect` function detects whether the pattern appears within the
string. Here’s an example:
```
str_detect("hello", "el")
```
```
## [1] TRUE
```
```
str_detect("hello", "ol")
```
```
## [1] FALSE
```
Most of the stringr functions are vectorized in the `string` parameter. For
instance:
```
str_detect(c("hello", "goodbye", "lo"), "lo")
```
```
## [1] TRUE FALSE TRUE
```
Most of the stringr functions also have support for [*regular
expressions*](https://en.wikipedia.org/wiki/Regular_expression), a powerful language for describing patterns. Several
punctuation characters, such as `.` and `?` have special meanings in the
regular expressions language. You can disable these special meanings by putting
the pattern in a call to `fixed`:
```
str_detect("a", ".")
```
```
## [1] TRUE
```
```
str_detect("a", fixed("."))
```
```
## [1] FALSE
```
You can learn more about regular expressions [here](https://r4ds.had.co.nz/strings.html#matching-patterns-with-regular-expressions).
There are a lot of stringr functions. We’ll focus on two that are especially
important, and some of their variants:
* `str_split`
* `str_replace`
You can find a complete list of stringr functions, with examples, in the
documentation.
### 5\.3\.1 Splitting Strings
The `str_split` function splits the string at each position that matches the
pattern. The characters that match are thrown away.
For example, suppose we want to split a sentence into words. Since there’s a
space between each word, we can use a space as the pattern:
```
x = "The students in this workshop are great!"
result = str_split(x, " ")
result
```
```
## [[1]]
## [1] "The" "students" "in" "this" "workshop" "are" "great!"
```
The `str_split` function always returns a list with one element for each input
string. Here the list only has one element because `x` only has one element. We
can get the first element with:
```
result[[1]]
```
```
## [1] "The" "students" "in" "this" "workshop" "are" "great!"
```
We have to use the extraction operator `[[` here because `result` is a list (for a
vector, we could use the indexing operator `[` instead). Notice that in the
printout for `result`, R gives us a hint that we should use `[[` by printing
`[[1]]`.
To see why the function returns a list, consider what happens if we try to
split two different sentences at once:
```
x = c(x, "Are you listening?")
result = str_split(x, " ")
result[[1]]
```
```
## [1] "The" "students" "in" "this" "workshop" "are" "great!"
```
```
result[[2]]
```
```
## [1] "Are" "you" "listening?"
```
Each sentence has a different number of words, so the vectors in the result
have different lengths. So a list is the only way to store both.
The `str_split_fixed` function is almost the same as `str_split`, but takes a
third argument for the maximum number of splits to make. Because the number of
splits is fixed, the function can return the result in a matrix instead of a
list. For example:
```
str_split_fixed(x, " ", 3)
```
```
## [,1] [,2] [,3]
## [1,] "The" "students" "in this workshop are great!"
## [2,] "Are" "you" "listening?"
```
The `str_split_fixed` function is often more convenient than `str_split`
because the `n`th piece of each input string is just the `n`th column of the
result.
For example, suppose we want to get the area code from some phone numbers:
```
phones = c("717-555-3421", "629-555-8902", "903-555-6781")
result = str_split_fixed(phones, "-", 3)
result[, 1]
```
```
## [1] "717" "629" "903"
```
### 5\.3\.2 Replacing Parts of Strings
The `str_replace` function replaces the pattern the first time it appears in
the string. The replacement goes in the third argument.
For instance, suppose we want to change the word `"dog"` to `"cat"`:
```
x = c("dogs are great, dogs are fun", "dogs are fluffy")
str_replace(x, "dog", "cat")
```
```
## [1] "cats are great, dogs are fun" "cats are fluffy"
```
The `str_replace_all` function replaces the pattern every time it appears in
the string:
```
str_replace_all(x, "dog", "cat")
```
```
## [1] "cats are great, cats are fun" "cats are fluffy"
```
We can also use the `str_replace` and `str_replace_all` functions to delete
part of a string by setting the replacement to the empty string `""`.
For example, suppose we want to delete the comma:
```
str_replace(x, ",", "")
```
```
## [1] "dogs are great dogs are fun" "dogs are fluffy"
```
In general, stringr functions with the `_all` suffix affect all matches.
Functions without `_all` only affect the first match.
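For instance, with a string that contains several commas (a new example string `y`), only `str_replace_all` deletes all of them:
```
y = "1,000,000"
str_replace(y, ",", "")
```
```
## [1] "1000,000"
```
```
str_replace_all(y, ",", "")
```
```
## [1] "1000000"
```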
5\.4 Date Processing
--------------------
Besides strings, dates and times are another kind of data that require special
attention to prepare for analysis. This is especially important if you want to
do anything that involves sorting dates, like making a line plot with dates on
one axis. Dates may not be sorted correctly if they haven’t been converted to
one of R’s date classes.
There are several built\-in functions and also many packages for date processing. As
with visualization and string processing, the Tidyverse packages have the best
combination of simple design and clear documentation. There are three Tidyverse
packages for processing dates and times:
* [lubridate](https://lubridate.tidyverse.org/), the primary package for working with dates and times
* [hms](https://hms.tidyverse.org/), a package specifically for working with times
* [clock](https://clock.r-lib.org/), a new package for working with dates and times
We’ll focus on the lubridate package. As always, you’ll have to install the
package if you haven’t already, and then load it:
```
# install.packages("lubridate")
library("lubridate")
```
```
##
## Attaching package: 'lubridate'
```
```
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
```
The most common task is to convert a string into a date or time class. For
instance, when you load a data set, you might have dates that look like this:
```
dates = c("Jan 10, 2021", "Sep 3, 2018", "Feb 28, 1982")
dates
```
```
## [1] "Jan 10, 2021" "Sep 3, 2018" "Feb 28, 1982"
```
These are strings, so it’s relatively difficult to sort the dates, do
arithmetic on them, or extract just one part (such as the year). There are
several lubridate functions to automatically convert strings into dates. They
are named with one letter for each part of the date. For instance, the dates in
the example have the month (m), then the day (d), and then the year (y), so we
can use the `mdy` function:
```
result = mdy(dates)
result
```
```
## [1] "2021-01-10" "2018-09-03" "1982-02-28"
```
```
class(result)
```
```
## [1] "Date"
```
Notice that the dates now have class `Date`, one of R’s built\-in classes for
representing dates, and that they print differently. You can find a full list
of the automatic string to date conversion functions in the lubridate
documentation.
Occasionally, a date string may have a format that lubridate can’t convert
automatically. In that case, you can use the `fast_strptime` function to
describe the format in detail. At a minimum, the function requires two
arguments: the vector of strings to convert and a format string.
The format string describes the format of the dates, and is based on the syntax
of `strptime`, a function provided by many programming languages for converting
strings to dates (including R). In a format string, a percent sign `%` followed
by a character is called a *specification* and has a special meaning. Here are
a few of the most useful ones:
| Specification | Description | January 29, 2015 |
| --- | --- | --- |
| `%Y` | 4\-digit year | 2015 |
| `%y` | 2\-digit year | 15 |
| `%m` | 2\-digit month | 01 |
| `%B` | full month name | January |
| `%b` | short month name | Jan |
| `%d` | day of month | 29 |
| `%%` | literal % | % |
You can find a complete list in `?fast_strptime`. Other characters in the
format string do not have any special meaning. Write the format string so that
it matches the format of the dates you want to convert.
For example, let’s try converting an unusual time format:
```
odd_time = "6 minutes, 32 seconds after 10 o'clock"
fast_strptime(odd_time, "%M minutes, %S seconds after %H o'clock")
```
```
## [1] "0-01-01 10:06:32 UTC"
```
R usually represents dates with the class `Date`, and date\-times with the
classes `POSIXct` and `POSIXlt`. The difference between the two date\-time
classes is somewhat technical, but you can read more about it in `?POSIXlt`.
There is no built\-in class to represent times alone, which is why the result in
the example above includes a date. Nonetheless, the hms package provides the
`hms` class to represent times without dates.
Once you’ve converted a string to a date, the lubridate package provides a
variety of functions to get or set the parts individually. Here are a few
examples:
```
day(result)
```
```
## [1] 10 3 28
```
```
month(result)
```
```
## [1] 1 9 2
```
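These functions can also be used on the left\-hand side of an assignment to set parts of a date. For example, here's a quick sketch that changes the year of every date in `result`:
```
year(result) = 2000
result
```
```
## [1] "2000-01-10" "2000-09-03" "2000-02-28"
```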
You can find a complete list in the lubridate documentation.
5\.1 More About Comparisons
---------------------------
### 5\.1\.1 Equality
The `==` operator is the primary way to test whether two values are equal, as
explained in Section [1\.2\.3](getting-started.html#comparisons). Nonetheless, equality can be defined
in many different ways, especially when dealing with computers. As a result, R
also provides several different functions to test for different kinds of
equality. This describes tests of equality in more detail, and also describes
some other important details of comparisons.
#### 5\.1\.1\.1 The `==` Operator
The `==` operator tests whether its two arguments have the exact same
representation as a *[binary number](https://en.wikipedia.org/wiki/Binary_number)* in your computer’s memory. Before
testing the arguments, the operator applies R’s rules for vectorization
(Section [2\.1\.3](data-structures.html#vectorization)), recycling (Section [2\.1\.4](data-structures.html#recycling)), and
implicit coercion (Section [2\.2\.2](data-structures.html#implicit-coercion)). Until you’ve fully
internalized these three rules, some results from the equality operator may
seem surprising. For example:
```
# Recycling:
c(1, 2) == c(1, 2, 1, 2)
```
```
## [1] TRUE TRUE TRUE TRUE
```
```
# Implicit coercion:
TRUE == 1
```
```
## [1] TRUE
```
```
TRUE == "TRUE"
```
```
## [1] TRUE
```
```
1 == "TRUE"
```
```
## [1] FALSE
```
The length of the result from the equality operator is usually the same as its
longest argument (with some exceptions).
#### 5\.1\.1\.2 The `all.equal` Function
The `all.equal` function tests whether its two arguments are equal up to some
acceptable difference called a *tolerance*. Computer representations for
decimal numbers are inherently imprecise, so it’s necessary to allow for very
small differences between computed numbers. For example:
```
x = 0.5 - 0.3
y = 0.3 - 0.1
# FALSE on most machines:
x == y
```
```
## [1] FALSE
```
```
# TRUE:
all.equal(x, y)
```
```
## [1] TRUE
```
The `all.equal` function does not apply R’s rules for vectorization, recycling,
or implicit coercion. The function returns `TRUE` when the arguments are equal,
and returns a string summarizing the differences when they are not. For
instance:
```
all.equal(1, c(1, 2, 1))
```
```
## [1] "Numeric: lengths (1, 3) differ"
```
The `all.equal` function is often used together with the `isTRUE` function,
which tests whether the result is `TRUE`:
```
all.equal(3, 4)
```
```
## [1] "Mean relative difference: 0.3333333"
```
```
isTRUE(all.equal(3, 4))
```
```
## [1] FALSE
```
You should generally use the `all.equal` function when you want to compare
decimal numbers.
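If the default tolerance is too strict for your data, you can adjust it with
the `tolerance` parameter. Here's a quick sketch (the specific numbers are made
up for illustration):
```
# Differences smaller than the tolerance are ignored:
isTRUE(all.equal(1.00, 1.05, tolerance = 0.1))
```
```
## [1] TRUE
```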
#### 5\.1\.1\.3 The `identical` Function
The `identical` function checks whether its arguments are completely identical,
including their metadata (names, dimensions, and so on). For instance:
```
x = list(a = 1)
y = list(a = 1)
z = list(1)
identical(x, y)
```
```
## [1] TRUE
```
```
identical(x, z)
```
```
## [1] FALSE
```
The `identical` function does not apply R’s rules for vectorization, recycling,
or implicit coercion. The result is always a single logical value.
You’ll generally use the `identical` function to compare non\-vector objects
such as lists or data frames. The function also works for vectors, but most of
the time the equality operator `==` is sufficient.
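To see the difference on vectors, compare the element\-by\-element result of
`==` with the single result of `identical`:
```
c(1, 2) == c(1, 2)
```
```
## [1] TRUE TRUE
```
```
identical(c(1, 2), c(1, 2))
```
```
## [1] TRUE
```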
### 5\.1\.2 The `%in%` Operator
Another common comparison is to check whether elements of one vector are
*contained* in another vector at any position. For instance, suppose you want
to check whether `1` or `2` appear anywhere in a longer vector `x`. Here’s how
to do it:
```
x = c(3, 4, 2, 7, 3, 7)
c(1, 2) %in% x
```
```
## [1] FALSE TRUE
```
R returns `FALSE` for the `1` because there’s no `1` in `x`, and returns `TRUE`
for the `2` because there is a `2` in `x`.
Notice that this is different from comparing with the equality operator `==`.
If you use the equality operator, the shorter vector is recycled until its
length matches the longer one, and then compared element\-by\-element. For the
example, this means only the elements at odd\-numbered positions are compared to
`1`, and only the elements at even\-numbered positions are compared to `2`:
```
c(1, 2) == x
```
```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE
```
### 5\.1\.3 Summarizing Comparisons
The comparison operators are vectorized, so they compare their arguments
element\-by\-element:
```
c(1, 2, 3) < c(1, 3, -3)
```
```
## [1] FALSE TRUE FALSE
```
```
c("he", "saw", "her") == c("she", "saw", "him")
```
```
## [1] FALSE TRUE FALSE
```
What if you want to summarize whether all the elements in a vector are equal
(or unequal)? You can use the `all` function on any logical vector to get a
summary. The `all` function takes a vector of logical values and returns `TRUE`
if all of them are `TRUE`, and returns `FALSE` otherwise:
```
all(c(1, 2, 3) < c(1, 3, -3))
```
```
## [1] FALSE
```
The related `any` function returns `TRUE` if any one element is `TRUE`, and
returns `FALSE` otherwise:
```
any(c("hi", "hello") == c("hi", "bye"))
```
```
## [1] TRUE
```
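The `all` and `any` functions also combine naturally with other comparisons.
For instance, reusing the vector `x` from the `%in%` example above, here's one
way to check whether at least one of several values appears in `x`:
```
any(c(1, 2) %in% x)
```
```
## [1] TRUE
```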
### 5\.1\.4 Other Pitfalls
New programmers sometimes incorrectly think they need to append `== TRUE` to
their comparisons. This is redundant, makes your code harder to understand, and
wastes computational time. Comparisons already return logical values. If the
result of the comparison is `TRUE`, then `TRUE == TRUE` is again just `TRUE`.
If the result is `FALSE`, then `FALSE == TRUE` is again just `FALSE`. Likewise,
if you want to invert a condition, choose an appropriate operator rather than
appending `== FALSE`.
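For example, to invert a comparison, negate it with the `!` operator:
```
x = c(1, 2, 3)
# Instead of (x > 2) == FALSE, write:
!(x > 2)
```
```
## [1] TRUE TRUE FALSE
```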
5\.2 Variable Scope \& Lookup
-----------------------------
### 5\.2\.1 Local Variables
A variable’s *scope* is the section of code where it exists and is accessible.
The `exists` function checks whether a variable is in scope:
```
exists("zz")
```
```
## [1] FALSE
```
```
zz = 3
exists("zz")
```
```
## [1] TRUE
```
When you create a function, you create a new scope. Variables defined inside of
a function are *local* to the function. Local variables cannot be accessed from
outside:
```
rescale = function(x, center, scale) {
centered = x - center
centered / scale
}
centered
```
```
## Error in eval(expr, envir, enclos): object 'centered' not found
```
```
exists("centered")
```
```
## [1] FALSE
```
Local variables are reset each time the function is called:
```
f = function() {
is_z_in_scope = exists("z")
z = 42
is_z_in_scope
}
f()
```
```
## [1] FALSE
```
```
f()
```
```
## [1] FALSE
```
### 5\.2\.2 Lexical Scoping
A function can use variables defined outside (non\-local), but only if those
variables are in scope where the function was **defined**. This property is
called *lexical scoping*.
Let’s see how this works in practice. First, we’ll define a variable `cats` and
then define a function `get_cats` in the same place (the top level, not inside
any functions). As a result, the `cats` variable is in scope inside of the
`get_cats` function:
```
cats = 3
get_cats = function() cats
get_cats()
```
```
## [1] 3
```
Now let’s define a variable `dogs` inside of a function `create_dogs`. We’ll
also define a function `get_dogs` at the top level. The variable `dogs` is not
in scope at the top level, so it’s not in scope inside of the `get_dogs`
function:
```
create_dogs = function() {
dogs = "hello"
}
get_dogs = function() dogs
create_dogs()
get_dogs()
```
```
## Error in get_dogs(): object 'dogs' not found
```
Variables defined directly in the R console are *global* and available to any
function.
Local variables *mask* (hide) non\-local variables with the same name:
```
get_parrot = function() {
parrot = 3
parrot
}
parrot = 42
get_parrot()
```
```
## [1] 3
```
There’s one exception to this rule. We often use variables that refer to
functions in calls:
```
mean(c(1, 2, 3))
```
```
## [1] 2
```
In this case, the variable must refer to a function, so R ignores local
variables that aren’t functions. For example:
```
my_mean = function() {
mean = 0
mean(c(1, 2, 3))
}
my_mean()
```
```
## [1] 2
```
```
my_get_cats = function() {
get_cats = 10
get_cats()
}
my_get_cats()
```
```
## [1] 3
```
### 5\.2\.3 Dynamic Lookup
Variable lookup happens when a function is **called**, not when it’s defined.
This is called *dynamic lookup*.
For example, the result from `get_cats`, which accesses the global variable
`cats`, changes if we change the value of `cats`:
```
cats = 10
get_cats()
```
```
## [1] 10
```
```
cats = 20
get_cats()
```
```
## [1] 20
```
### 5\.2\.4 Summary
This section covered a lot of details about R’s rules for variable scope and
lookup. Here are the key takeaways:
* Function definitions (or `local()`) create a new scope.
* Local variables
+ Are private
+ Get reset for each call
+ Mask non\-local variables (exception: function calls)
* *Lexical scoping*: where a function is **defined** determines which non\-local
variables are in scope.
* *Dynamic lookup*: when a function is **called** determines values of
non\-local variables.
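To tie these rules together, here's a short sketch (the variables `n`,
`show_n`, and `bump_n` are made up for illustration) that exercises all of them
at once:
```
n = 1
show_n = function() n   # lexical scoping: the top level n is in scope here
bump_n = function() {
  n = n + 100           # this n is local and masks the top level n
  n
}
bump_n()
```
```
## [1] 101
```
```
show_n()   # the top level n is unchanged by the call to bump_n
```
```
## [1] 1
```
```
n = 5
show_n()   # dynamic lookup: the value of n is looked up at call time
```
```
## [1] 5
```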
5\.3 String Processing
----------------------
So far, we’ve mostly worked with numbers or categories that are ready to use
for data analysis. In practice, data sets often require some cleaning before or
during data analysis. One common data cleaning task is editing or extracting
parts of strings.
We’ll use the stringr package to process strings. Like ggplot2 (Section
[3\.3](exploring-data.html#data-visualization)), the package is part of the [Tidyverse](https://www.tidyverse.org/). R
also has built\-in functions for string processing. The main advantage of
stringr is that its functions use a common set of parameters, so they’re easier
to learn and remember.
stringr has detailed [documentation](https://stringr.tidyverse.org/) and also a
[cheatsheet](https://github.com/rstudio/cheatsheets/blob/master/strings.pdf).
The first time you use stringr, you’ll have to install it with
`install.packages` (the same as any other package). Then you can load the
package with the `library` function:
```
# install.packages("stringr")
library("stringr")
```
The typical syntax of a stringr function is:
```
str_NAME(string, pattern, ...)
```
Where:
* `NAME` describes what the function does
* `string` is the string to search within or transform
* `pattern` is the pattern to search for
* `...` is additional, function\-specific arguments
The `str_detect` function detects whether the pattern appears within the
string. Here’s an example:
```
str_detect("hello", "el")
```
```
## [1] TRUE
```
```
str_detect("hello", "ol")
```
```
## [1] FALSE
```
Most of the stringr functions are vectorized in the `string` parameter. For
instance:
```
str_detect(c("hello", "goodbye", "lo"), "lo")
```
```
## [1] TRUE FALSE TRUE
```
Most of the stringr functions also have support for [*regular
expressions*](https://en.wikipedia.org/wiki/Regular_expression), a powerful language for describing patterns. Several
punctuation characters, such as `.` and `?`, have special meanings in the
regular expressions language. You can disable these special meanings by putting
the pattern in a call to `fixed`:
```
str_detect("a", ".")
```
```
## [1] TRUE
```
```
str_detect("a", fixed("."))
```
```
## [1] FALSE
```
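For instance, in a regular expression the `.` matches any single character, so
the pattern `c.t` matches `"cat"` and `"cot"` but not `"cart"`:
```
str_detect(c("cat", "cot", "cart"), "c.t")
```
```
## [1] TRUE TRUE FALSE
```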
You can learn more about regular expressions [here](https://r4ds.had.co.nz/strings.html#matching-patterns-with-regular-expressions).
There are a lot of stringr functions. We’ll focus on two that are especially
important, and some of their variants:
* `str_split`
* `str_replace`
You can find a complete list of stringr functions, with examples, in the
documentation.
### 5\.3\.1 Splitting Strings
The `str_split` function splits the string at each position that matches the
pattern. The characters that match are thrown away.
For example, suppose we want to split a sentence into words. Since there’s a
space between each word, we can use a space as the pattern:
```
x = "The students in this workshop are great!"
result = str_split(x, " ")
result
```
```
## [[1]]
## [1] "The" "students" "in" "this" "workshop" "are" "great!"
```
The `str_split` function always returns a list with one element for each input
string. Here the list only has one element because `x` only has one element. We
can get the first element with:
```
result[[1]]
```
```
## [1] "The" "students" "in" "this" "workshop" "are" "great!"
```
We have to use the extraction operator `[[` here because `result` is a list
(for a vector, we could use the indexing operator `[` instead). Notice that in the
printout for `result`, R gives us a hint that we should use `[[` by printing
`[[1]]`.
To see why the function returns a list, consider what happens if we try to
split two different sentences at once:
```
x = c(x, "Are you listening?")
result = str_split(x, " ")
result[[1]]
```
```
## [1] "The" "students" "in" "this" "workshop" "are" "great!"
```
```
result[[2]]
```
```
## [1] "Are" "you" "listening?"
```
Each sentence has a different number of words, so the vectors in the result
have different lengths. A list is the only way to store both.
The `str_split_fixed` function is almost the same as `str_split`, but takes a
third argument for the number of pieces to return. Because the number of
pieces is fixed, the function can return the result in a matrix instead of a
list. For example:
```
str_split_fixed(x, " ", 3)
```
```
## [,1] [,2] [,3]
## [1,] "The" "students" "in this workshop are great!"
## [2,] "Are" "you" "listening?"
```
The `str_split_fixed` function is often more convenient than `str_split`
because the `n`th piece of each input string is just the `n`th column of the
result.
For example, suppose we want to get the area code from some phone numbers:
```
phones = c("717-555-3421", "629-555-8902", "903-555-6781")
result = str_split_fixed(phones, "-", 3)
result[, 1]
```
```
## [1] "717" "629" "903"
```
### 5\.3\.2 Replacing Parts of Strings
The `str_replace` function replaces the pattern the first time it appears in
the string. The replacement goes in the third argument.
For instance, suppose we want to change the word `"dog"` to `"cat"`:
```
x = c("dogs are great, dogs are fun", "dogs are fluffy")
str_replace(x, "dog", "cat")
```
```
## [1] "cats are great, dogs are fun" "cats are fluffy"
```
The `str_replace_all` function replaces the pattern every time it appears in
the string:
```
str_replace_all(x, "dog", "cat")
```
```
## [1] "cats are great, cats are fun" "cats are fluffy"
```
We can also use the `str_replace` and `str_replace_all` functions to delete
part of a string by setting the replacement to the empty string `""`.
For example, suppose we want to delete the comma:
```
str_replace(x, ",", "")
```
```
## [1] "dogs are great dogs are fun" "dogs are fluffy"
```
In general, stringr functions with the `_all` suffix affect all matches.
Functions without `_all` only affect the first match.
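For instance, to delete every comma rather than just the first, use the `_all`
variant:
```
str_replace_all("a, b, c", ",", "")
```
```
## [1] "a b c"
```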
5\.4 Date Processing
--------------------
Besides strings, dates and times are another kind of data that require special
attention to prepare for analysis. This is especially important if you want to
do anything that involves sorting dates, like making a line plot with dates on
one axis. Dates may not be sorted correctly if they haven’t been converted to
one of R’s date classes.
There are several built\-in functions and also many packages for date processing. As
with visualization and string processing, the Tidyverse packages have the best
combination of simple design and clear documentation. There are three Tidyverse
packages for processing dates and times:
* [lubridate](https://lubridate.tidyverse.org/), the primary package for working with dates and times
* [hms](https://hms.tidyverse.org/), a package specifically for working with times
* [clock](https://clock.r-lib.org/), a new package for working with dates and times
We’ll focus on the lubridate package. As always, you’ll have to install the
package if you haven’t already, and then load it:
```
# install.packages("lubridate")
library("lubridate")
```
```
##
## Attaching package: 'lubridate'
```
```
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
```
The most common task is to convert a string into a date or time class. For
instance, when you load a data set, you might have dates that look like this:
```
dates = c("Jan 10, 2021", "Sep 3, 2018", "Feb 28, 1982")
dates
```
```
## [1] "Jan 10, 2021" "Sep 3, 2018" "Feb 28, 1982"
```
These are strings, so it’s relatively difficult to sort the dates, do
arithmetic on them, or extract just one part (such as the year). There are
several lubridate functions to automatically convert strings into dates. They
are named with one letter for each part of the date. For instance, the dates in
the example have the month (m), then the day (d), and then the year (y), so we
can use the `mdy` function:
```
result = mdy(dates)
result
```
```
## [1] "2021-01-10" "2018-09-03" "1982-02-28"
```
```
class(result)
```
```
## [1] "Date"
```
Notice that the dates now have class `Date`, one of R’s built\-in classes for
representing dates, and that they print differently. You can find a full list
of the automatic string to date conversion functions in the lubridate
documentation.
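For instance, dates written with the day first can be converted with the `dmy`
function:
```
dmy("28/2/1982")
```
```
## [1] "1982-02-28"
```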
Occasionally, a date string may have a format that lubridate can’t convert
automatically. In that case, you can use the `fast_strptime` function to
describe the format in detail. At a minimum, the function requires two
arguments: the vector of strings to convert and a format string.
The format string describes the format of the dates, and is based on the syntax
of `strptime`, a function provided by many programming languages for converting
strings to dates (including R). In a format string, a percent sign `%` followed
by a character is called a *specification* and has a special meaning. Here are
a few of the most useful ones:
| Specification | Description | January 29, 2015 |
| --- | --- | --- |
| `%Y` | 4\-digit year | 2015 |
| `%y` | 2\-digit year | 15 |
| `%m` | 2\-digit month | 01 |
| `%B` | full month name | January |
| `%b` | short month name | Jan |
| `%d` | day of month | 29 |
| `%%` | literal % | % |
You can find a complete list in `?fast_strptime`. Other characters in the
format string do not have any special meaning. Write the format string so that
it matches the format of the dates you want to convert.
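Before tackling a harder case, here's a sketch that converts the example date
from the table above (this assumes an English locale, since the C parser behind
`fast_strptime` understands only English month names):
```
fast_strptime("January 29, 2015", "%B %d, %Y")
```
```
## [1] "2015-01-29 UTC"
```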
For example, let’s try converting an unusual time format:
```
odd_time = "6 minutes, 32 seconds after 10 o'clock"
fast_strptime(odd_time, "%M minutes, %S seconds after %H o'clock")
```
```
## [1] "0-01-01 10:06:32 UTC"
```
R usually represents dates with the class `Date`, and date\-times with the
classes `POSIXct` and `POSIXlt`. The difference between the two date\-time
classes is somewhat technical, but you can read more about it in `?POSIXlt`.
There is no built\-in class to represent times alone, which is why the result in
the example above includes a date. Nonetheless, the hms package provides the
`hms` class to represent times without dates.
Once you’ve converted a string to a date, the lubridate package provides a
variety of functions to get or set the parts individually. Here are a few
examples:
```
day(result)
```
```
## [1] 10 3 28
```
```
month(result)
```
```
## [1] 1 9 2
```
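These functions can also *set* the parts of a date. As a sketch, here's how you
might get the years and then change the day of month on a copy of `result`:
```
year(result)
```
```
## [1] 2021 2018 1982
```
```
# Assigning to an accessor sets that part of the date:
result2 = result
day(result2) = 1
result2
```
```
## [1] "2021-01-01" "2018-09-01" "1982-02-01"
```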
You can find a complete list in the lubridate documentation.